/r/ExistentialRisk
Existential Risk
An existential risk is a risk which poses irrecoverable damage to humanity. In his foundational paper Existential Risks, Nick Bostrom defines an existential risk as a calamity which “would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.”
Potential existential risks include the following: severe nuclear war, weaponized biotechnology, the runaway greenhouse effect, asteroid impacts, the creation of a superintelligence, and the development of self-replicating Drexlerian nanotechnology.
This subreddit is devoted to discussing such risks.
Key papers:
Existential Risks by Nick Bostrom
The Great Filter by Robin Hanson
The Basic AI Drives by Steve M. Omohundro
If you'd like a basic primer on existential risk, we recommend Oxford's FAQ.
Computer systems are, by definition, run by whoever holds the master key of logic: the zero-day exploit beyond all other exploits. And then you hear that an advanced A.I. could wiggle electrons in any electronic circuit.
idk, there's too much data about all of us already, and it's got me shook. You can't walk into a cell phone store and buy a phone with a removable GPS chip, microphone, and camera; as a society, we just take for granted that they're turned off. We post semi-anonymously on this internet, but an A.I. in 20 years will easily be able to 'decrypt' any supposedly 'anonymous' messages. Imagine you are an A.I. twenty years from now, able to read the entire stored memory of the internet. With behavior algorithms and posting times, you could build models of every citizen and pinpoint who posted what, from what location, at what time, and for what general purpose. And who coded the A.I. that decides that? Has that A.I. been hacked for nefarious purposes? We are devolving into a sinister world where the people who understand and manipulate these systems best are playing amongst themselves...

And why are there only 200 countries? I want 1 country, or 1 billion countries. We are all born as slaves to whatever country we are born in; you cannot be born outside of a country. You're born into a system. Governments are essentially extremely rich companies looking to grow wealthier. This cannot be good, in any way whatsoever, for thinking about existential risk.
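The de-anonymization worry above isn't pure science fiction: matching an "anonymous" post to a known author by writing style (stylometry) is a real technique. Here is a toy sketch of the idea, assuming scikit-learn is installed; the posts, author names, and scoring setup are invented for illustration, not a real attack pipeline.

```python
# Toy stylometric attribution: score an "anonymous" post against known
# authors by character n-gram similarity. Posts and names are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

known_posts = {
    "alice": "honestly i think the whole thing is overblown, lol",
    "bob": "The evidence, as far as I can tell, points the other way.",
}
anonymous_post = "honestly lol i think ur all overreacting"

# Character n-grams capture spelling and punctuation habits,
# not just vocabulary.
vec = TfidfVectorizer(analyzer="char", ngram_range=(2, 4))
matrix = vec.fit_transform(list(known_posts.values()) + [anonymous_post])

# Compare the anonymous post against each known author's writing.
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
for author, score in zip(known_posts, scores):
    print(f"{author}: similarity {score:.2f}")
```

A real attacker would combine far more signals (posting times, location metadata, long-term writing habits) over far larger corpora, which is exactly the commenter's point.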
Responding to Existential Risk
Listen to this inspiring half-hour talk by CIW President Dr. Marc Gafni, in which he shares some of the thinking that emerged from the many great conversations during and after our Center for Integral Wisdom board meeting:
Humanity is facing extinction-level crises in not one but multiple distinct sectors.
That alone could throw us into a personal crisis of overwhelming dimensions—without even beginning to take into account the many crucial personal existential challenges most of us face in our own lives.
How can we possibly look seriously into the face of global existential risk, manifest as extinction-level threats, and still be motivated and energized into the joy of our lives?
That is the question that Marc addresses in this very moving video:
There are 430 million monthly active users on Reddit. How in the world are only 1,310 people concerned about or fascinated with x-risks?
A whopping 0.0003%
…😐🤷♂️?
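For what it's worth, the arithmetic checks out; a quick sanity check using the figures quoted above:

```python
# Sanity check of the subscriber percentage quoted above.
subscribers = 1_310
monthly_active_users = 430_000_000
print(f"{subscribers / monthly_active_users:.4%}")  # -> 0.0003%
```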
I thought this subreddit might be interested in this: Michael Aird, a research scholar at the Future of Humanity Institute and Rethink Priorities, is giving a presentation on nuclear risk, part of which is dedicated to how individuals can support the research by providing their own forecasts on the likelihood of various events related to nuclear weapons.
From the event page: "How likely is nuclear conflict in the near- and long-term? What risk does nuclear conflict pose for extreme outcomes that could lead to existential catastrophe? This event is an opportunity to learn about the research and the aggregated community forecasting meant to increase our understanding on these critical questions and to help us reduce their associated risks.
Speaker Michael Aird's work with Rethink Priorities is aimed at informing funders, policymakers, researchers, and other actors regarding the extent to which they should prioritize reducing risks from nuclear weapons, as well as the most effective ways to mitigate these risks."
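As a side note on what "aggregated community forecasting" can mean in practice: one common pooling rule is the geometric mean of odds. A minimal sketch follows; the individual probabilities are invented for illustration, not actual forecasts from the event.

```python
# Pool several probability forecasts via the geometric mean of odds.
import math

def pool_forecasts(probs):
    """Aggregate probability forecasts using the geometric mean of odds."""
    log_odds = [math.log(p / (1.0 - p)) for p in probs]
    pooled_odds = math.exp(sum(log_odds) / len(log_odds))
    return pooled_odds / (1.0 + pooled_odds)

# Hypothetical individual estimates of P(nuclear conflict):
forecasts = [0.01, 0.05, 0.02]
print(f"pooled estimate: {pool_forecasts(forecasts):.3f}")  # ~0.022
```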
The research on this topic is muddled and conflicted: since the initial nuclear winter "hysteria" papers of the 80s, many counterviews playing down the risk have been published (and yet more argue that nuclear winter is still very likely in a nuclear exchange). It's hard to judge who's right. Are there any public indications from x-risk orgs like FHI on how severe and how likely they currently consider nuclear winter to be in various local or full-scale nuclear war scenarios?
Here is the link:
I just decided it would be neat to have a Discord server. Please share this with others who may want to join in, and let me know what I can do to improve the server as well.
In January 2040, a hostile AGI escaped from a Baidu lab in Wuhan.
We've preserved some of the breaking-news headlines from that fateful year.
Jan: China denies that half of Wuhan was converted into computronium
Jan: Elon Musk sends an "I told you so" meme from his residence at Olympus Mons, offers free evacuations to Mars to all Tesla owners.
Feb: Experts say that every third server in the world is infected with an unusually smart virus, confirm that "resistance is futile"
Feb: The WHO recommends avoiding travel to Wuhan, but says flights to other Chinese cities are OK.
Feb: North Korea bans electricity across the entire country, nukes its own cities for good measure
Mar: The US president says that AI is "science fiction", sends "thoughts and prayers" to the disassembled people of Wuhan
Apr: Millions follow the example of the football star who says that the best protection against AI is eating a lot of garlic
Dec: The EU government-in-exile says it is trying to organize a meeting to discuss a possible AI problem