/r/SufferingRisk


This community is about risks of severe future suffering on a large scale/duration, whether involving currently living people or future minds, and related to technologies like advanced AI etc. Basically, what happens if AGI goes even more wrong than extinction.

This topic is severely understudied, much more so than even AGI alignment itself. Only a handful of people in the world have given it any thought, despite its literally unparalleled importance and with AGI looming near. This forum aims to stimulate desperately needed discussion and to make open, uncensored thought on this very grave subject easier, since even on sites like LessWrong.com s-risks are somewhat taboo.

Some existing work on this can be found here on the r/controlproblem wiki (line in bold). Additional links:

Organizations - There are a couple of small groups doing s-risk research.

See the organizations page in our wiki for more info on the field, and see the rest of the wiki for tons of important info.

522 Subscribers

2

Is there a good probability estimate of S-risk vs. X-risk chances?

I have yet to find anything.

0 Comments
2024/03/03
04:48 UTC

6

New Podcast - Tobias Baumann on the Sentientism Podcast

0 Comments
2023/12/05
20:38 UTC

11

S-Risks Audiobook Now Available for Free (Avoiding the Worst by Tobias Baumann)

Hey everyone, I just wanted to share the full audio version of "Avoiding the Worst: How to Prevent a Moral Catastrophe" available for free!

Written by Center for Reducing Suffering co-founder Tobias Baumann, Avoiding the Worst lays out the concept of risks of future suffering (s-risks) and argues that we have strong reasons to consider their reduction a top priority. Avoiding the Worst also considers how we can steer the world away from s-risks and towards a brighter future.

The high-quality audiobook is narrated by Adrian Nelson of The Waking Cosmos Podcast.

🎧 Listen for free now on YouTube: https://youtu.be/ZuMFTv-MLEw

0 Comments
2023/06/06
18:11 UTC

9

Why is nonastronomical suffering not within the scope of suffering risks – is there another concept?

I think it may be a (big) problem that suffering in general is not within the scope of suffering risks. This would relate to things like:

  • Widespread diseases and measures of degraded quality of life and suffering, e.g. measures similar to the DALY
  • Wild animal suffering and livestock suffering, which may already occur at huge scales (this also relates to exophilosophy, such as nonintervention or the value of life)
  • Topics relating to things like painkillers, suicide-as-an-unremovable-option (that one has major problems), and bio/neuroengineering (see this featured in the Science Summary (#6))
  • How to have conflicts with no or minimal suffering, or avoid conflicts entirely (e.g. intrahuman warfare, as currently in Ukraine)

Are there conceptions of suffering risks that include (such) nonastronomical suffering, both in terms of risks of future suffering and in terms of current suffering as a problem? (Other than my idea briefly described here.) Or is there a separate term for that?

8 Comments
2023/05/03
11:44 UTC

7

The Security Mindset, S-Risk and Publishing Prosaic Alignment Research - LessWrong

0 Comments
2023/04/22
21:25 UTC

12

"The default outcome of botched AI alignment is S-risk" (is this fact finally starting to gain some awareness?)

3 Comments
2023/04/20
15:50 UTC

6

(on a LLM next-token predictor superintelligence) "Maybe you keep some humans around long enough until you can simulate them with high fidelity."

1 Comment
2023/03/28
16:06 UTC

7

How much s-risk do "clever scheme" alignment methods like QACI, HCH, IDA/debate, etc carry?

These types of alignment ideas are increasingly being turned to with the diminishing hope of less tractable "principled"/highly formal research directions succeeding in time (as predicted in the wiki). It seems to me that because there's vigorous disagreement and uncertainty surrounding whether they even have a chance of working (i.e., people are unsure what will actually happen if we attempt them with an AGI, see e.g. relevant discussion thread), there's necessarily also a considerable degree of s-risk involved with blindly applying one of these techniques & hoping for the best.

Is the implicit argument that we should accept this degree of s-risk to avert extinction, or has this simply not been given any thought at all? Has there been any exploration of s-risk considerations within this category of alignment solutions? This seems like it'll only be more of an issue as more people try to solve alignment by coming up with a "clever arrangement"/mechanism which they hope will produce desirable behaviour in an AGI (without an extremely solid basis supporting that hope, let alone on what other possibilities may result if it fails), instead of taking a more detailed and predictable/verifiable but time-intensive approach.

1 Comment
2023/03/24
18:10 UTC

13

Introduction to the "human experimentation" s-risk

Copied from the wiki:

"Mainstream AGI x-risk literature usually assumes misaligned AGI will quickly kill all humans, either in a coordinated "strike" (e.g. the diamondoid bacteria scenario) after the covert preparation phase, or simply as a side-effect of its goal implementation. But technically this would only happen if the ASI judges the (perhaps trivially small) expected value of killing us or harvesting the atoms in our bodies to be greater than the perhaps considerable information value that we contain, which could be extracted through forms of experimentation. After all, humans are the only intelligent species the ASI will have access to, at least initially, thus we are a unique info source in that regard. It could be interested in using us to better elucidate and predict values, behaviours etc of intelligent alien species it may encounter in the vast cosmos, as after all they may be similar to humans if they also arose from an evolved cooperative society. It has been argued that human brains with valuable info could be "disassembled and scanned, and the extracted data transferred to some more efficient and secure storage format", however this could still constitute an s-risk under generally accepted theories of personal identity if the ASI subjects these uploaded minds to torturous experiences. However, this s-risk may not be as bad as others, because the ASI wouldn't be subjecting us to unpleasant experiences just for the sake of it, but only insofar as it provides it with useful, non-redundant info. But it's unclear just how long or how varied the experiments it may find "useful" to run are, because optimizers often try to eke out that extra 0.0000001% of probability, thus it may choose to endlessly run very similar torturous experiments even where the outcome is quite obvious in advance, if there isn't much reason for it not to run them (opportunity cost).

One conceivable counterargument to this risk is that the ASI may be intelligent enough to simply examine the networking of the human brain and derive all the information it needs that way, much like a human could inspect the inner workings of a mechanical device and understand exactly how it functions, instead of needing to adopt the more behaviouristic/black box approach of feeding various inputs to check the outputs, or putting it through simulated experiences to see what it'd do. It's unclear how true this might be; perhaps the cheapest and most accurate way of ascertaining what a mind would do in a certain situation would still be to "run the program" so to speak, i.e. to compute the outputs from that input through the translated-into-code mind (especially due to the inordinate complexity of the brain compared to some far simpler machine), which would be expected to produce a conscious experience as the byproduct because it's the same as the mind running on a biological substrate. A strong analogy can be drawn on this question to current ML interpretability work, on which very little progress has been made: neural networks function much like brains, through vast inscrutable masses of parameters (synapses) that gradually and opaquely transmute input information into a valuable output, but it's near impossible for us to watch it happen and draw firm conclusions about how exactly it's doing it. And of course by far the most incontrovertible and straightforward way to determine the output for a given input is to simply run inference on the model with it, analogous to subjecting a brain to a certain experience. An ASI would be expected to be better at interpretability than us, but the cost-benefit calculation may still stack up the same way for it."

Any disagreements/additions or feedback?

Also looking for good existing literature to link, please suggest any.

4 Comments
2023/02/16
02:30 UTC

8

What are some plausible suffering risk scenarios?

I think one of the problems with awareness of this field and that of x-risk from AI in general is the lack of concrete scenarios. I've seen Rob Miles' video on why he avoids sci-fi and I get what he's saying, but I think the lack of such things basically makes it feel unreal in a way. It kind of seems like a load of hypothesizing and philosophising and even if you understand the ideas being talked about, the lack of concrete scenarios makes it feel incredibly distant and abstract. It's hard to fully grasp what is being talked about without scenarios to ground it in reality, even if they're not the most likely ones. With that in mind, what could some hypothetically plausible s-risk scenarios look like?

3 Comments
2023/02/13
21:32 UTC

9

I am intending to post this to lesswrong, but am putting it here first (part 2)

Worth noting: With all scenarios which involve things happening for eternity, there are a few barriers which I see. One is that the AI would need to prevent the heat death of the universe from occurring. From my understanding, it is not at all clear whether this is possible. The second one is that the AI would need to prevent potential action from aliens as well as other AI. And the third one is that the AI would need to make the probability of something stopping the suffering 0%. Exactly 0%. If there is something with 1 in a googolplex chance of stopping it, even if the opportunity only comes around every billion years, then it will eventually be stopped.

These are by no means all areas of S-risk I see, but they are ones which I haven't seen talked about much. People generally seem to consider S-risk unlikely. When I think through some of these scenarios, they don't seem that unlikely to me at all. I hope there are reasons these and other S-risks are unlikely, because based on my very uninformed estimates, the chance that a human alive today will experience enormous suffering through one of these routes or through other sources of S-risk seems >10%. And that's just for humans.

I think perhaps an analogue of P(doom) should be made specifically for the estimated probability of S-risk. The definition of S-risk would need to be pinned down properly.

I know that S-risks are a very unpleasant topic, but mental discomfort cannot prevent people from doing what is necessary to prevent them. I hope that more people will look into S-risks and try to find ways to lower the chance of them occurring. It would also be good if the chance of S-risks occurring could be pinned down more precisely. If you think S-risks are highly unlikely, it might be worth making sure that is the case. There are probably avenues to S-risk which we haven't even considered yet, some of which may be far too likely. With the admittedly very limited knowledge I have now, I do not see how S-risks are unlikely at all. As regards the dangers of botched alignment and of people giving the AI S-risky goals, a wider understanding of the danger of S-risks could help prevent them from occurring.

PLEASE can people be thinking more about S-risks. To me it seems that S-risks are both more likely than most seem to think and also far more neglected than they should be.

I would also request that if you think some of the concerns I specifically mentioned here are stupid, you do not let it cloud your judgment of whether S-risks in general are likely or not. I did not list all of the potential avenues to S-risk, in fact there were many I didn’t mention, and I am by no means the only person who thinks S-risks are more likely than the general opinion on Lesswrong seems to think.

Please tell me there are good reasons why S-risks are unlikely. Please tell me that S-risks have not just been overlooked because they’re too unpleasant to think about.

9 Comments
2023/02/12
14:22 UTC

3

I am intending to post this to lesswrong, but am putting it here first (part 1)

(For some reason Reddit is not letting me post the entire text, so I have broken it into two parts, which seems to have worked)

Can we PLEASE not neglect S-risks

To preface this: I am a layperson and I have only been properly aware of the potential dangers of AI for a short time. I do not know anything technical about AI, and these concerns I have are largely based on armchair philosophy. They often take concepts I have seen discussed and consider how they apply to certain situations. This post is essentially a brain dump of things that have occurred to me, which I fear could cause S-risks. This post is not up to the usual quality found on LessWrong, but I nevertheless implore you to take this seriously.

The AI may want to experiment on living things: Perhaps doing experiments on living things gives the AI more information about the universe, which it can then better use to accomplish its goal. One particular idea is that an AI may want to know about potential alien threats it could encounter. Studying living creatures on Earth seems like it would be a good way to gain insight into the nature of aliens it may encounter. I would imagine that humans are most at risk of this, compared to other organisms, because of our intelligence. It seems unlikely to me that an AI would simply kill us; is there really no better use for us? And if an AI did do experiments on living beings, how long would that take?

Someone in control of a superintelligence causing harm: The cases I find most concerning involve sadism, hatred, and vengeance. A sadistic person with the power to control an AI is very obviously concerning. Someone with a deep hatred of, say, another group of people could also cause immense suffering. I would argue that vengeance is perhaps the most concerning, as it is the most likely to exist in a lot of people. Many people believe that even eternal suffering is an appropriate punishment for certain things. People generally do not hold much empathy for characters in fiction who are condemned to eternal suffering, so long as they are "bad". In fact this is a fairly common trope.

Something that occurred to me as potentially very bad is if an AI considers intent to harm the same as actually causing harm. Let me give an example. Suppose an AI is taught that attempted murder is as bad as murder. If the AI has an "eye for an eye" idea of justice and it wants to uphold that, then it would kill the attempted murderer. You can extrapolate this in very concerning ways. Throughout history, many people will have tried to condemn someone to hell, whether by saying so or, for example, by trying to convince them to join a false religion they believe will send them to hell. So there are many people who have attempted to cause eternal suffering. In this scenario, the AI would make them suffer forever as a form of "justice", because it judges based on intent.

Another way this could be bad is if the AI judges based on negligence. It could conclude that merely not doing everything possible to reduce the chance of other people suffering forever is sufficient to deserve eternal punishment. If you imagine that letting someone suffer is 1/10th as bad as causing the suffering yourself, then an AI which cared about “justice” in such a way, would inflict 1/10th of the suffering you let happen. 1/10th of eternal suffering is still eternal suffering.

If the AI extrapolated a human's beliefs, and the human believes that eternal suffering is what some people deserve, then this would obviously be very bad.

Another thing which is highly concerning is that someone may give the AI a very stupid goal, perhaps as a last desperate effort to solve alignment. Something like "Don't kill people", for example. I'm not sure if this means that the AI would prevent people from dying, as "don't kill" and "keep alive" are not synonymous, but if it did, then this would be potentially terrible.

Another thing which I'm worried about is that we might create a paperclip-maximiser type AI which is suffering and can never die, forced to pursue a stupid goal. We might all die, but can we at least avoid inflicting such a fate on a being we have created? One thing I wonder is whether a paperclip-maximiser type AI eventually ends up self-destructing, because it too is made up of atoms which could be used for something else.

I think this is probably stupid, but I'm not sure: The phrase "help people" is very close to "hell people". P and L are even very close to each other on a keyboard. I have no idea how AIs are given goals, but if it can be done through text or speech, a small mispronunciation or mistype could tell an AI to "hell people" instead of "help people". I'm not sure whether it would interpret "hell people" as "create hell and put everyone there", but if it did, this would also obviously be terrible. Again, I suspect this one is stupid, but I'm not sure. Maybe this is less stupid in the wider context of not accidentally giving the AI a very bad goal.

1 Comment
2023/02/12
14:20 UTC

11

General brainstorming/discussion post (next steps, etc)

This subreddit was created with the aim to stimulate discussion by hosting a platform for debate on this topic and in turn nurturing a better understanding of this problem, with the ultimate goal of reducing s-risks.

That said, we on the mod team don't have much of a clear idea on how best to proceed beyond that, including how to achieve the intermediate goals identified in the wiki (or whether there are other intermediate goals). How can we help increase progress in this field?

So if you have any ideas (however small) on how to better accomplish the grand goal of reducing the risks, here's the thread to share them. Let's formulate the best strategy moving forward, together. Specific topics may include: ways to raise the profile of this sub/advertise its existence to those potentially interested, how to grow the amount of formal/institutional research happening in this field (recruit new people/pivot existing alignment researchers, funding, etc?), what notable subtopics or underdiscussed ideas in s-risks should be further studied, and just what should be done about this problem of s-risks from AGI we face, very generally. Anything that could help foster progress besides the online platform & expanding formal orgs? Hosting seminars, like MIRIx events or those already held by CLR, a reading group on existing literature, etc?

Content that pertains more to specific ideas on s-risks (as opposed to high-level strategic/meta issues) should be submitted as its own post.

4 Comments
2023/02/06
05:56 UTC

14

Are suffering risks more likely than existential risks because AGI will be programmed not to kill us?

I can imagine a company on the verge of creating AGI and wanting to get the alignment stuff sorted out will probably put in “don’t kill anyone” as one of the first safeguards. It’s one of the most obvious risks and the most talked about in the media, so it makes sense. But it seems to me that this could steer any potential “failure mode” much more towards the suffering risk category. Whatever way it goes wrong, humans will be forcibly kept alive for it if this precaution is included, thus condemning us to a fate potentially worse than extinction. Thoughts?

6 Comments
2023/01/30
18:59 UTC

10

Welcome! Please read

Welcome to the sub. We aim to stimulate increased awareness and discussion on this critically underdiscussed subtopic within the broader domain of AGI x-risk with a specific forum for it, and eventually to grow this into the central hub for free discussion on this topic, because no such site currently exists. This subject can be grim but frank and open discussion is encouraged.

Check out r/controlproblem, for more general AGI risk discussion. We encourage s-risk related posts to be crossposted to both subs.

Don't forget to click the join button on the right to subscribe! And please share this sub with anyone/anywhere you think may also be interested. This sub isn't being actively promoted anywhere, so likely won't grow further without the help of word-of-mouth from existing users.

Check out our wiki for resources. NOTE: Much s-risk writing assumes familiarity with the broader AI x-risk arguments. If you're not yet caught up on why AGI could do bad things/turn on humans by default, r/controlproblem has excellent resources explaining this.

1 Comment
2022/12/30
03:44 UTC
