Mini map of s-risks

S-risks are risks of future global infinite suffering. The Foundational Research Institute has suggested that they are the most serious class of existential risks, even more serious than painless human extinction. So it is time to explore the types of s-risks and what can be done about them.

Possible causes and types of s-risks:

“Normal level”: some forms of extreme global suffering exist now, but we ignore them:

1. Aging, loss of loved ones, mortal illness, intense suffering, dying, death, and non-existence, which await almost everyone, because humans are mortal.

2. Nature as a place of suffering, where animals constantly eat each other. Evolution acts as a superintelligence that created suffering and uses it for its own advancement.

Colossal level:

1. Quantum immortality creates a bad form of immortality: one survives as an ever-aging, perpetually dying person, because of a weird observation selection effect.

2. AI goes wrong: 2.1. Roko's basilisk; 2.2. an error in programming; 2.3. a hacker's joke; 2.4. indexical blackmail.

3. Two AIs go to war with each other, and one of them is benevolent to humans, so the other AI tortures humans to gain a bargaining position in a future deal.

4. X-risks that include infinite suffering for everyone: a natural pandemic, a cancer epidemic, etc.

5. Possible worlds (in Lewis's terms) that contain qualia of infinite suffering. For any human, a possible world with his infinite suffering exists, and modal realism makes such worlds real.

Ways to fight s-risks:

1. Ignore them by boxing personal identity inside the present day.

2. A benevolent AI fights a “measure war”: it creates infinitely more copies of happy beings, as well as trajectories in the space of possible minds leading from suffering to happiness.

Types of the most intense suffering:

Qualia-based, listed from bad to worse:

1. Eternal suffering that is bearable in each moment (anhedonia).

2. Unbearable suffering: suffering to which death is the preferable outcome (cancer, death by fire, death by hanging). However, as Marcus Aurelius said: “Unbearable pain kills. If it does not kill, it is bearable.”

3. Infinite suffering: the qualia of infinite pain, such that duration does not matter (it is not known whether this exists).

4. Infinitely growing eternal suffering, created by constantly upgrading the suffering subject (a hypothetical type of suffering created by a malevolent superintelligence).

Value-based s-risks:

1. The most violent action against one's core values, like the “brutal murder of children”.

2. Meaninglessness, acute existential terror, or derealization with depression (as in Nabokov's short story “Terror”): an incurable and logically proven understanding of the meaninglessness of life.

3. Death and non-existence as forms of counter-value suffering.

4. Infinite time without happiness.

Subjects who may suffer from s-risks:

1. Anyone as an individual person

2. The currently living human population

3. Future generations of humans

4. Sapient beings

5. Animals

6. Computers, neural nets with reinforcement learning, robots, and AIs.

7. Aliens

8. Unembodied suffering in stones, Boltzmann brains, pure qualia, etc.

My position

It is important to prevent s-risks, but not by increasing the probability of human extinction, as that would mean we have already fallen victim to blackmail by non-existent things.

Also, s-risk is already the default outcome for anyone personally (and in that sense it is global), because of inevitable aging and death (and maybe bad quantum immortality).

People prefer the illusory certainty of non-existence to the hypothetical possibility of infinite suffering. But nothing is certain after death.

In the same way, overestimating animal suffering results in underestimating human suffering and the risks of human extinction. Moreover, animals suffer more in the forests than on animal farms, where they are fed every day, receive basic healthcare, and face no predators that would eat them alive.

The hope that we will prevent future infinite suffering by stopping progress or committing suicide on the personal or civilizational level is mistaken. It will not help animals. It will not help with suffering in possible worlds. It will not even prevent suffering after death, if quantum immortality in some form is true.

But the fear of infinite suffering makes us vulnerable to any type of “acausal” blackmail. The only way to fight suffering in possible worlds is to create an infinitely larger possible world of happiness.