(Not sure if I got the maths right here.)
Manifold gives two interesting probabilities:
65% chance of ASI before 2031
28% chance that US life expectancy will reach 100+ in 2050.
Using the simplifying assumption that, before 2050, dramatic longevity gains happen only if ASI ‘solves ageing’, we have:
P(solve ageing ∣ ASI) = P(life expectancy ≥ 100) / P(ASI) = 0.28 / 0.65 ≈ 43%
I couldn’t find a good number, but let’s assume Manifold also thinks there’s a 25% chance of doom (everyone is dead) by 2050 given ASI. This leaves:
P(no doom, no large increase in life expectancy ∣ ASI) = 1 − 0.43 − 0.25 ≈ 32%
Multiplying by the overall chance of ASI (65%), the simplified unconditional outcomes are:
ASI with large longevity gains by 2050: 0.65 × 0.43 ≈ 28%
ASI with doom by 2050: 0.65 × 0.25 ≈ 16%
ASI but no large increase in life expectancy by 2050: 0.65 × 0.32 ≈ 21%
No ASI by 2031: 1 − 0.65 = 35%
This would imply that Manifold believes that, conditional on ASI arriving by 2031, there is a 32% chance that by 2050 (19 years later) humanity survives but US life expectancy still hasn’t reached 100+ years.
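To sanity-check the arithmetic, here is a minimal Python sketch of the same calculation. The two market prices are from Manifold; the 25% conditional doom figure is my own assumption rather than a market price:

```python
# Manifold's market prices (the doom figure below is an assumption, not a price)
p_asi = 0.65              # P(ASI before 2031)
p_le_100 = 0.28           # P(US life expectancy reaches 100+ in 2050)
p_doom_given_asi = 0.25   # assumed P(everyone is dead by 2050 | ASI)

# Simplifying assumption: dramatic longevity gains happen only if ASI solves ageing,
# so P(life expectancy >= 100) = P(ASI) * P(solve ageing | ASI).
p_solve_given_asi = p_le_100 / p_asi                            # ~0.43
p_neither_given_asi = 1 - p_solve_given_asi - p_doom_given_asi  # ~0.32

# Unconditional outcomes
print(f"ASI with large longevity gains by 2050: {p_asi * p_solve_given_asi:.0%}")    # ~28%
print(f"ASI with doom by 2050:                  {p_asi * p_doom_given_asi:.0%}")     # ~16%
print(f"ASI, no doom, no longevity gains:       {p_asi * p_neither_given_asi:.0%}")  # ~21%
print(f"No ASI by 2031:                         {1 - p_asi:.0%}")                    # 35%
```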
I recently found a 2-hour interview with Mo Gawdat titled “EMERGENCY EPISODE: Ex-Google Officer Finally Speaks Out On The Dangers Of AI!” from 2 months ago. It has 6.5 million views, 1 million more than Sam Altman’s interview on Lex Fridman’s podcast.
I’ve only skimmed through it, but it seems that Gawdat frames AI developments as an emergency and is mainly concerned about potential misuse.
I likely won’t have time to listen to it properly for a while – but it seems pretty relevant for the AI Safety community to understand what other extremely popular media narratives about the dangers of AI exist.
Richard Neher and others created a tool to explore scenarios for hospital demand in your community:
http://scratch.neherlab.org/
It’s still in its early stages. Source: https://twitter.com/richardneher/status/1236980631789359104
Curious why this was retracted. Do you not think the tool is useful?
I wasn’t sure whether this was the right place to post it, as I myself didn’t feel able to judge how useful the tool really is, so I didn’t feel comfortable having it even as a shortform post.