I think it’s >1% likely that one of the first few surveys Rohin conducted would result in a fraction of >0.5.
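As a minimal sketch of the arithmetic behind this, assuming (hypothetically) that each survey independently has some small chance of yielding a fraction above 0.5; both numbers below are made-up stand-ins, not figures from the post:

```python
# Sketch: chance that at least one of the first few surveys crosses 0.5.
# p_per_survey and n_surveys are hypothetical, chosen only for illustration.
p_per_survey = 0.005  # assumed 0.5% chance a single survey yields a fraction > 0.5
n_surveys = 3         # "the first few surveys"

p_at_least_one = 1 - (1 - p_per_survey) ** n_surveys
print(f"{p_at_least_one:.3%}")  # ~1.5%, consistent with ">1% likely"
```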
Evidence from When Will AI Exceed Human Performance?, in the form of median survey responses of researchers who published at ICML and NIPS in 2015:
5% chance given to Human Level Machine Intelligence (HLMI) having an extremely bad long-run impact (e.g. human extinction)
Does Stuart Russell’s argument for why highly advanced AI might pose a risk point at an important problem? 39% said at least important, 70% at least moderately important.
But on the other hand, only 8.4% said working on this problem now is more valuable than other problems in the field. 28% said it is as valuable as other problems.
47% agreed that society should prioritize “AI Safety Research” more than it currently does.
These seem like fairly safe lower bounds compared to the population of researchers Rohin would evaluate, since concern regarding safety has increased since 2015 and the survey included all AI researchers rather than only those whose work is related to AGI.
These responses are more directly related to the answer to Question 3 (“Does X agree that there is at least one concern such that we have not yet solved it and we should not build superintelligent AGI until we do solve it?”) than Question 2 (“Does X broadly understand the main concerns of the safety community?”). I feel very uncertain about the percentage that would pass Question 2, but think it is more likely to be the “bottleneck” than Question 3.
Given these considerations, I increased the probability before 2023 to 10%, with 8% below the lower bound. I moved the median | not never up to 2035, since a higher probability in the near term also implies a sooner median. I decreased the probability of “never” to 20%, since the “not enough people update on it / consensus building takes forever / the population I chose just doesn’t pay attention to safety for some reason” condition seems less likely.
I also added an extra bin to ensure that the probability continues to decrease on the right side of the distribution.
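To make the shape concrete, here is a minimal sketch of a binned distribution satisfying those constraints. The bin edges and masses are hypothetical stand-ins rather than the actual snapshot (and the “8% below the lower bound” detail is ignored):

```python
# Hypothetical bin masses satisfying the stated constraints:
# 10% before 2023, median | not never near 2035, 20% on "never",
# and an extra right-tail bin so the mass keeps decreasing.
bins = [
    (2020, 2023, 0.10),  # 10% before 2023
    (2023, 2030, 0.18),
    (2030, 2040, 0.22),  # conditional median falls in this bin
    (2040, 2055, 0.15),
    (2055, 2075, 0.10),
    (2075, 2100, 0.05),  # extra bin: probability keeps decreasing on the right
]
p_never = 0.20

# Masses plus the "never" mass should sum to 1.
assert abs(sum(m for _, _, m in bins) + p_never - 1.0) < 1e-9

def conditional_median(bins, p_never):
    """Median year conditional on 'not never', interpolating linearly
    within the bin where cumulative conditional mass crosses 0.5."""
    total = 1.0 - p_never
    cum = 0.0
    for start, end, mass in bins:
        cond_mass = mass / total
        if cum + cond_mass >= 0.5:
            frac = (0.5 - cum) / cond_mass
            return start + frac * (end - start)
        cum += cond_mass
    return float("inf")

print(round(conditional_median(bins, p_never)))  # ~2035 with these masses
```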
My snapshot
Note: I’m interning at Ought and thus am ineligible for prizes.
Agree that Q2 is more likely to be the bottleneck. See also my response to Amanda above.