Survey: Risks from AI

Related to: lesswrong.com/lw/fk/survey_results/

I am currently emailing experts in order to raise, and to estimate, academic awareness and perception of risks from AI, and to ask them for permission to publish and discuss their responses. User:Thomas suggested that I also ask you, everyone reading lesswrong.com, and I thought this was a great idea. If I am asking experts to answer questions publicly, and to let their responses be published and discussed here on LW, I think it is only fair to do the same myself.

Answering the questions below will help the SIAI, and everyone interested in mitigating risks from AI, to estimate how effectively those risks are being communicated.

Questions:

  1. Assuming no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of human-level machine intelligence? Feel free to answer ‘never’ if you believe such a milestone will never be reached.

  2. What probability do you assign to the possibility of a negative/extremely negative Singularity as a result of badly done AI?

  3. What probability do you assign to the possibility that a human-level AGI will self-modify its way up to massive superhuman intelligence within a matter of hours/days/less than 5 years?

  4. Does friendly AI research, as conducted by the SIAI, currently require less/no more/little more/much more/vastly more support?

  5. Do risks from AI outweigh other existential risks, e.g. those from advanced nanotechnology? Please answer with yes/no/don’t know.

  6. Can you think of any milestone such that, if it were ever reached, you would expect human-level machine intelligence to be developed within five years thereafter?

Note: Please do not downvote comments that solely answer the above questions.