I’ll have a bash at these questions, for reference purposes. Others may want to as well.
1) What probability do you assign to the possibility of us being wiped out by badly done AI?
All humans? Less than 1%. That’s partly due to faith in engineers, and partly due to thinking that preserving at least some humans has substantial Universal Instrumental Value.
2) What probability do you assign to the possibility of a human-level AI, or a sub-human-level AI, self-modifying its way up to massive superhuman intelligence within a matter of hours or days?
Less than 1%.
3) Is it important to figure out how to make AI provably friendly to us and our values (non-dangerous), before attempting to solve artificial general intelligence?
We should put some energy into this area, though the world won’t end if we don’t. Machine intelligence is an enormous and important task, so the more foresight the better. I don’t like this question much: the bit about being “provably friendly” frames the actual issues in this area pretty poorly.
4) What is the current level of awareness of possible risks from AI within the artificial intelligence community, relative to the ideal level?
That’s mostly public information. Opinions range from blasé lack of concern, through indifference (usually because the issue seems too far off), to powerful paranoia (from the END OF THE WORLD merchants). I’m not sure there is such a thing as an ideal level of paranoia; a spread probably provides some healthy diversity. Plus, optimal paranoia levels are value-dependent.
5) How do risks from AI compare to other existential risks, e.g. advanced nanotechnology?
Machine intelligence and nanotechnology will probably spiral together, due to G-N-R “convergence”. That said, machine intelligence will probably lead to nanotechnology more than the other way around, so these risks are closely linked. Overall, machine intelligence is the biggest issue we face: the one that should get the most attention, and the one that could cause the biggest problems if it does not go well.