My preferred rewrite, without spending too much time on it:
Q1a: Assuming no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of human-level machine intelligence? Feel free to answer ‘never’ if you believe such a milestone will never be reached. [reason: this matches question #1 of FHI’s machine intelligence survey.]
Q1b: Once we build AIs that are as skilled at technology design and general reasoning as humans are, how much more difficult will it be for humans and/or AIs to build an AI that is substantially better than humans at technology design and general reasoning?
Q2a: Do you expect AIs to ever overwhelmingly outperform humans at typical academic research, in the way that they may soon overwhelmingly outperform humans at trivia contests, or do you expect that humans will always play an important role in scientific progress?
Q2b: [delete to make questions list less dauntingly long, and increase response rate]
Q2c: What probability do you assign to the possibility that an AI with initially (professional) human-level competence at technology design and general reasoning will use its capacities to self-modify its way up to vastly superhuman general capabilities within a matter of hours/days/< 5 years? (‘Self-modification’ may include the first AI creating improved child AIs, which create further-improved child AIs, etc.)
Q3a: How important is it to figure out how to make superhuman AI provably friendly to us and our values (non-dangerous), before attempting to build AI that is good enough at technology design and general reasoning to undergo radical self-modification?
Q3b: What probability do you assign to the possibility of human extinction as a result of an AI capable of self-modification (one that is not provably non-dangerous, if such a proof is even possible)?
Q3c: [delete to reduce length of questions list]
Q4: [delete to reduce length of questions list]
Q5: [delete to reduce length of questions list; AI experts are unlikely to be experts on other x-risks]
Q6: [delete to reduce length of questions list; I haven’t seen, and don’t anticipate, useful answers here]
Q7: [delete to reduce length of questions list]
I agree with deleting Q5 and Q6, not only because I wouldn’t expect useful responses, but also because they may come off as “extremist” to any respondents who are not already familiar with UFAI concepts (or who are familiar with and overtly dismissive of them).
I endorse the “question deletion” idea.
Are these two expressions supposed (or assumed) to be equivalent?
I updated the original post. Maybe we could agree on those questions. Be back tomorrow.
I stand by my preferred rewrites above, but of course it’s up to you.