What I would put as question 1 (with three parts):
(a) What does the phrase “human-level machine intelligence” mean to you? (b) What forms of machine intelligence are you most optimistic about? (c) What forms do you think could be the most dangerous?
Rationale: it seems to me that the most useful part of Nilsson’s response was his alternate definition of human-level intelligence. Moving AI experts from the ridiculous mode of “what probability do you place on Terminator occurring?” to the serious mode of “what could go wrong with a potential design?” both signals your seriousness as a thinker and primes them to take AI risks seriously, since the failure scenarios are ones they came up with themselves. It also seems useful to get a sense of what direction AI experts think AI will take: if experts are optimistic about machine intelligence hardware design, then FOOMing is more likely. (It might even be useful to ask about areas they’re pessimistic about, since that’s a different question from danger, but four parts for the first question seems like a bit much.)
Drawback: what you’re interested in is cross-domain optimization competence. If people give you numbers based on when machine intelligence will be able to pass a Turing test, those numbers will be mostly useless. Even the numbers Nilsson gave for his ‘employable AI’ are difficult to compare to the numbers other people are giving. But it seems to me that knowing more precisely what respondents mean is more important than getting easily comparable answers.
Overall, I feel a bit better about lukeprog’s rewrite than I do about the original. I do think at least one question about AI risk countermeasures should be preserved; probably something like this:
Q4. What is the ideal level of awareness and funding for AI risk reduction?
Possibly with a clarification that they can either give a dollar number or just compare it to some other cause (like a particular variety of cancer, other existential risks, etc.).