The approach taken in some of these questions, particularly Q3, seems unlikely to yield helpful responses and likely to make you seem uninformed. It would probably be better to ask directly about one or a few relevant inputs:
To what extent will building artificial intelligence rely on particular mathematical or engineering skills, rather than the more varied human skills which we are trying to emulate? For example, would an AI with human-level skill at mathematics and programming be able to design a new AI with sophisticated social skills, or does that require an AI which already possesses sophisticated social skills? To what extent does human engineering and mathematical ability rely on many varied aspects of human cognition, such as social interaction and embodiment?
Once our understanding of AI and our hardware capabilities are sufficiently sophisticated to build an AI which is as good as humans at engineering or programming, how much more difficult will it be to build an AI which is substantially better than humans at mathematics or engineering?
Do you ever expect automated systems to overwhelmingly outperform humans at typical academic research, in the way that they may soon overwhelmingly outperform humans at trivia contests, or do you expect that humans will always play an important role in scientific progress? If so, when? In your view, should we be considering the possible effects of such a transition? (I would also suggest using some question of this form instead of talking about “human-level” AI.)
More generally, while I think that getting a better sense of AI researchers’ views does have value, I am afraid that the primary effect of presenting these questions in this way may be to make the marginal researcher less receptive to serious arguments or discussions about AI risk. In light of this, I would recommend condensing questions 2, 4, 5, and 6 and presenting them in a way that seems less loaded, if you are set on approaching a significant fraction of all AI researchers.
(Though I would also suggest applying a little more deliberation, particularly in revising or varying the questions, explanations, and background material between rounds, if you are going to ask more people.)
Also, it may be important to clarify what is meant by “intelligence,” as many researchers seem unsure how to interpret the term, and their answers may vary accordingly.