The signaling uses of Q8 seem like a bad idea to me, although it seems a worthwhile thing to ask for Steve Rayhawk’s reasons. If someone is all prepared to be patronizing and dismissive, going “Are you familiar with Goedel machines, AIXI, etc.?” may help if they know what those things are, but seems likely to do more harm if they don’t know them, or regard them negatively—and those people are, by hypothesis, the ones who are going to be patronizing and dismissive.
In fact, I’d prefer it if Q8 started out with the less-shibbolethy “How much have you read about, or used the concepts of...” or something like that, which replaces a dichotomy with a continuum.
Yeah… I wanted to make the suggested question less loaded, but it would have required more words, and I was unthinkingly preoccupied with worry about a limit on the permitted complexity of a single-sentence question. Maybe I should have split the question across more sentences.
The signaling uses of Q8 seem like a bad idea to me, although it seems a worthwhile thing to ask for Steve Rayhawk’s reasons.
My reasons for suggesting Q8 were mostly:
First, I wanted to make it easier to narrow down hypotheses about the relationship between respondents’ opinions about AI risk and their awareness of progress toward formal, machine-representable concepts of optimal AI design (also including, I guess, progress toward practically efficient mechanized application of those concepts, as in Schmidhuber’s Speed Prior and AIXI-tl).
Second, I was imagining that many respondents would be AI practitioners who thought mostly in terms of architectures with a machine-learning flavor. Those architectures usually have a very specific and limited structure in their hypothesis space or policy space by construction, such that it would be clearly silly to imagine a system with such an architecture self-representing or self-improving. These researchers might have a conceptual myopia by which they imagine “progress in AI” to mean only “creation of more refined machine-learning-style architectures”, of a sort which of course wouldn’t lead towards passing a threshold of capability of self-improvement anytime soon. I wanted to put in something of a conceptual speed bump to that kind of thinking, to reduce unthinking dismissiveness in the answers, and counter part of the polarizing/consistency effects that merely receiving and thinking about answering the survey might have on the recipients’ opinions. (Of course, if this had been a survey which were meant to be scientific and formally reportable, it would be desirable for the presence of such a potentially leading question to be an experimentally controlled variable.)
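The "very specific and limited structure by construction" point can be made concrete with a toy sketch (my illustration, not anything from the survey or from Steve Rayhawk's comment): in a typical machine-learning-style learner, the only thing the training loop can touch is a fixed parameter vector. The update rule itself is hard-coded, so there is simply no path by which this kind of system could represent or improve its own learning algorithm.

```python
class LinearRegressor:
    """Toy learner with a hypothesis space fixed by construction:
    it can only search over weight settings for a linear model."""

    def __init__(self, n_features, lr=0.01):
        self.w = [0.0] * n_features  # the ONLY state the learner can modify
        self.lr = lr

    def predict(self, x):
        return sum(wi * xi for wi, xi in zip(self.w, x))

    def update(self, x, y):
        # Gradient step on squared error; the rule itself is immutable code.
        err = self.predict(x) - y
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]

# Train on y = 2*x: the learner explores weights, never architectures
# or learning rules, however long it runs.
model = LinearRegressor(1)
for _ in range(200):
    for x in [1.0, 2.0, 3.0]:
        model.update([x], 2 * x)
```

Nothing like this is silly as machine learning, but it is silly as a candidate for self-improvement, which is roughly the conceptual gap the question was meant to probe.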
With those reasons on the table, someone else might be able to come up with a question that fulfills them better. I also agree with paulfchristiano’s comment.
may help if they know what those things are, but seems likely to do more harm if they don’t know them, or regard them negatively—and those people are, by hypothesis, the ones who are going to be patronizing and dismissive.
I had considered that. Here are my assumptions:
Both the people who do know those things and the people who don't are more likely to elevate their responses from a deferential register to a professional one.
The people who regard those things negatively, or who just want to be patronizing, will produce responses that aren't meaningful anyway.
The people who do know those things and regard them positively will be impressed and thereby more generous in the quality of their responses.
If true, I think the first assumption is especially important. It's the difference between answering a journalist's question and answering that same question at a professional conference. In the former case, I would have to consider the variety of ways they're likely to misunderstand or skew my answer, and, really, I would just want to give them the answer that produces the belief I want. E.g., don't freak out about A.I., because you know nothing about it and we do. We're not worried, really. Now, look at this cute dancing robot and give us more funding.
Edit: I forgot to add that I agree with you on changing the wording of Q8. That said, I don't think it makes it any less shibbolethy, just less obviously a shibboleth. Sneaky; I like it.