That sounds like an odd position to me. IMO, getting as many academics from other fields as possible working on the problems is essential if one wants to make maximal progress on them.
The academic field which is most conspicuously missing is artificial intelligence. I agree with Jacob that it is and should be concerning that the Machine Intelligence Research Institute has adopted a technical agenda which is non-inclusive of machine intelligence researchers.
That depends on whether you believe that machine intelligence researchers are the people who are currently the most likely to produce valuable progress on the relevant research questions.
One can reasonably disagree with MIRI’s current choices about their research program, but I certainly don’t think that those choices are concerning in the sense of suggesting irrationality on their part. (Rather, the choices only suggest differing empirical beliefs, which are arguable but still well within the range of non-insane beliefs.)
On the contrary, my core thesis is that AI risk advocates are being irrational. It’s implied in the title of the post ;)
Specifically, I think they are arriving at their beliefs via philosophical arguments about the nature of intelligence which are severely lacking in empirical data, and then further shooting themselves in the foot by rationalizing reasons not to pursue empirical tests. Adopting a belief without evidence, and then refusing to test that belief empirically: I’m willing to call a spade a spade; that is most certainly irrational.
That’s a good summary of your post.
I largely agree, but to be fair we should consider that MIRI started working on AI safety theory long before the technology required for practical experimentation with human-level AGI existed; to run such experiments, you need to be close to AGI in the first place.
Now that we are getting closer, the argument for prioritizing experiments over theory becomes stronger.
There are many types of academics; does your argument extend to French literature experts?
Clearly, if there is a goal behind the technical agenda, changing the technical agenda to appeal to certain groups detracts from that goal. You could argue that enlisting the help of mathematicians and logicians is so important it justifies changing the agenda … but I doubt there is much historical support for such a strategy.
I suspect part of the problem is that the types of researchers/academics who could most help (machine learning, statistics, and computer science types) are far too valuable to industry, and thus too expensive for non-profits such as MIRI.
There are many types of academics; does your argument extend to French literature experts?
Well, if MIRI happened to know of technical problems they thought were relevant for AI safety and which they thought French literature experts could usefully contribute to, sure.
I’m not suggesting that they would have taken otherwise uninteresting problems and written those up simply because they might be of interest to mathematicians. Rather, my understanding is that they had a set of problems that seemed about equally important, and then, from that set, used “which ones could we best recruit outsiders to help with” as an additional criterion. (Though I wasn’t there, so anything I say about this is at best a combination of hearsay and informed speculation.)