There are many types of academics—does your argument extend to French literature experts?
Clearly, if there is a goal behind the technical agenda, changing the technical agenda to appeal to certain groups detracts from that goal. You could argue that enlisting the help of mathematicians and logicians is so important it justifies changing the agenda … but I doubt there is much historical support for such a strategy.
I suspect part of the problem is that the types of researchers/academics who could most help (machine learning, statistics, comp sci types) are far too valuable to industry and thus are too expensive for non-profits such as MIRI.
There are many types of academics—does your argument extend to French literature experts?
Well, if MIRI happened to know of technical problems they thought were relevant for AI safety and which they thought French literature experts could usefully contribute to, sure.
I’m not suggesting that they would have taken otherwise uninteresting problems and written those up simply because they might be of interest to mathematicians. Rather, my understanding is that they had a set of problems that seemed about equally important, and then from that set, used “which ones could we best recruit outsiders to help with” as an additional criterion. (Though I wasn’t there, so anything I say about this is at best a combination of hearsay and informed speculation.)