Later in that chapter, McCorduck quotes Marvin Minsky as saying:
...we have people who say we’ve got to solve problems of poverty and famine and so forth, and we shouldn’t be working on things like artificial intelligence… [But I think] we should have a certain number of people worrying about… whether artificial intelligence will be a huge disaster some day or be one of the best events in the universe...
...You might be the only one who can help with the disaster that’s going to happen [decades from now], and if you don’t prepare yourself, and instead just go off into some social welfare project right now, who will do it then? …Yes, I feel that there’s a great enterprise going on which is making the world of the future all right.
...which sounds eerily like a pitch for MIRI.
Unfortunately, Minsky did not then rush to create the MIT AI Safety Lab.
I don’t think that’s a legitimate “Unfortunately”. If you’re not inspired and an approach doesn’t pop into your head, throwing money at the problem until you get some grad students who couldn’t get a postdoc elsewhere is not necessarily going to be productive; it can indeed be counterproductive, and Minsky would legitimately know that.
Okay, then: “Unfortunately, Minsky was not then inspired, by a reasonable approach to the problem, to create the MIT AI Safety Lab.”