I really liked “Uncontrollable”, as I felt it did a much better job explaining AI X-risk in layperson-accessible terms than Bostrom, Russell, or Christian. (I even recommended it to a (Catholic) priest visiting my family, who, when I said that I worked in ~”AI regulation/governance”, revealed his worries about AI taking over the world.[1])
The main shortcoming of “Uncontrollable” was that it (AFAIR) didn’t engage with the possibility of a coordinated pause/slowdown/ban.
[1] Priest: “Oh, interesting… Tell me, don’t you worry that this AI will take over the world?”
Me: “… Yes, I do worry.”
Priest: “Yeah, I’ve been thinking about it and it’s terrifying.”
Me: “I agree, it’s terrifying.”
Priest: “I’ve been looking for some book about this but couldn’t find anything sensible.”
Me: “Well, I can recommend a book to you.”
Good to know and I appreciate you sharing that exchange.
You are correct that such a thing is not in there… because (if you’re curious) I thought, strategically, it was better to argue for what is desirable (safe AI innovation) than to argue for a negative (stop it all). Of course, if one makes the requirements for safe AI innovation strong enough, it may result in a slowing or restricting of developments.
On the one hand, yeah, it might.
On the other (IMO bigger) hand, the fewer people talk about the thing explicitly, the less likely it is to be included in the Overton window, and the less likely it is to seem like a reasonable/socially acceptable goal to aim for directly.
I don’t think the case for safe nuclear/biotechnology would be less persuasive if paired with “let’s just get rid of nuclear weapons/bioweapons/gain of function research”.