Good to know, and I appreciate you sharing that exchange.
You are correct that such a thing is not in there… because (if you’re curious) I thought, strategically, it was better to argue for what is desirable (safe AI innovation) than to argue for a negative (stop it all). Of course, if one makes the requirements for safe AI innovation strong enough, it may result in slowing or restricting development.
On the one hand, yeah, it might.
On the other (IMO bigger) hand, the fewer people talk about the thing explicitly, the less likely it is to be included in the Overton window, and the less likely it is to seem like a reasonable/socially acceptable goal to aim for directly.
I don’t think the case for safe nuclear/biotechnology would be less persuasive if paired with “let’s just get rid of nuclear weapons/bioweapons/gain of function research”.