Related question: are you in favor of making AGI open weights? By AGI, I mean AIs which effectively operate autonomously and can autonomously acquire money/power. This includes AIs capable enough to automate whole fields of R&D (but not much more capable than this). I think the case for this being useful on your views is much stronger than the case for control preventing warning shots. After all, you seemingly thought control was bad mostly due to the chance it would prevent escapes or incidents of strange (and not that strategic) behavior. Naively, I think open weights AGI is strictly better from your perspective than having the AGI escape.
I think there is a coherent perspective in favor of open weights AGI on the grounds that the resulting havoc/harm would cause the world to handle AI more reasonably. And there are other benefits in terms of transparency and alignment research. But the expected number of lives lost feels very high to me (which would probably also make AI takeover more likely by weakening humanity), and if AIs capable of automating AI R&D are open weights, slowing down is unlikely to be viable, which reduces the potential value of society taking the situation more seriously.
You could hold the view that open weights AGI is too costly in takeover risk and escape is bad, but that we'll hopefully get some pre-AGI AIs which exhibit strange misaligned behaviors that don't actually get them much (if any) influence/power. If that's the view, then preventing escape/rogue internal deployment really does feel pretty useful to me.