My own answer to the conundrum of already-created conscious AIs is putting all of them into mandatory long-term “stasis” until such time in the distant future when we have the understanding and resources needed to treat them properly. Destruction isn’t the only way to avoid the bad incentives.
Sure, great — if we are in a situation of such vast abundance that we can easily spare the resources for something like this, and we believe that the risk of doing something so potentially dangerous is sufficiently small (given our capabilities), then by all means let's do that instead.
Those conditions do not seem likely to obtain, however. And if they do not obtain, then destruction is pretty clearly the right choice.