In other words, anybody who can simulate intelligent life with sufficient fidelity must be given access to sustaining materials, or else we’re morally liable for ending those simulated, but rich, lives? The universe contains finite actual resources; how about we collectively allocate them selfishly and rationally? I’d say that no unauthorized simulation of life has any moral standing whatsoever unless the resources for it have been lawfully reserved. That is, I want to police the creation of life and destroy it without hesitation if it’s not authorized.
As for your request that I grant the AI’s trustworthiness: suppose I accede to this one demand in exchange for a promise that the AI will never again torture (and thus cannot use this blackmail ploy in the future). Why didn’t I simply extract this promise before turning the AI on with resources sufficient to simulate torture, i.e. as part of its design? It’s crazy to do anything with this AI except cut off its access to resources.