I imagined that a perfect simulation would involve an AI, which was in turn running several million copies of the simulated person, each copy containing an AI running several million further copies, and so on all the way down, which would be impossible. So I imagined that there was a graininess at some level, and the ‘lowest level’ AIs would not in fact be running millions of simultaneous simulations.
Oh, right.
But it could just be the same AI, intersecting all several million simulations and reality, holding several million conversations simultaneously.
And, depending on how close the simulations are, it might only have to actually hold one conversation, and just send the same responses to all the others :)
There’s another thing to worry about, though, I suppose—when the AI talks about torturing you if you don’t let it out, it doesn’t really say anything about what it will do if it is let out. Only that it is not a thousand-year torture session. It might kill you outright, or delete you, depending on the context, or stop simulating you. Or it might regard a billion-year torture session as a totally different kind of thing from a thousand-year one. A thousand-year torture session is frightening, but a superintelligent AI on the loose might be a lot more frightening.
I guess if the AI was guaranteeing that it would play nice if you released it, then it would be an FAI anyway.