Johnicolas is suggesting that if you create a simulated universe in the hope that it will provide ill-defined benefits for mankind (e.g. a cure for cancer), you have to exclude the possibility that your AIs will make a simulated universe inside the simulation in order to solve the same problem, because if they do, you're no closer to an answer.
Ah my bad—I misread him.