Furthermore, why not just resurrect all these people into worlds with no suffering?
My point is that it is impossible to resurrect anyone (in this model) without him reliving his life again first; after that, he obviously gets an eternal, blissful life in the real (not simulated) world.
This may not be factually true, by the way: current LLMs can create good models of past people without explicitly running a simulation of their previous lives.
The discussion about anti-natalism actually made me think of another argument for why we are probably not in the kind of simulation you've described.
It is a variant of the Doomsday argument. This idea is even more controversial than the simulation argument: there is no future with many people in it. A Friendly AI can fight the DA curse via simulations, by creating many people who do not know their real position in time, which could be one more argument for simulation, but it requires a rather weird decision theory.
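The probabilistic core of the Doomsday argument here can be sketched as a toy Bayesian calculation. This is my own illustration with made-up round numbers, not anything stated in the thread: under the Self-Sampling Assumption, learning your birth rank favors hypotheses with fewer total observers, which is the "DA curse"; an AI that creates many extra observers who don't know their position in time would inflate the total under survival hypotheses and weaken that inference.

```python
from fractions import Fraction

# Toy Doomsday-argument sketch (illustrative numbers, not from the thread).
# Two hypotheses about the total number of humans who will ever exist:
#   H_small: 200 billion total (doom relatively soon)
#   H_large: 200 trillion total (long future with many people)
N_SMALL = 200 * 10**9
N_LARGE = 200 * 10**12
PRIOR_SMALL = Fraction(1, 2)  # equal priors, purely for illustration

def posterior_small(rank):
    """Posterior probability of H_small given your birth rank.

    Under the Self-Sampling Assumption your rank is uniform over the
    total population, so P(rank | H) = 1/N_H when rank <= N_H.
    """
    like_small = Fraction(1, N_SMALL) if rank <= N_SMALL else Fraction(0)
    like_large = Fraction(1, N_LARGE) if rank <= N_LARGE else Fraction(0)
    num = PRIOR_SMALL * like_small
    return num / (num + (1 - PRIOR_SMALL) * like_large)

# A present-day human has a birth rank of roughly 100 billion:
print(float(posterior_small(100 * 10**9)))  # ~0.999: rank strongly favors "few observers"
```

The posterior lands at N_LARGE / (N_LARGE + N_SMALL) = 1000/1001, i.e. a low birth rank pushes almost all the probability onto the short-future hypothesis; adding many simulated observers to the long-future hypothesis is one way to push back against that update.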
This may not be factually true, by the way: current LLMs can create good models of past people without explicitly running a simulation of their previous lives.
Yup, I agree.
It is a variant of the Doomsday argument. This idea is even more controversial than the simulation argument: there is no future with many people in it.
This makes my case even stronger! Basically, if a Friendly AI has no issues with simulating conscious beings in general, then we have good reasons to expect it to simulate more observers in blissful worlds than in worlds like ours.
If the Doomsday Argument tells us that a Friendly AI didn't simulate more observers in blissful worlds than in worlds like ours, then that gives us even more reason to think that we are not being simulated by a Friendly AI in the way you have described.