I find this far more convincing than any variant of the simulation argument I’ve heard before. Previous variants have lacked a reason why someone would want to simulate a reality like ours. I haven’t heard a reason for simulating ancestors that is strong enough to justify an AGI or its biological creators spending the resources, or that explains the massive apparent suffering happening in this sim.
This is a reason. And if it’s done in a computationally efficient manner, possibly needing little more compute than running the brains directly involved in the creation of AGI, it sounds all too plausible. It could even be done by an aligned AGI: most of the suffering can be faked, since the people directly affecting AGI are arguably almost all leading net-positive-happiness lives. If what you care about is decisions, you can simulate in just enough detail to capture plausible decision-making processes, which could be quite efficient. See my other comment for more on the efficiency argument.
I am left with a new concern: being shut down even if we succeed at alignment. This joins my many concerns about how easily we might get it wrong and experience extinction, or worse, suffering followed by extinction. Fortunately, my psyche thus far seems to carry these concerns fairly lightly. Which is probably a coincidence, right?
I find some of the particular arguments’ premises implausible, but I don’t think they hurt the core plausibility argument. I’ve never found it very plausible that we’re in a simulation. Now I do.
What interesting ideas can we suggest to the Paperclipper simulator so that it won’t turn us off?
One simple idea is a “pause AI” feature. If we pause the AI for a finite (but not indefinite) amount of time, the whole simulation will have to wait.