One consequence for ethics in this case is that you can create conscious beings by performing interactions equivalent to Turing tests on persons you come into contact with. Bonus points for spreading this meme to bring lots of conscious beings into existence (and put a heavy load on the simulator).
But wouldn’t increasing load on the simulator increase the chances of the simulation being turned off, thus negating ALL the conscious and potentially conscious beings it was simulating?
Yeah, I wondered to what degree that could be optimized. But if you interact repeatedly and in complex ways, then shouldn’t you notice that? Kind of a long-duration Turing test.
Simulation argument, case 3, obviously.
That’s exactly what an agent of the simulator would say.
Cue the rooftop chase.
But just like HPMOR’s hat, the conscious being might switch back to non-sentience once the interaction ends.
Hm, I wonder what the best place to find really happy people is?
Could you elaborate on whether you mean in general, in simulations, or elsewhere? And how does this relate to my comment?
The thought was to induce the simulation of good experiences by being in close proximity to happy people.
Ah yes. Interesting idea. But I think it only ‘counts’ if the happiness is conscious. One has to work a bit harder for that.