“What’s more: such strings can’t be severed. Try, for example, to make the two whiteboards different. Imagine that you’ll get ten million dollars if you succeed. It doesn’t matter: you’ll fail. Your most whimsical impulse, your most intricate mental acrobatics, your special-est snowflake self, will never suffice: you can no more write “up” while he writes “down” than you can floss while the man in the bathroom mirror brushes his teeth.”
I’d just flip a coin a bunch of times and write down the results, or use some similar process to introduce entropy!
But wait, the simulation is set up such that all inputs are identical, including my observations of the coin flips.
In this case, where’s the proof that the two copies of me are actually different entities? How could you prove to either entity that it’s not the same person as the other, without violating the “all inputs are identical” constraint?
If it cannot be proven that they’re separate individuals in a meaningful or useful way, doesn’t the whole thought experiment collapse into “well obviously if I want something written on the whiteboard in my room, I just write it there”? I think that proving to the AI that there were 2 whiteboards and they were separate would itself violate the terms of the experiment.
“proving to the AI that there were 2 whiteboards and they were separate”
This is a fact about the world, not about the room. I don’t see the issue with giving the agent a description of the world and proving that yes, there are two instances, here and there. If the agent knows the room, it can check that the room it knows is what sits at those two locations, though you would need to stipulate that the presented description of the world is correct, or that the agent already knew it.