Completely identical, with exactly the same sensory inputs, no interaction between each other, …? That would be very hard to implement.
I don’t see what the implementation difficulty would be. You just run exactly the same simulation on identical hardware and feed it identical inputs. There are technical issues, but they have known solutions: we already do this today. If the simulation requires random numbers, they just need to be duplicated as well.
You need:

1. Two completely isolated simulations of the whole virtual world (not too hard, but it requires twice as much computing power).
2. Exactly identical initial states (not too hard either).
3. Synced random number generators between the two simulations (not too hard if you have a central PRNG; it can be harder if you have distributed computing; a minimal sketch follows right after this list).
4. Eliminate every hardware-induced timing difference: a packet lost on the network, an ECC error in a RAM chip forcing a re-read, … Anything that skews the simulation relative to the clock leads to different results as soon as you have parallel processing (threading or multiple processes) or anything clock-based. (That's starting to be really hard; the only way I know to do it reliably is to forget all parallel processing, run everything sequentially, and not use the clock at all.)
5. Handle hardware faults/replacements in a way that doesn't affect the simulation at all (another source of the same issues as above).
6. Handle software upgrades, if any (and we know you usually can't run software for long without upgrades, to deal with scaling issues, support newer hardware, fix security bugs, …), done exactly the same way and at the same time on both copies (starting to be very hard).
7. Ensure that there is absolutely no interaction between the real world and the simulation (no admin of the simulation running admin commands that can alter data, no "cyber-attack", whether over a network or by physical access, …).
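To make point 3 concrete, here is a minimal sketch (Python, with an arbitrary seed and a toy state update, nothing taken from any real simulator) of why a shared seed keeps the "random" parts of two copies in lockstep:

```python
# Two independently running copies stay in sync on "random" events as long
# as they start from the same PRNG seed. random.Random is Python's stdlib
# Mersenne Twister; the seed and the state update are illustrative only.
import random

def simulate(seed, steps):
    rng = random.Random(seed)          # private PRNG, nothing shared with the OS or clock
    state = 0
    for _ in range(steps):
        state += rng.randrange(1000)   # "random" event, fully determined by the seed
    return state

copy_a = simulate(seed=42, steps=10_000)
copy_b = simulate(seed=42, steps=10_000)
assert copy_a == copy_b                # same seed, same history
```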
And probably other reasons I didn't think of. Running two large-scale computer systems that work exactly the same way is a hard task. Really hard. Somewhat similar to keeping quantum entanglement going in a large system. The larger your system is, the more complex it is, and the longer you run it, the more likely you are to get a tiny difference that can then snowball into a big one.
Again, we have the capability to do this today: we can run two fully identical copies of a simulation program on two computers halfway around the world from each other and keep the two simulations in perfect synchronization. There are a few difficulties, but they were solved in games a while ago; StarCraft solves this problem every time you play multiplayer, for example.
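A hedged sketch of that lockstep idea (this is not StarCraft's actual code; the step function and command feed are toy placeholders): each machine runs the same deterministic tick and applies the same per-tick command list, so only the tiny input feed ever has to be shared, never the full state.

```python
# Lockstep sketch: both machines receive the identical (tick -> commands)
# feed and run the identical deterministic step, so their states match
# without ever being transmitted.
from typing import Dict, List

def step(state: int, commands: List[str]) -> int:
    # stand-in for one deterministic simulation tick
    return state * 31 + sum(len(c) for c in commands)

def run(initial: int, input_feed: Dict[int, List[str]], ticks: int) -> int:
    state = initial
    for t in range(ticks):
        state = step(state, input_feed.get(t, []))
    return state

feed = {0: ["move"], 3: ["build", "attack"]}       # the only data sent to both machines
machine_1 = run(initial=1, input_feed=feed, ticks=10)
machine_2 = run(initial=1, input_feed=feed, ticks=10)
assert machine_1 == machine_2                      # identical states, never exchanged
```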
No, not at all; just the input feed from the sensors needs to be duplicated.
Clock issues are not relevant in a proper design, because clock time has nothing to do with simulation time, and simulation time is perfectly discrete and deterministic.
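For illustration, a minimal sketch (toy state update, made-up 10% jitter rate) of why wall-clock stalls can't change a properly designed simulation: the step depends only on the state and the tick number, never on the clock, so a delay changes when a tick finishes but not what it computes.

```python
# Simulation time vs. clock time: injecting random real-time delays between
# ticks never changes the result, because step() never reads the clock.
import time, random

def step(state: int, tick: int) -> int:
    return (state * 6364136223846793005 + tick) % (2 ** 64)   # deterministic, clock-free

def run(ticks: int, jitter: bool) -> int:
    state = 1
    for t in range(ticks):
        if jitter and random.random() < 0.1:
            time.sleep(0.001)          # stand-in for a lost packet or an ECC re-read
        state = step(state, t)
    return state

assert run(1000, jitter=False) == run(1000, jitter=True)      # delays never change the result
```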
(4, 5, 6): These deal with errors and/or downtime, and the solutions are similar. Errors are handled with error correction and by re-running that particular piece of code. Distributed deterministic simulation is well studied in computer science. Our worst-case fallback in this use case is also somewhat easier: we can just pause both simulations until the error or downtime is handled.
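A rough sketch of that fallback (the checkpoint interval, fault point, and hash-based toy tick are all made up for illustration): checkpoint periodically, and on a fault roll back and replay; determinism guarantees the replay lands on exactly the same state, so the fault is invisible to the other copy.

```python
# Checkpoint-and-replay sketch: a run that hits a fault rolls back to the
# last checkpoint, replays deterministically, and ends in the same state
# as a run that never faulted.
import hashlib

def step(state: bytes, tick: int) -> bytes:
    return hashlib.sha256(state + tick.to_bytes(8, "big")).digest()   # deterministic toy tick

def run_with_fault(ticks, fault_at=None):
    state, checkpoint, checkpoint_tick = b"init", b"init", 0
    t = 0
    while t < ticks:
        if t % 100 == 0:
            checkpoint, checkpoint_tick = state, t     # periodic checkpoint
        if fault_at is not None and t == fault_at:
            state, t = checkpoint, checkpoint_tick     # fault detected: roll back and replay
            fault_at = None
            continue
        state = step(state, t)
        t += 1
    return state

# the faulty run rolls back to tick 500, replays, and still ends in the same state
assert run_with_fault(1000) == run_with_fault(1000, fault_at=537)
```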
(7): This is important even if you aren't running two copies.
Running two large-scale computer systems that work exactly the same way is a hard task. Really hard. Somewhat similar to keeping quantum entanglement going in a large system.
Not even remotely close. One is part of current technical practice, the other is a distant research goal.
This argument hinges on how fault-tolerant you regard identity to be. You cannot isolate the computers from all errors. Let's say part of the error correction involves comparing the states of the two computers: would any correction be "death" for one of the simulated yous?
Bottom line: not interesting; revisit the question when we have more data.