Possible exits could emerge from whatever the loop gets embedded in. (see 3 below)
Assuming a Tegmarkian multiverse, if it is mathematically possible to describe an environment with someone being tortured, in a sense it “has happened”. Whether it is ethical to compute a simulation that happens to contain someone being tortured is hard to judge. I’m currently basing my hypothetical utility function on the following guidelines:
If your universe is causally necessary to describe theirs, you are probably responsible for the moral consequences in their universe.
If your universe is not causally necessary to describe theirs, you are essentially observing events which are independent of anything you could do. Merely creating an instance of their universe is ethically neutral.
One could take information from a causally independent universe and put it towards good ends; e.g. someone could run a simulation of our universe and “upload” conscious entities before they become information-theoretically dead.
Of course, these guidelines depend on a rigorous definition of causal necessity that I currently don’t have, but I don’t plan to run any non-trivial simulations until I do.