This is a version of Pascal’s mugging.
And also, given what we know of the universe, I don’t think there is a method of becoming trapped with zero chance of escape. Trapped for a very long period, maybe, but not eternally.
What if you get trapped in a loop of mind-states, each horrible state leading to the next until you are back where you started?
You probably could/would subjectively end up in a non-looping state. After all, you had to have multiple possible entries into the loop to begin with. Besides, it’s meaningless to say that you go through the loop more than once (remember, your mind can’t distinguish which loop it is in, because it has to loop back around to an initial state).
Whether you have multiple possible entries into the loop is irrelevant; what is important is whether you have possible exits.
As to your second point, does that mean it is ethical to run a simulation of someone being tortured as long as that simulation has already been run sometime in the past?
Possible exits could emerge from whatever the loop gets embedded in. (see 3 below)
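The entries-versus-exits point can be made concrete with a toy model (my own sketch, not anything proposed in the thread): treat mind-states as nodes in a directed transition graph, and ask whether any state reachable from inside the loop leads somewhere that never returns. If so, the loop has an exit; if every reachable state cycles back, it doesn't.

```python
# Toy model: mind-states as a directed graph. A loop is inescapable only if
# every state reachable from it leads back into it -- i.e. no exit edge exists.

def reachable(graph, start):
    """All states reachable from `start` (including `start` itself)."""
    seen, stack = set(), [start]
    while stack:
        s = stack.pop()
        if s not in seen:
            seen.add(s)
            stack.extend(graph.get(s, []))
    return seen

def trapped_forever(graph, state):
    """True iff every state reachable from `state` can reach `state` again,
    meaning the loop offers no transition out."""
    return all(state in reachable(graph, s) for s in reachable(graph, state))

# Three horrible states cycling with no way out:
closed_loop = {"A": ["B"], "B": ["C"], "C": ["A"]}
# The same cycle, but the structure it is embedded in supplies an exit:
open_loop = {"A": ["B"], "B": ["C"], "C": ["A", "OUT"], "OUT": []}

print(trapped_forever(closed_loop, "A"))  # True
print(trapped_forever(open_loop, "A"))    # False
```

Note that adding entries to the cycle changes nothing in `trapped_forever`; only an edge leaving the cycle does, which is the point being made above.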
Assuming a Tegmarkian multiverse, if it is mathematically possible to describe an environment with someone being tortured, then in a sense it “has happened”. Whether or not a simulation which happens to have someone being tortured is ethical to compute is hard to judge. I’m currently basing my hypothetical utility function on the following guidelines:
1. If your universe is causally necessary to describe theirs, you are probably responsible for the moral consequences in their universe.
2. If your universe is not causally necessary to describe theirs, you are essentially observing events which are independent of anything you could do. Merely creating an instance of their universe is ethically neutral.
3. One could take information from a causally independent universe and put it towards good ends; e.g. someone could run a simulation of our universe and “upload” conscious entities before they become information-theoretically dead.
Of course, these guidelines depend on a rigorous definition of causal necessity that I currently don’t have, but I don’t plan to run any non-trivial simulations until I do.
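Since the comment frames these guidelines as inputs to a hypothetical utility function, they can be encoded as a toy decision rule (a sketch of my own; `causally_necessary` is a stand-in predicate for the rigorous definition the comment says does not yet exist):

```python
# Toy encoding of the guidelines above. The boolean argument is a placeholder
# for a real test of causal necessity, which is explicitly left undefined.

def moral_weight(causally_necessary: bool, consequences: float) -> float:
    """Moral weight we bear for the consequences in another universe."""
    if causally_necessary:
        return consequences  # guideline 1: we are responsible for what we make happen
    return 0.0               # guideline 2: merely instantiating it is neutral

print(moral_weight(True, -100.0))   # -100.0: torture we cause counts against us
print(moral_weight(False, -100.0))  # 0.0: an independent universe we merely observe
```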
How do you figure? I can see the relationship—the discussion of vanishingly small probabilities. The difference, however, is that Pascal’s Mugging attempts to apply those small probabilities to deciding a specific action.
There is a categorical difference, I feel, between stating that a thing could occur and stating that a thing is occurring. After all, if there were an infinite number of Muggings, at least one of them could conceivably be telling the truth.
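For readers unfamiliar with the setup, the structure of Pascal’s Mugging referred to above is that a large enough claimed stake can swamp an arbitrarily small probability in a naive expected-value calculation. A minimal sketch, with purely illustrative numbers (not drawn from the thread):

```python
# Illustrative numbers only: the claimed stake is chosen large enough that
# the naive expected value of paying comes out positive despite the
# vanishingly small probability that the mugger is honest.
p_truthful = 1e-20        # tiny chance the mugger is telling the truth
utility_at_stake = 1e30   # enormous utility the mugger claims to control
cost_of_paying = 10.0     # utility lost by handing over the wallet

ev_of_paying = p_truthful * utility_at_stake - cost_of_paying
print(ev_of_paying > 0)   # True: the huge stake swamps the tiny probability
```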
s/eternally/”the remaining history of the universe”/, then. The problem remains equivalent as a thought-experiment. The point being—as middle-aged suicides themselves demonstrate: there is, at some level, in every conscious agent’s decision-making processes, a continuously ongoing decision as to whether it would be better to continue to exist, or to cease existing.
Being denied that capacity for choice, it seems highly plausible that over a long enough timeline, nearly anyone should eventually have a problem with this state of affairs.