You should be able to justify any particular course of action without a metaphysical commitment to the reality of unobservable components of the universe’s wave function.
A (counterfactual) agent is accelerated (very) rapidly away from you, taking with him someone you care about and leaving someone he cares about. He passes out of your future light cone. Both the agent and your loved one are now unobservable components of the universe’s wave function. You and the agent have enough information about each other that you can make predictions about each other’s behavior. Each of you can choose to be kind to the loved one of the other (at a slight net cost in utility to yourself and a significant gain to the other) or to exploit them for a slight gain to yourself. You know that the agent behaves according to UDT. Do you exploit the agent’s loved one or cooperate, being kind?
If you corrupt your model of reality such that you believe parts of the universe’s wave function don’t exist when you cannot observe them, then you will defect. You will be making a mistake. Your policy would make you Lose!
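To make the payoff structure concrete, here is a minimal sketch with made-up numbers; the comment above only says “slight cost / significant gain” versus “slight gain”, so the specific utilities (and the helper my_utility) are assumptions for illustration only.

```python
# Hypothetical utilities, chosen only to match "slight net cost to yourself,
# significant gain to the other" versus "slight gain to yourself".
KINDNESS_COST = 1    # what being kind costs the actor
KINDNESS_GAIN = 10   # what being kind gives the other party
EXPLOIT_GAIN = 1     # what exploiting gives the actor

def my_utility(my_action, their_action):
    """My total utility given both choices (the game is symmetric)."""
    utility = -KINDNESS_COST if my_action == "kind" else EXPLOIT_GAIN
    utility += KINDNESS_GAIN if their_action == "kind" else 0
    return utility

# If both parties run the same decision procedure on shared information,
# only the symmetric outcomes are reachable:
print(my_utility("kind", "kind"))        # 9  -- mutual kindness
print(my_utility("exploit", "exploit"))  # 1  -- mutual exploitation
```

Under those assumed numbers, the agent whose policy makes it predictably kind ends up with 9 rather than 1, which is the sense in which the defecting policy is said to Lose.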
My instinct is to not kill the loved one, but on virtue-ethics grounds, not because of any sort of counterfactual reciprocity argument. My understanding is that UDT is not actually computable. As a result, no possible agent can act as you describe. So this doesn’t seem like a particularly compelling thought experiment.
If I’m deciding what to do with a hostage, it makes no difference what the other party decides. What matters is my judgement of them right before we became causally separated, and I am skeptical that my decision-making after the separation is useful evidence on this point.
More broadly, I can think of lots of reasons to take counterfactual possibilities into account. But none of them require me to say that the counterfactual “really exists”. For instance, I’m worried about people judging me for being reckless, dishonorable, etc. What’s the case where I actually care about non-causal interactions?
My understanding is that UDT is not actually computable. As a result, no possible agent can act as you describe. So this doesn’t seem like a particularly compelling thought experiment.
Are you confusing UDT with AIXI? It is certainly possible for an agent to act as described; the tricky part isn’t anything to do with “UDT” but rather the possible-but-difficult task of making the predictions.
What’s the case where I actually care about non-causal interactions?
The case given is sufficient. Anyone who is capable of one-boxing on Newcomb’s problem will, if consistent, also cooperate on utility-maximisation grounds with agents that cross out of the future light cone, given the payoffs described. If they either two-box or defect then they are implementing a faulty decision algorithm. For an example that doesn’t include any potential exploitation of loved ones, see Belief in the Implied Invisible.
My understanding is that UDT requires agent A to have some prediction for what agent B will do. This is, in general, not computable. (The proof follows from Rice’s theorem.)
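As a toy illustration of the non-computability worry, here is the standard diagonalization intuition; the contrarian_agent below is a hypothetical construction for this comment, not anything taken from UDT itself.

```python
# A toy agent that consults whatever predictor it is handed and then does
# the opposite of what was predicted.
def contrarian_agent(predictor):
    predicted = predictor(contrarian_agent)
    return "exploit" if predicted == "kind" else "kind"

# Any particular computable predictor is wrong about this agent by construction.
def some_predictor(agent):
    return "kind"  # a fixed guess; any other computable rule fares no better

print(contrarian_agent(some_predictor))  # prints "exploit", not the predicted "kind"
```

Of course, this only shows that no single predictor is correct about every possible agent; it does not show that two particular, well-behaved agents cannot predict each other well enough, which seems to be the point of the “possible-but-difficult” remark above.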
Your hypothetical has nothing to do with quantum mechanics or many worlds, and everything to do with special relativity. “Unobservable components of the wavefunction”, in the many-worlds sense, are areas where a different decision was made or a different outcome was observed.
In fact, extending it to many worlds actually hurts the point you want to make. The “(counterfactual) agent” makes both decisions (exploit, be kind), and you make both decisions. Further, you can’t win in every world. Consider Newcomb’s problem: even if Omega (the predicting agent) is correct 99.99% of the time, there are worlds where one-boxing is a losing proposition (Omega got it wrong). In fact, the rule ‘two-box on Newcomb problems’ always creates two worlds: one where you are a winner and one where you are a loser.
So in many worlds, you can’t assert such a policy would be a mistake; in some worlds it is, and in some it isn’t.
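For reference, here is the branch arithmetic behind that 99.99% figure, using the conventional Newcomb payoffs ($1,000,000 in the opaque box iff one-boxing was predicted, $1,000 in the transparent box); those dollar amounts are assumed here, not taken from the thread.

```python
P_RIGHT = 0.9999  # probability that the predictor's guess matches your actual choice

# Branch outcomes for each policy under the standard (assumed) payoffs:
one_box = {"predictor right": 1_000_000, "predictor wrong": 0}
two_box = {"predictor right": 1_000, "predictor wrong": 1_001_000}

def measure_weighted_value(outcomes):
    """Average outcome, weighting each branch by how often it occurs."""
    return P_RIGHT * outcomes["predictor right"] + (1 - P_RIGHT) * outcomes["predictor wrong"]

print(measure_weighted_value(one_box))  # about 999,900
print(measure_weighted_value(two_box))  # about 1,100
```

Both policies do indeed produce a winning branch and a losing branch, as the comment says; the disagreement is over whether the measure-weighted (or expected) value is the standard by which a policy should be judged.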
Your hypothetical has nothing to do with quantum mechanics or many worlds, and everything to do with special relativity.
The hypothetical has nothing to do with quantum mechanics. It was obviously and explicitly constructed to address the specific claim being replied to, using no setup more complex than physical movement. That claim being:
You should be able to justify any particular course of action without a metaphysical commitment to the reality of unobservable components of the universe’s wave function.
It so happens that asr’s reply indicates that our disagreement regarding how to make decisions when dealing with the implied invisible is not limited to quantum mechanical considerations but also applies in this simple case. (Based on that reply) we disagree both on how to make decisions in general and how to account for the implied invisible when making decisions, even when only very mildly unintuitive physics is in play. That being the case, knowing that we additionally disagree about how to handle the implied invisible when considering quantum mechanics is completely unremarkable.
If you’ll notice, he explicitly uses the phrase “unobservable components of the universe’s wavefunction”, and the context is clearly many worlds quantum mechanics. This means your thought experiment is not at all analogous to his statement.
Your implied invisible (observer outside the light cone) is qualitatively very different from his implied invisible (unobservable components of the wavefunction). Your thought experiment shifts the focus by subtly redefining the original statement.
I’m actually with wedrifid here. I think the key point where wedrifid and I disagree is that I don’t believe agents benefit from considering any kind of acausal trade or interaction. And it turns out that if you restrict yourself to physically interacting agents, you don’t have to worry about unobservables. In contrast, if you worry about acausal interactions, it can make sense to worry about unobservables.
Your hypothetical has nothing to do with quantum mechanics or many worlds, and everything to do with special relativity.
Special relativity hangs out in a nice, flat, well-behaved Minkowski space where this sort of thing cannot happen. It takes general relativity, and specifically a universe with accelerating expansion (as ours probably is).
It can also happen in flat space, e.g. if you and the other agent are on Rindler trajectories accelerating in opposite directions, then nothing that one of you does can affect the other.
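For readers who want the underlying special-relativity fact, here is the textbook Rindler-horizon statement being invoked; the notation below is supplied for illustration, not taken from the comment.

```latex
% Worldline of an observer with constant proper acceleration a in flat spacetime:
%   x^2 - (ct)^2 = (c^2/a)^2.
% Such an observer never receives signals from events on or beyond the null
% surface x = ct (its Rindler horizon), which stays a constant proper distance
% c^2/a behind it. An observer accelerating the opposite way occupies the mirror
% wedge x < -c|t|, and the causal future of either wedge never meets the other,
% so neither observer can affect the other even though spacetime is flat.
\[
  x^{2} - (ct)^{2} = \left(\frac{c^{2}}{a}\right)^{2},
  \qquad \text{horizon: } x = ct,
  \qquad d_{\text{horizon}} = \frac{c^{2}}{a}.
\]
```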