But are we also giving Omega the ability to predict the results from the calculator?
I don’t see when it needs that knowledge.
The calculator being deterministic (and so potentially predictable) won’t change the analysis (as long as it’s deterministic in a way uncorrelated with other facts under consideration), but that’s the topic of Counterfactual Mugging, not this post, so I granted even quantum randomness to avoid this discussion.
My point is that Omega, before the world split, knows what I will do should the calculator return “even”. And he knows how I will answer various logical puzzles in that case. But unless he actually knows (in advance) what the calculator will do, there is no way that he can transfer information dependent on the “even” from me in the “even” world to the paper in the “odd” world.
Omega is powerless here. His presence is irrelevant to the question. Which is why I originally thought you were Sokaling. One shouldn’t multiply Omegas without necessity.
> My point is that Omega, before the world split, knows what I will do should the calculator return “even”. And he knows how I will answer various logical puzzles in that case. But unless he actually knows (in advance) what the calculator will do, there is no way that he can transfer information dependent on the “even” from me in the “even” world to the paper in the “odd” world.
Unpack “transfer information”. If Omega in “odd” world knows what you’d answer should the calculator return “even”, it can use this fact to control things in its own “odd” world, all of this without it being able to predict whether the calculator displays “even” or “odd”. Considering the question in advance of observing the calculator display is not necessary.
> If Omega in “odd” world knows what you’d answer should the calculator return “even”, it can use this fact to control things in its own “odd” world.
Yes, and Omega in “even” world knows all about what would have happened in “odd” world.
But neither Omega knows what “really” happened; that was the whole point of my question, the one in which I apparently used the word ‘counterfactual’ an excessive number of times.
Let me try again by asking this question: What knowledge does the ‘odd’ Omega need to have so as to write ‘odd’ on the exam paper? Does he need to know (subject says to write ‘odd’ & subject sees ‘even’ on calculator)? Or does he instead need to know (subject says to write ‘odd’ | subject sees ‘even’ on calculator)? Because I am claiming that the two are different and that the second is all that Omega has. Even if Omega knows whether Q is really odd or even.
> Does he need to know (subject says to write ‘odd’ & subject sees ‘even’ on calculator)? Or does he instead need to know (subject says to write ‘odd’ | subject sees ‘even’ on calculator)? Because I am claiming that the two are different and that the second is all that Omega has.
I don’t know what the first option you listed means, and agree that Omega follows the second.
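The distinction being drawn here can be sketched concretely. In the sketch below (names and structure are illustrative, not from the discussion itself), Omega's knowledge of the conditional is modeled as the ability to evaluate the subject's policy on any hypothetical calculator reading, while knowledge of the conjunction would additionally require knowing the calculator's actual output, which Omega cannot predict:

```python
import random

def subject_policy(calculator_reading):
    """What the subject would answer, given each possible reading.
    This function is the 'conditional' knowledge: it is defined for
    every hypothetical reading, independent of which one occurs."""
    return "write 'odd'" if calculator_reading == "even" else "write 'even'"

# Omega can tabulate the policy over all hypothetical readings
# without ever observing the calculator.
omega_knows_conditional = {r: subject_policy(r) for r in ("even", "odd")}

# The actual calculator outcome is (by stipulation) quantum-random,
# so Omega has no access to it in advance.
actual_reading = random.choice(["even", "odd"])

# Knowing the conjunction would mean knowing BOTH facts together,
# which requires the actual reading as an input:
conjunction = (subject_policy(actual_reading), actual_reading)

print(omega_knows_conditional)  # available to Omega in either world
```

On this sketch, `omega_knows_conditional` is all that either Omega has, while `conjunction` is what neither Omega can compute before the split.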
> Yes, and Omega in “even” world knows all about what would have happened in “odd” world.

> But neither Omega knows what “really” happened
I agree, “actuality” is not a property of possible worlds (if we forget about impossible possible worlds for a moment), but it does make sense to talk about “current observational event” (what we usually call actual reality), and counterfactuals located outside it (where one of the observations went differently). These notions would then be referred to from the context of a particular agent.