Well, from the lack of a reply and the four downvotes, I take it that the question is sincere and that at least four people believe it is meaningful. So, I have two questions:
How many of the people who have responded so far seem to have understood the question?
Suppose (counterfactually) that counterfactual Omega asked counterfactual you what factual Omega should write in the factual test (ignoring what factual you actually does, of course). Should the answer (the instruction to Omega to write either “even” or “odd”) be the opposite in this counterfactual case than in the case you originally presented?
I don’t understand the problem, but it seems that you think that the result on the calculator affects some kind of objective probability that Q is even—a probability that is the same in both factual and counterfactual worlds. It doesn’t, of course. All probability is subjective. Evidence observed in one world has no influence on counterfactual worlds where the evidence did not appear.
But since I suspect you already know this, it seems likely that I simply don’t have a clue what your question was and why you decided to ask it in that way.
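The claim that evidence moves only subjective probability can be made concrete with a toy Bayes-rule calculation. A minimal sketch, assuming for illustration a 50% prior and a 99%-reliable calculator (neither figure comes from the original problem):

```python
# Toy Bayesian update: how a calculator reading shifts a subjective
# probability that Q is even. The 0.99 reliability and 0.5 prior are
# assumed figures, for illustration only.

def posterior_even(prior_even: float, reading: str, reliability: float) -> float:
    """P(Q is even | calculator reading), via Bayes' rule."""
    if reading == "even":
        p_reading_given_even = reliability
        p_reading_given_odd = 1 - reliability
    else:
        p_reading_given_even = 1 - reliability
        p_reading_given_odd = reliability
    numerator = p_reading_given_even * prior_even
    denominator = numerator + p_reading_given_odd * (1 - prior_even)
    return numerator / denominator

# In the factual world the display says "even"; in the counterfactual
# world it says "odd". The two observers end up with very different
# subjective probabilities about the very same necessary fact.
print(posterior_even(0.5, "even", 0.99))  # ~0.99
print(posterior_even(0.5, "odd", 0.99))   # ~0.01
```

The point of the sketch is only that the posterior is a function of the evidence actually observed; nothing about Q itself changes between the two worlds.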
Two more questions. As in the original scenario, but instead of an unreliable calculator, you have a reliable (so far) theorem prover. Type in a proposition to be proved and hit the “ProveIt” button, and immediately the display shows “Working”. Then, an unpredictable amount of time later, the display may change to show either “Proven” or “Disproven”. So, the base case here is that you type “Q is even” into the device and hit “ProveIt”. You plan to only allow 5 minutes for the device to find a proof, and then to just guess, but fortunately the display changes to “Proven” in 4 minutes. But then just as you finish writing “Even” on your test paper, Omega appears.
This time, Omega asks you to consider the counterfactual world in which the device still shows “Working” after 5 minutes. Should counterfactual Omega still write “Even” on the test?
In a different Omega-suggested counterfactual world, a black swan flies in the window after 4 1⁄2 minutes and the display shows “Disproven”. You know that this means that either (a) arithmetic is inconsistent, (b) the theorem-prover device is unreliable, or (c) Omega is messing with you. Does thinking about this situation cause you to change your answer to the previous question?
My opinion: Evidence, counter-evidence, and lack of evidence have no effect on the truth of necessary statements. They only impact the subjective probability of those statements. And subjective probabilities cannot flow backward in time (surviving the erasure of the evidence that produced those subjective probabilities). Even Omega cannot mediate this kind of paradoxical information flow.
This time, Omega asks you to consider the counterfactual world in which the device still shows “Working” after 5 minutes. Should counterfactual Omega still write “Even” on the test?
It should write whatever you would write if you observed no answer; in this case we are indifferent between the answers (betting at 50% confidence).
In a different Omega-suggested counterfactual world, a black swan flies in the window after 4 1⁄2 minutes and the display shows “Disproven”. You know that this means that either (a) arithmetic is inconsistent, (b) the theorem-prover device is unreliable, or (c) Omega is messing with you.
If the device is unreliable, it’s unreliable in your own event in the same sense, so your answer could be wrong (however improbably), and the original solution stands (i.e., you write “odd” in the counterfactual). Even if Omega proves to you that arithmetic is inconsistent, that won’t cause you to abandon morality, just to change the way you use arithmetic. And Omega is not lying, by the problem statement.
And subjective probabilities cannot flow backward in time (surviving the erasure of the evidence that produced those subjective probabilities). Even Omega cannot mediate this kind of paradoxical information flow.
We discussed in the other thread how your description of this idea doesn’t make sense to me. I have no idea what your statement means, so I can’t say whether I disagree with it; certainly I can’t agree with what I don’t understand.
Ok, so we seem to be in agreement regarding everything except my attempt to capture the rules with the (admittedly meaningless if taken literally) slogan “subjective probabilities cannot flow backward in time”.
It is interesting that neither of us sees any practical difference between necessary facts (the true value of Q) and contingent facts (whether the calculator made a mistake) in this exercise. The reason, apparently, is that we can only construct counterfactuals on contingent facts (for example, observations). We can’t directly go counterfactual on necessary facts—only on observations that provide evidence regarding necessary facts. But it is impossible for observations to provide so much evidence regarding a necessary fact that we would be justified in telling Omega that his counterfactual is impossible.
But that apparently means that dragging Omega into this problem didn’t change anything—his presence just confused people. (I notice that Shokwave—the one person who you claimed had understood the problem—is now saying that the value of Q is different in the counterfactual worlds). I am becoming ever more convinced that allowing Omega into a decision-theory example is as harmful as allowing a GoTo statement into a computer program. But then, as my analogy reveals, I am from a completely different generation.
We can’t directly go counterfactual on necessary facts—only on observations that provide evidence regarding necessary facts.
Yes, we can. Omega could offer to let you control worlds where Q is actually odd.
I notice that Shokwave—the one person who you claimed had understood the problem—is now saying that the value of Q is different in the counterfactual worlds
Link? The value of Q is uncertain, and this holds in considering either possible observation.
We can’t directly go counterfactual on necessary facts—only on observations that provide evidence regarding necessary facts.
Yes, we can. Omega could offer to let you control worlds where Q is actually odd.
I want to answer “No, he can’t. Not if I am in a world in which Q is actually even. Not if we are talking about the same arithmetic formula Q in each case.” But I’m coming to realize that we may not even be speaking the same language. For example, I don’t really understand what is meant by “Omega could offer to let you control worlds where ___”. Are you suggesting that Omega could make the offer, though he might not have to deliver anything should such worlds not exist?
I notice that Shokwave … is now saying that the value of Q is different in the counterfactual worlds
Link? The value of Q is uncertain, and this holds in considering either possible observation.
Are you suggesting that Omega could make the offer, though he might not have to deliver anything should such worlds not exist?
Yes. The offer would be to enact a given property in all possible worlds of the specified event. If there are no possible worlds in that event, the requirement is met by doing nothing.
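The “doing nothing” clause is just vacuous quantification: a requirement over an empty set of worlds is satisfied trivially. A toy sketch in Python (the function names are hypothetical, for illustration only):

```python
# Vacuous satisfaction: an obligation quantified over an empty event is
# met trivially. Python's all() over an empty iterable returns True,
# which mirrors the logic exactly. Names are illustrative.

def offer_fulfilled(worlds, enact) -> bool:
    """True iff the property was enacted in every possible world of the event."""
    return all(enact(w) for w in worlds)

# If Q is actually even, the event "Q is odd" contains no possible
# worlds, so Omega fulfills the offer by doing nothing at all.
worlds_where_q_is_odd = []
print(offer_fulfilled(worlds_where_q_is_odd, lambda w: False))  # True
```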
I notice that Shokwave—the one person who you claimed had understood the problem—is now saying that the value of Q is different in the counterfactual worlds
I wish. If I understood the problem, I would be solving it. As far as I’ve noticed, he claimed I had the updateless analysis mostly right.
Suppose (counterfactually) that counterfactual Omega asked counterfactual you what factual Omega should write in the factual test (ignoring what factual you actually does, of course). Should the answer (the instruction to Omega to write either “even” or “odd”) be the opposite in this counterfactual case than in the case you originally presented?
So far, shokwave clearly gets it. Compare this to any sophisticated question asked in a language you aren’t familiar with. Here, you need to be sufficiently comfortable with counterfactuals that the number of times the word appears in a problem statement doesn’t read as a pattern of ridiculousness.
it seems that you think that the result on the calculator affects some kind of objective probability that Q is even—a probability that is the same in both factual and counterfactual worlds.
I don’t think that.
All probability is subjective. Evidence observed in one world has no influence on counterfactual worlds where the evidence did not appear.
I don’t see how “subjective” helps here. It’s not clear what sense of “influence” you intend.
Here, you need to be sufficiently comfortable with counterfactuals that the number of times the word appears in a problem statement doesn’t read as a pattern of ridiculousness.
I fully agree. Which is why I find it surprising that you did not attempt to answer the question.
It’s not clear what sense of “influence” you intend.
I intended it to include whatever causes your answer to Omega in this world to make a difference to what counterfactual Omega writes on the paper in the counterfactual world.
As in Newcomb’s problem, or Counterfactual Mugging, counterfactual Omega can predict your command (made in “actual” world in response to “actual” observations, including observing “actual” Omega), while remaining in the counterfactual world. It’s your decision, which is a logical fact, that controls counterfactual Omega’s actions.
I understand that Omega (before the world-split) can predict what I will do for each possible result from the calculator. As well as predicting my response to all kinds of logic puzzles. And that this ability of Omega to predict is the thing that permits this spooky kind of acausal influence or interaction between possible worlds.
But are we also giving Omega the ability to predict the results from the calculator? If so, I think that the whole meaning of the word ‘counterfactual’ is brought into question.
But are we also giving Omega the ability to predict the results from the calculator?
I don’t see when it needs that knowledge.
The calculator being deterministic (and so potentially predictable) won’t change the analysis (as long as it’s deterministic in a way uncorrelated with other facts under consideration), but that’s the topic of Counterfactual Mugging, not this post, so I granted even quantum randomness to avoid this discussion.
My point is that Omega, before the world split, knows what I will do should the calculator return “even”. And he knows how I will answer various logical puzzles in that case. But unless he actually knows (in advance) what the calculator will do, there is no way that he can transfer information dependent on the “even” from me in the “even” world to the paper in the “odd” world.
Omega is powerless here. His presence is irrelevant to the question. Which is why I originally thought you were Sokaling. One shouldn’t multiply Omegas without necessity.
My point is that Omega, before the world split, knows what I will do should the calculator return “even”. And he knows how I will answer various logical puzzles in that case. But unless he actually knows (in advance) what the calculator will do, there is no way that he can transfer information dependent on the “even” from me in the “even” world to the paper in the “odd” world.
Unpack “transfer information”. If Omega in “odd” world knows what you’d answer should the calculator return “even”, it can use this fact to control things in its own “odd” world, all of this without it being able to predict whether the calculator displays “even” or “odd”. Considering the question in advance of observing the calculator display is not necessary.
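This conditional-knowledge point can be sketched in code: Omega in the “odd” world evaluates your strategy at the counterfactual observation “even”, without ever needing to know what the calculator actually displayed. A minimal sketch, assuming (as discussed earlier in the thread) that a subject who sees “even” tells counterfactual Omega to write “odd”:

```python
# Counterfactual Omega's knowledge modeled as a conditional: it has your
# strategy (a function from observation to command), not the conjunction
# "you commanded X AND the display actually read even". Names and the
# particular strategy are illustrative assumptions.

def command_given_to_omega(observed_reading: str) -> str:
    """The command a subject who saw this reading gives for the
    counterfactual world (where the calculator showed the other result)."""
    return "odd" if observed_reading == "even" else "even"

def counterfactual_omega_writes() -> str:
    # Omega in the "odd" world only needs to evaluate the strategy at
    # the counterfactual observation "even"; it never learns which
    # reading the calculator "really" produced.
    return command_given_to_omega("even")

print(counterfactual_omega_writes())  # "odd"
```

The design point is that `command_given_to_omega` is a pure function of the observation, so either world’s Omega can evaluate it at any argument without knowing which observation is actual.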
If Omega in “odd” world knows what you’d answer should the calculator return “even”, it can use this fact to control things in its own “odd” world.
Yes, and Omega in “even” world knows all about what would have happened in “odd” world.
But neither Omega knows what “really” happened; that was the whole point of my question, the one in which I apparently used the word ‘counterfactual’ an excessive number of times.
Let me try again by asking this question: What knowledge does the ‘odd’ Omega need to have so as to write ‘odd’ on the exam paper? Does he need to know the conjunction (subject says to write ‘odd’ & subject sees ‘even’ on calculator)? Or does he instead need to know only the conditional (subject says to write ‘odd’ | subject sees ‘even’ on calculator)? Because I am claiming that the two are different, and that the second is all that Omega has, even if Omega knows whether Q is really odd or even.
Does he need to know the conjunction (subject says to write ‘odd’ & subject sees ‘even’ on calculator)? Or does he instead need to know only the conditional (subject says to write ‘odd’ | subject sees ‘even’ on calculator)? Because I am claiming that the two are different, and that the second is all that Omega has.
I don’t know what the first option you listed means, and agree that Omega follows the second.
Yes, and Omega in “even” world knows all about what would have happened in “odd” world.
But neither Omega knows what “really” happened
I agree, “actuality” is not a property of possible worlds (if we forget about impossible possible worlds for a moment), but it does make sense to talk about “current observational event” (what we usually call actual reality), and counterfactuals located outside it (where one of the observations went differently). These notions would then be referred to from the context of a particular agent.
You are Sokaling us, right?