I’ll be honest, this sentence confuses me. I don’t know what to make of it.
Maybe I was mixing two different ideas together here.
One is about dualism, the assumption that the mind can be treated as a magic box that takes in only sensory inputs and outputs only motor signals to the muscles. The way we normally think about decision making is that our thoughts affect our subsequent decisions, which affect our subsequent actions, and those actions affect what happens to us. Causal decision theory is appropriate for situations that follow this pattern (causal, because the decision causes the action which causes the result). All you have to worry about when you make decisions is what effect your actions will have.

But imagine that someone could see inside your brain and decipher what you’re thinking about. Now even before you’ve decided what to do, your thoughts have affected the person reading your mind. They can react to what you were thinking in a way that affects you, even without you taking any action. The magic box assumption is broken because your mind has let something leak out besides muscle signals. Now, when you make a decision about how to act, you have to take into account not only how those actions will affect the world, but also how the thought process behind them will affect the mind-reader who is observing them. (In the OP, the polygraph plays the role of mind reader; in Newcomb’s problem, the perfect predictor does.)
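To make the predictor case concrete, here is a minimal sketch (my own toy numbers, not anything from the original comment) of Newcomb-style payoffs when the predictor reads your decision procedure rather than merely reacting to your actions:

```python
# Toy model: the predictor observes your policy (your "leaked" thought
# process), not your action, and fills the boxes before you choose.

def newcomb_payoff(policy, predictor_accuracy=1.0):
    """Expected payoff of a policy ('one-box' or 'two-box') when the
    predictor reads the policy itself and fills the boxes accordingly.
    Box A always holds $1,000; box B holds $1,000,000 iff the predictor
    expects one-boxing."""
    if policy == "one-box":
        # With probability `predictor_accuracy` the predictor foresaw this
        # and filled box B.
        return predictor_accuracy * 1_000_000
    else:
        # You take both boxes; box B is full only if the predictor erred.
        return 1_000 + (1 - predictor_accuracy) * 1_000_000

# With a perfect predictor, one-boxing wins despite being "causally"
# dominated, because the policy itself is an input to the world:
one = newcomb_payoff("one-box")   # 1,000,000
two = newcomb_payoff("two-box")   # 1,000
```

The point of the sketch is just that once the predictor conditions on the policy, the expected value of each policy includes the predictor's reaction to it, which is exactly the channel the magic-box assumption rules out.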
The other idea, which is related, is that your thought process may “affect” the world non-causally and even backwards in time. I use the scare quotes because of course if it’s not causal it’s not really affecting things—it’s really just a correlation. But there are hypotheticals where it could seem a lot like a causal effect because the correlation with your thought process is perfect or near-perfect. The twin prisoner’s dilemma is a good example of this. It relies on knowing that there’s a perfect copy of yourself out in the world. Since it’s a perfect copy (and the setup of the problem is symmetric; you and the copy both encounter identical situations), you know that the copy will decide whatever you decide. This is true even if the copy makes its decision before you make yours. If you decide to cooperate, then you will find that it already cooperated.

Likewise in Newcomb’s problem: time doesn’t have to be a puddle in order for one-boxing to make sense. You cannot cause Omega to predict that you will one-box, because it already happened; but if you decide to one-box, then you always were the kind of person who would decide to one-box in this situation—effectively Omega had a near-perfect copy of you that it could observe when it made the prediction, even if the copy was just in its head, and just like in the twin prisoner’s dilemma, that copy would have decided whatever you end up deciding. By choosing the decision process used by you and all copies of you and perfect predictions of you, you constrain the past decisions of those copies, which may in turn causally affect what situations you encounter.
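The twin case can be sketched the same way (payoff numbers are my own standard choices, not from the original comment): with a perfect copy in a symmetric setup, the off-diagonal outcomes of the prisoner’s dilemma are simply unreachable, so your choice selects between only two worlds.

```python
# Standard prisoner's dilemma payoffs for (you, twin): T > R > P > S.
PAYOFF = {
    ("C", "C"): (3, 3),  # mutual cooperation (R)
    ("C", "D"): (0, 5),  # you cooperate, twin defects (S, T)
    ("D", "C"): (5, 0),  # you defect, twin cooperates (T, S)
    ("D", "D"): (1, 1),  # mutual defection (P)
}

def twin_outcome(decision):
    """With a perfect copy in a symmetric setup, the twin's decision is
    necessarily identical to yours, even if it was made earlier, so only
    the diagonal entries of the payoff table can occur."""
    return PAYOFF[(decision, decision)]

# Choosing C yields (3, 3); choosing D yields (1, 1). Cooperating is
# better, even though defection "dominates" in the causal sense.
```

The correlation does the work here: nothing you do causes the twin's earlier decision, but deciding to cooperate is equivalent to learning that the twin already did.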
Thank you for the reply—I appreciate the time taken.
I’ll have more of a think…