But if you one-box in Newcomb’s Problem, you should take these answers more literally: the kinds of backwards causal arrows you draw are the same.
But keep in mind that this kind of control, or “backwards causality”, is all about your map, not the territory. More precisely, it is about your state of logical uncertainty, not about what the definitions you hold logically imply. If you already know the state (probability) of the thing you purport to control, then you can’t control it.
In this manner, you might have weak control over your own evolutionary adaptations (i.e. over past evolution) by having more or fewer children, provided that the regularity linking your behavior to past evolution is all you know, and that you do know of such a link (if your decisions are made using advanced considerations that your ancestors never faced, then you can’t control evolution this way). But as you learn more, you control less; or alternatively, you discover that you never actually had any control after all.
So this kind of control is often strictly illusory, and the only reason to take it seriously, to actually try to exert it, is that at that moment you honestly don’t know whether it is illusory. If you act on considerations simple enough to have actually been instantiated in the past, the control might well be real; but considering how complicated the human mind is, that would be rare, and so the extent of your control would be low.
For example, to what extent do you control other people during voting? Only to the extent that your own resolution to vote a certain way shifts your anticipation of other people voting similarly, after you take into account everything you know about other people independently of that resolution. In practice this may not be much: you already know a lot about other people without assuming your own decision, and it’s hard to (logically) connect their actions to your decision.
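The voting example can be sketched numerically. Here “control” is measured evidentially, as the shift your resolution produces in your expected number of like-minded votes, after conditioning on everything you already know. All numbers and the `correlation` parameter are illustrative assumptions, not claims about real electorates:

```python
# Toy model of evidential "control" over other voters.
# p_base: your prior that a given other person votes like you,
#         based on everything you know independently of your decision.
# correlation: how strongly (0..1) your own resolution still shifts
#         that belief after all that conditioning. As you learn more
#         about others, this residual correlation shrinks toward 0.

def anticipated_vote_shift(n_others, p_base, correlation):
    """Shift in expected like-minded votes produced by your resolution."""
    p_given_my_vote = p_base + correlation * (1.0 - p_base)
    return n_others * (p_given_my_vote - p_base)

# Knowing little about the others, a sizable correlation may remain:
print(anticipated_vote_shift(1000, 0.5, 0.10))    # shifts ~50 expected votes

# After conditioning on polls, demographics, etc., little is left:
print(anticipated_vote_shift(1000, 0.5, 0.001))   # shifts ~0.5 expected votes
```

The point the sketch makes is the one above: the more of your anticipation is already fixed by independent knowledge, the less is left for your resolution to move, and so the less “control” you have.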