But my vote doesn’t even acausally affect others’ votes: no one’s thinking “I’ll only vote if Will Newsome does”; their algorithm is “I’ll only vote if lots of other people do”, and lots of other people will vote whether I do or not. Sure, if everyone had my decision theory it’d be a tragedy of the commons, but realistically the chance that my vote decides the outcome is still one in a million, or maybe very slightly better. Thus the notion of “deciding vote” is only a very little bit confused. Am I wrong?
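A back-of-the-envelope way to sanity-check that “one in a million” figure is a simple binomial model: assume, purely for illustration, that every other voter votes independently and the election is an exact 50/50 toss-up, then ask how likely the others are to split evenly so that one extra ballot decides it. The sketch below is that idealized case only, not a claim about any real election:

```python
from math import lgamma, log, exp, sqrt, pi

def p_decisive(n_other_voters: int, p: float = 0.5) -> float:
    """Probability that the other voters split exactly evenly, so that one
    extra ballot decides the outcome (independent-voters binomial model).
    Computed in log space to avoid underflow for large electorates."""
    n = n_other_voters - (n_other_voters % 2)   # an odd number of others can't tie
    half = n // 2
    log_prob = lgamma(n + 1) - 2 * lgamma(half + 1) + half * log(p) + half * log(1 - p)
    return exp(log_prob)

# Illustrative only: a perfect 50/50 toss-up is the most favorable possible case.
for n in (10_000, 1_000_000):
    print(n, p_decisive(n), sqrt(2 / (pi * n)))  # exact binomial vs. Stirling approximation
```

Under that toss-up assumption the decisive-vote chance for a million voters comes out near one in a thousand rather than one in a million, but any systematic lean away from exactly 50/50 makes the tie probability collapse exponentially, which is why the one-in-a-million intuition isn’t unreasonable for a real election.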
Acausal influence stems from other processes similar to you. This can be a simulated version of you, on whose action the simulating agent’s choices depend. Or it can just be someone else like you, who’s likely to some degree to decide the same thing for some of the same reasons.
“Acausal influence” is superficially a contradiction, and this phrase deserves skeptical scrutiny.
The only sort of “influence” I can think of that might defensibly be described as acausal is the “influence” of an object (actual or possible) which is being imagined or otherwise represented in a non-perceptual way (i.e. the representation is not caused by sense impressions that were themselves ultimately caused by the object). But even then there may be a “causal” interpretation of where the representation’s properties came from—it’s just that these would be “logical causes”. A representation of the Death Star has some of its properties because otherwise it wouldn’t be a representation of the Death Star; it would be a representation of something else, or not a representation at all.
There seems to be a duality here. The physical properties of a physical symbol will have physical causes, while the semantic properties will have “logical” causes. I don’t know how to think about these logical causes correctly—it doesn’t seem right to say that they are caused by objects in other possible worlds, for example. But isn’t all the talk of anything being acausal due simply to ignoring the logical causes of properties at the semantic level?
I don’t think it’s worthwhile to fight the terminology: ‘acausal’ makes sense as opposed to ‘causal’ as in ‘causal decision theory’. I think it’s pretty sensible and defensible, even if ‘timeless’ might’ve been a better choice.
No, the more I think about it, the more I think there is a serious problem here.
“Superrationality” is just a situation in which a certain bias—a certain deviation from actual rationality—is rewarded, when enough other people have the same bias. If a bunch of people all using a “superrational decision theory” manage to achieve the big collective payoff they sought by cooperating, it’s only because of the contingent fact that they happened to have a majority. And under that circumstance, ordinary decision theory would tell you to go with the flow and choose with that majority as well!
Superrationality is either an attempt to solve coordination problems through magical thinking, or it’s a fancy name for visibly favoring altruism in the hope that others will too, or it’s a preference for altruistic terminal values disguised as an appeal to rational self-interest.
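The “contingent majority” point can be made concrete with a toy public-goods game (my own illustration with arbitrary numbers, not anything from the thread): each cooperator pays into a common pot, the pot is multiplied and split among everyone, and a cooperator only beats the everyone-defects baseline once enough others cooperate.

```python
def payoffs(n_players: int, k_cooperators: int,
            contribution: float = 1.0, multiplier: float = 3.0):
    """Toy public-goods game: each cooperator pays `contribution` into a pot,
    the pot is multiplied by `multiplier` and split evenly among all players.
    Returns (payoff to a cooperator, payoff to a defector)."""
    share = multiplier * contribution * k_cooperators / n_players
    return share - contribution, share

n = 10
for k in range(n + 1):
    coop, defect = payoffs(n, k)
    print(f"{k:2d} cooperators: cooperator {coop:+.2f}, defector {defect:+.2f}")
```

With these made-up numbers a cooperator only comes out ahead of the all-defect baseline once at least four of the ten cooperate, and a defector in the same population always earns exactly the contribution more than a cooperator does; how large a contingent suffices depends entirely on the multiplier chosen.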
If a bunch of people all using a “superrational decision theory” manage to achieve the big collective payoff they sought by cooperating, it’s only because of the contingent fact that they happened to have a majority.
A majority, or whatever number of cooperating people happens to be sufficient for whatever goal they are pursuing. Because of the advantages of cooperation, the superrational contingent will often not need to be larger than the remainder.
Not in a prisoner’s dilemma.
Quoting from Wikipedia because I have no real expertise on decision theory:
Note that a superrational player playing against a game-theoretic rational player will defect, since the strategy only assumes that the superrational players will agree. A superrational player playing against a player of uncertain superrationality will sometimes defect and sometimes cooperate.
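One way to cash out the behaviour the quoted passage describes is as a decision rule that conditions on the opponent’s type and, under uncertainty, on one’s credence that the opponent is superrational. The sketch below is my own gloss using textbook prisoner’s-dilemma payoffs (R=3, P=1, S=0), not Hofstadter’s formulation:

```python
def superrational_choice(opponent: str, p_superrational: float = 0.0) -> str:
    """Sketch of a superrational player's choice in a one-shot prisoner's dilemma.
    opponent: 'superrational', 'rational', or 'uncertain';
    p_superrational: credence that an 'uncertain' opponent is superrational."""
    if opponent == "superrational":
        return "cooperate"   # mirror reasoning: both players reach the same choice
    if opponent == "rational":
        return "defect"      # a game-theoretic rational player defects regardless
    # Uncertain opponent: a superrational opponent mirrors whatever I choose,
    # a merely rational one defects either way (payoffs R=3, P=1, S=0).
    ev_cooperate = p_superrational * 3 + (1 - p_superrational) * 0
    ev_defect = p_superrational * 1 + (1 - p_superrational) * 1
    return "cooperate" if ev_cooperate > ev_defect else "defect"
```

With these payoffs the rule cooperates exactly when the credence that the other player is superrational exceeds one third.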
How exactly does superrationality differ from membership in the Club Of Always Colluding With Each Other?
But my vote doesn’t even acausally affect others’ votes: no one’s thinking “I’ll only vote if Will Newsome does”
It’s only necessary for you and other people to make a decision for the same reasons. These reasons can be rather abstract and simple (except for the human universal component) and move many people in the same way.
I agree… tentatively. I haven’t yet spent much time considering the idea of acausal influence in its most general form, but I’m not sure I see how it would apply here; you can have some pre-election influence by virtue of what sort of person you are (or seem to be), but when it’s election day, it seems like you should be able to decide to vote or not vote without your decision retroactively implying too much about what other things you could have caused.
I realize that sounds exactly like the argument for two-boxing, but I’m not convinced the causal structure is similar enough for the analogy to be valid.
(I’ve previously had vaguely relevant thoughts about the expected payoff of one vote. I should expand on that at some point...)
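A minimal sketch of that expected-payoff calculation, under stated assumptions: the probability of a decisive vote, the value at stake, the personal cost of voting, and an optional block of correlated like-minded deciders. The linear scaling by block size is a crude illustrative assumption, and it is precisely the move being disputed in this thread:

```python
def expected_vote_payoff(p_decisive: float, stakes: float,
                         cost_of_voting: float, correlated_block: int = 0) -> float:
    """Crude expected-payoff sketch for casting one vote.
    p_decisive:       probability a single vote swings the outcome
    stakes:           value (to the voter) of the preferred outcome winning
    cost_of_voting:   time/effort cost of actually voting
    correlated_block: number of similar people whose decision is assumed
                      (contentiously) to track this one; 0 means ignore them."""
    effective_votes = 1 + correlated_block
    return effective_votes * p_decisive * stakes - cost_of_voting

# Hypothetical numbers, for illustration only.
print(expected_vote_payoff(p_decisive=1e-6, stakes=1_000_000, cost_of_voting=5))
print(expected_vote_payoff(p_decisive=1e-6, stakes=1_000_000, cost_of_voting=5,
                           correlated_block=99))
```

With these made-up numbers a lone vote has negative expected value, while granting the correlated-block assumption flips the sign; the whole question is whether that assumption is legitimate.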