She estimates that she has a 1 in 1,000,000 chance of casting the deciding vote
Why is this not a confusion? It seems on the face of it that since voters’ decisions are correlated, your decision accounts for behavior of other people as well, and so you are not only casting one vote with your decision, but many votes simultaneously.
Do you believe that my decision to vote is as likely to acausally influence my opponents into voting as my supporters? If so, and if we can expect about equal amounts of both, doesn’t that produce the same problem?
I feel genuinely guilty about Prop 19’s failure precisely because the reason for my failure to vote—general procrastination and lack of organization resulting in my not registering in time—was probably correlated with similar failures by others on my side of the issue.
That’s probably a special case though.
(ETA for non-Californians: Prop 19 was a proposal to legalize the use of marijuana)
There are asymmetric versions, too: for instance, if you choose not to vote out of lack of enthusiasm, you cede the field to people who are more enthusiastic about their candidate. This effect would help candidates with special-interest appeal (a smaller base of more enthusiastic voters) against candidates with more general (but weaker) appeal.
For example, if the reason you were considering not voting was bad weather on election day, and you managed to discard that reason as one you won’t be moved by in a voting decision, this decision would be common to many people irrespective of their candidate. By deciding to vote anyway, you establish that people in similar situations do vote.
This additionally calls into question one vote as a lower bound on the influence of your decision, making it an outright useless figure.
Right, I agree with that. But let’s say I’m a Democrat. If I choose to go, maybe a thousand Democrats and a thousand Republicans all choose to go, for a net gain of zero. If I choose to stay home, a thousand Democrats and a thousand Republicans choose to stay home, for a net gain of zero.
Either way, the net gain is zero. So why bother voting?
If it’s common knowledge that every eligible voter is using UDT, I think the outcome might be that everyone chooses a mixed strategy: vote with probability p (for some fairly small p, say p < 0.1) and stay home with probability 1-p. This way, the outcome of the election is almost certainly the same as if everyone votes, but the cost is much smaller.
Caveats: I don’t know how to derive this mathematically from the stated assumption, and I have little idea how to apply this type of reasoning to humans. Actually it still seems plausible to me that E(total number of votes | I vote) - E(total number of votes | I don’t vote) is near 1 and therefore CDT-type (“deciding vote”) reasoning is a good approximation for my actual situation.
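The mixed-strategy claim can be sanity-checked by simulation. The sketch below is a toy model with invented faction sizes and turnout probability, not a derivation from UDT: it estimates how often the full-turnout winner still wins when every voter turns out only with probability p.

```python
import random

def winner_preserved(n_a=52_000, n_b=48_000, p=0.05, trials=50, seed=0):
    """Fraction of trials in which faction A (the full-turnout winner)
    still wins when each eligible voter turns out with probability p."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        votes_a = sum(rng.random() < p for _ in range(n_a))
        votes_b = sum(rng.random() < p for _ in range(n_b))
        wins += votes_a > votes_b
    return wins / trials
```

With these (invented) numbers the expected margin is 200 votes against a standard deviation of about 69, so A wins in roughly 99.8% of trials while only about 5% of the total voting cost is paid. Both the margin and the noise grow with electorate size, but the margin grows faster, so larger electorates preserve the outcome even more reliably.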
Why do you privilege that hypothesis?
I don’t think privileging the hypothesis is the problem here. While it is unlikely that the acausal effects on Republicans and Democrats are exactly balanced (a hypothesis we should not be privileging), without asymmetric information about them we should assume that any probability of a given margin of more Republicans being influenced is balanced by an equal probability of the same margin of more Democrats being influenced, so the expected influence on each group is still balanced.
The problem is that asymmetric information is being ignored.
Yes, see my reply to Larks. The problem was that Yvain’s comment doesn’t admit the interpretation of referring to zero expected effect. And having exactly balanced influences is a very narrow hypothesis with no support, hence unduly privileged.
The fact that everyone else on the thread interpreted it that way shows that it does.
If that was the intended interpretation, mystery solved!
Because the distribution of Democrats and Republicans you acausally influence is symmetric around 1:1.
If, as in this hypothetical, Yvain is a Democrat, then he is more representative of Democrats than Republicans, and therefore is likely to acausally influence more Democrats.
I could see that as being true if my reason for not voting was “Obama just doesn’t inspire me that much,” but what about in the originally mentioned case where my reason is bad weather? Do you think Democrats and Republicans are different enough that their algorithms for dealing with bad weather differ in a consistent way?
“Obama just doesn’t inspire me that much” and “bad weather” are simplified stories someone might tell to explain their behavior. But “Does Obama inspire me enough to deal with the bad weather?” is closer to how the decision is made.
I would not rule out there being some correlation with willingness to go out in bad weather.
I could imagine a world where Democrats and Republicans really run the same algorithm, with different parameters pointing to different political entities that are close analogues of each other, in which case a Democrat would acausally influence Republicans as much as Democrats. But this seems to be a highly specific hypothesis that I would not favor, and it does not fully fit the actual asymmetries that can be observed.
With a very big ceteris paribus in there somewhere. (The relevance of Yvain being a Democrat is that we may expect other people with the same political affiliation to be more likely to also share voting technique. Apart from making that inference possible, the similarity is not relevant.)
Yes, I realise this. But the difference between Republicans and Democrats is likely to be so small that this is small consolation.
On the other hand, Democrats and Republicans arguably use different value systems.
These hypotheses are different: that you will have zero effect, and that you will have some effect of unknown sign and magnitude, with expected value of zero. I object to the former, not necessarily to the latter (note that the expected absolute value of the effect is bound to be non-zero in the latter case). To give an estimate of the nature of the effect, we need to consider specific reasons that moved your decision.
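The distinction between zero effect and zero expected effect can be illustrated numerically. In the toy model below (invented numbers: your decision sways each of n partisans on each side independently with the same small probability p), the expected margin is zero but the expected absolute margin is not:

```python
import random

def margins(n=1_000, p=0.01, trials=1_000, seed=2):
    """Toy model: my decision acausally sways each of n Democrats and
    n Republicans independently with the same probability p (invented
    numbers).  Returns (mean margin, mean absolute margin)."""
    rng = random.Random(seed)
    ms = []
    for _ in range(trials):
        dems = sum(rng.random() < p for _ in range(n))
        reps = sum(rng.random() < p for _ in range(n))
        ms.append(dems - reps)
    return sum(ms) / trials, sum(abs(m) for m in ms) / trials
```

The mean margin hovers near 0, but the mean absolute margin comes out around 3.5 votes: a symmetric influence has zero expected sign, not zero expected size.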
Could you please tell me what “to establish” means in the last sentence?
(Your comment made me spit out my tea. I know almost nothing about U/TDT.)
If my decision process uses UDT-type reasoning, do I have a chance of acausally influencing people who don’t know about UDT-type reasoning?
#lesswrong
There’s new grass planted in your apartment block’s front yard. If everyone walks over it, it will die, but if just a couple of people walk over it, it’ll be okay. Your way would be shorter if you walked over the grass. (A tragedy-of-the-commons situation.)
And you’ve read about funky decision theories on Less Wrong, and decide to avoid the grass because you’ve decided that you follow TDT.
Does this acausally make the other residents avoid the grass as well because they decide in approximately the same way when encountering the grass, or does it not because they haven’t even heard of TDT?
What if all the residents were LW posters??
One thing I’ve long wondered: in cases like these is TDT equivalent to your mom saying ‘and what if everyone walked on the grass?’
I think that’s exactly what you would go around asking yourself if you were a TDT-using human in a community of TDT-using humans.
No, although it is often used in that sort of way.
This is actually a good question. Gary Drescher seems to think you can, but I think Eliezer is more skeptical.
Is this a topic in Good and Real?
Yes– it’s in the account of ethics, near the end.
I’m saving the decision theory apparatus (which actually multiplies the expected payoff of both political and non-political altruistic expenditures) for a later post. I couldn’t fit everything into the first one.
Then you should’ve made clear that “deciding vote” is actually a lower-bound estimate, and shouldn’t be interpreted as a classical “deciding vote”.
I added some clarifications.
Ah, didn’t see this earlier.
I don’t think it multiplies the expected payoff for both in the same way. Some Bostromian division-of-responsibility principle should apply in both cases. The apparent gains are from the probability of making an important shift via group action where individual action would be unlikely to go over a tipping point, not because you’re multiplying by the number of people involved.
How does one go about computing E(total number of votes | I vote) - E(total number of votes | I don’t vote)?
No idea, but “deciding vote” is not it.
Maybe we could ask Omega. He would know.
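Short of asking Omega, one can at least write down a toy model and estimate the difference by simulation. The sketch below is a pure assumption (a made-up correlation parameter rho, not anyone's endorsed model of voters): each other voter mirrors your decision with probability rho and otherwise votes independently.

```python
import random

def vote_shift(n_others=2_000, rho=0.005, q=0.5, trials=500, seed=1):
    """Estimate E(total votes | I vote) - E(total votes | I abstain).

    Toy model (pure assumption): each other voter runs my decision
    algorithm with probability rho, mirroring my choice; otherwise they
    vote independently with probability q.  Reusing the same random
    seed for both runs makes the independent voters cancel exactly.
    """
    def mean_total(my_vote):
        rng = random.Random(seed)
        acc = 0
        for _ in range(trials):
            votes = my_vote
            for _ in range(n_others):
                if rng.random() < rho:
                    votes += my_vote            # correlated voter mirrors me
                else:
                    votes += rng.random() < q   # independent voter
            acc += votes
        return acc / trials

    # Analytically this is 1 + rho * n_others (11 with the defaults),
    # i.e. strictly more than the single "deciding vote".
    return mean_total(1) - mean_total(0)
```

Under this model the shift is 1 + rho × n_others, so whether CDT-style "deciding vote" reasoning is a good approximation reduces to whether rho × n_others is small relative to 1.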
But my vote doesn’t even acausally affect others’ votes: no one’s thinking “I’ll only vote if Will Newsome does”, their algorithm is “I’ll only vote if lots of other people do”, and lots of other people will vote whether I do or not. Sure, if everyone had my decision theory it’d be a tragedy of the commons, but realistically the chance is still one in a million, or maybe very slightly better. Thus the notion of “deciding vote” is only a very little bit confused. Am I wrong?
Acausal influence stems from other processes similar to you. This can be a simulated version of you, on whose action the simulating agent’s choices depend. Or it can just be someone else like you, who’s likely to some degree to decide the same thing for some of the same reasons.
“Acausal influence” is superficially a contradiction, and this phrase deserves skeptical scrutiny.
The only sort of “influence” I can think of, that might defensibly be described as acausal, is the “influence” of an object (actual or possible) which is being imagined or otherwise represented in a non-perceptual way (i.e. the representation was not being caused by sense impressions ultimately caused by the object itself). But even then there may be a “causal” interpretation of where the representation’s properties came from—it’s just that these would be “logical causes”. A representation of the Death Star has some of its properties because otherwise it wouldn’t be a representation of the Death Star; it would be a representation of something else, or not a representation at all.
There seems to be a duality here. The physical properties of a physical symbol will have physical causes, while the semantic properties will have “logical” causes. I don’t know how to think about these logical causes correctly—it doesn’t seem right to say that they are caused by objects in other possible worlds, for example. But isn’t the talk of acausal anything due simply to ignoring logical causes of properties at the semantic level?
I don’t think it’s worthwhile to fight the terminology: ‘acausal’ makes sense as opposed to ‘causal’ as in ‘causal decision theory’. I think it’s pretty sensible and defensible, even if ‘timeless’ might’ve been a better choice.
No, the more I think about it, the more I think there is a serious problem here.
“Superrationality” is just a situation in which a certain bias—a certain deviation from actual rationality—is rewarded, when enough other people have the same bias. If a bunch of people all using a “superrational decision theory” manage to achieve the big collective payoff they sought by cooperating, it’s only because of the contingent fact that they happened to have a majority. And under that circumstance, ordinary decision theory would tell you to go with the flow and choose with that majority as well!
Superrationality is either an attempt to solve coordination problems through magical thinking, or it’s a fancy name for visibly favoring altruism in the hope that others will too, or it’s a preference for altruistic terminal values disguised as an appeal to rational self-interest.
A majority, or whatever number of cooperating people happens to be sufficient to achieve whatever goal they are trying to achieve. Because of the advantages from cooperation, the superrational contingent will often not need to be larger than the remainder.
Not in a prisoner’s dilemma.
Quoting from Wikipedia because I have no real expertise on decision theory:

Note that a superrational player playing against a game-theoretic rational player will defect, since the strategy only assumes that the superrational players will agree. A superrational player playing against a player of uncertain superrationality will sometimes defect and sometimes cooperate.
How exactly does superrationality differ from membership in the Club Of Always Colluding With Each Other?
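The quoted behavior is small enough to state as code. This is a toy rendering with invented payoff numbers, and an invented `opponent_type` parameter standing in for the common-knowledge condition; it is not a general decision theory:

```python
# Payoffs (mine, theirs) for a one-shot Prisoner's Dilemma.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def superrational_move(opponent_type):
    """Move of a superrational player given what it knows about the
    opponent: 'superrational' or (game-theoretic) 'rational'."""
    if opponent_type == 'superrational':
        # Both reason identically, so only the mirrored outcomes
        # (C, C) and (D, D) are reachable; pick the better one.
        return 'C' if PAYOFF[('C', 'C')][0] > PAYOFF[('D', 'D')][0] else 'D'
    # Against a known game-theoretic player, who will defect,
    # defection is the best reply.
    return 'D'
```

Which is one way of putting the "Club" question sharply: the cooperation is conditional on recognizing the other party as running the same reasoning, not on an explicit membership list.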
It’s only necessary for you and other people to make a decision for the same reasons. These reasons can be rather abstract and simple (except for the human universal component) and move many people in the same way.
I agree… tentatively. I haven’t yet spent much time considering the idea of acausal influence in its most general form, but I’m not sure I see how it would apply here; you can have some pre-election influence by virtue of what sort of person you are (or seem to be), but when it’s election day, it seems like you should be able to decide to vote or not vote without your decision retroactively implying too much about what other things you could have caused.
I realize that sounds exactly like the argument for two-boxing, but I’m not convinced the causal structure is similar enough for the analogy to be valid.
(I’ve previously had vaguely relevant thoughts about the expected payoff of one vote. I should expand on that at some point...)
If you acausally influence other people to vote, you’ll also acausally influence them to spend time doing so. (And since they’re like you, their time is as valuable as yours.) To a first approximation, the expected cost and benefit are proportional to the naïve (ignoring acausal influence) estimate. So the question of whether it’s worth the effort should come out the same.
Other people’s time is not as valuable as yours (to you).
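The disagreement in this exchange fits in a two-line sketch (illustrative numbers only, not anyone's endorsed model):

```python
def net_ev(benefit, cost, k):
    """Two readings of 'acausal influence multiplies the payoff by k'."""
    # Parent comment: everyone's time counts equally, so the expected
    # cost scales by k too and the sign of the net EV never changes.
    ev_symmetric = k * benefit - k * cost
    # Reply's objection: only my own time costs *me* anything, so the
    # benefit scales but the cost does not; a vote that looked
    # net-negative can flip to net-positive for large enough k.
    ev_own_cost = k * benefit - cost
    return ev_symmetric, ev_own_cost
```

For example, `net_ev(0.5, 1.0, k=10)` gives `(-5.0, 4.0)`: under the first reading the vote stays net-negative, under the second it flips sign.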
Darn, you beat me to it! Given that your decision and others’ decisions stem from a common cause, and you are highly correlated with them (compared to chance), your decision is informative about their decisions. (You can think of it as deciding which world you “wake up” in.) I had elaborated before about how to apply this reasoning to PD-like problems:

In a world of identical beings, they would all “wake up” from any Prisoner’s Dilemma situation finding that they had both defected, or both cooperated. Viewed in this light, it makes sense to cooperate, since it will mean waking up in the pure-cooperation world, even though your decision to cooperate did not literally cause the other parties to cooperate (even if you perceive it that way).

Making the situation more realistic does not change this conclusion. Imagine you are positively, but not perfectly, correlated with the other beings, and that you go through thousands of PDs at once with different partners. In that case, you can defect and wake up having found partners that cooperated; maybe there are many such partners. However, from the fact that you regard it as optimal to always defect, it follows that you will wake up in a world with more defecting partners than if you had regarded it as optimal in such situations to cooperate.

As before, your decision does not cause others to cooperate, but it does influence what world you wake up in.
Also, if I go the opposite route and use Schwitzgebel’s model and decision theory, that’s not a good argument to justify voting: with a population of 100,000,000, you actually have far less than a 1e-8 chance of swinging the outcome, because the other votes are unlikely (under this causal model) to split exactly 50/50 apart from your own vote.
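The quantitative claim is easy to check with a binomial tie probability. The sketch below assumes a simple model in which each of n other voters independently breaks for one side with probability q; this is an illustration of why any deviation of q from 0.5 crushes the tie probability, not Schwitzgebel's actual model:

```python
import math

def log10_tie_prob(n, q):
    """log10 of P(exactly n/2 of n voters pick candidate A) when each
    votes for A independently with probability q.  Computed via
    log-gamma since C(n, n/2) overflows for large n; n must be even."""
    k = n // 2
    log_p = (math.lgamma(n + 1) - 2 * math.lgamma(k + 1)
             + k * math.log(q) + k * math.log(1 - q))
    return log_p / math.log(10)
```

`log10_tie_prob(100_000_000, 0.5)` is about -4.1 (roughly one in 12,500), but `log10_tie_prob(100_000_000, 0.51)` is on the order of -8,700: under any causal model where the expected split is even slightly uneven, the "deciding vote" probability is astronomically smaller than 1e-8.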