Darn, you beat me to it! Given that your decision and others’ decisions stem from a common cause, and that you are highly correlated with them (compared to chance), your decision is informative about their decisions. (You can think of it as deciding which world you “wake up” in.) I had elaborated before about how to apply this reasoning to PD-like problems:
In a world of identical beings, they would all “wake up” from any Prisoner’s Dilemma situation to find that they had all defected, or all cooperated. Viewed in this light, it makes sense to cooperate, since it means waking up in the pure-cooperation world, even though your decision to cooperate did not literally cause the other parties to cooperate (even if it may feel that way).
Making the situation more realistic does not change this conclusion. Imagine you are positively, but not perfectly, correlated with the other beings, and that you go through thousands of PDs at once with different partners. In that case, you can defect and still wake up to find partners who cooperated; maybe many of them. But from the fact that you regard it as optimal to always defect, it follows that you will wake up in a world with more defecting partners than if you had regarded cooperation as optimal in such situations.
As before, your decision does not cause others to cooperate, but it does influence what world you wake up in.
Also, if I go the opposite route, and use Schwitzgebel’s causal model and decision theory, that’s not a good argument to justify voting either: with a population of 100,000,000, you actually have far less than a 1e-8 chance of swinging the outcome, because under this causal model the other votes are extremely unlikely to split exactly 50/50.
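To make that last point concrete, here is a minimal sketch (my own illustration, not from Schwitzgebel) of the tie probability under a simple binomial model: each of the other N voters independently votes for candidate A with probability p. Your vote is pivotal only on an exact 50/50 split, and the tie probability collapses once p moves even slightly off 0.5:

```python
import math

def log_tie_prob(n_voters, p):
    # Natural-log probability that an even number n_voters of independent
    # voters, each voting A with probability p, split exactly 50/50.
    # Computed via log-gamma to avoid overflow for huge n_voters.
    k = n_voters // 2
    return (math.lgamma(n_voters + 1) - 2 * math.lgamma(k + 1)
            + k * math.log(p) + k * math.log(1 - p))

N = 100_000_000

# At p = 0.5 exactly, a tie is "only" about 1-in-12,500 (~8e-5),
# which is far *better* than 1e-8:
print(math.exp(log_tie_prob(N, 0.5)))

# But a mere 51/49 lean in the electorate makes a tie astronomically
# unlikely -- the base-10 log of the tie probability is around -8700:
print(log_tie_prob(N, 0.51) / math.log(10))
```

So the "1 in N" heuristic for pivotality only holds if you are nearly certain the electorate is balanced at exactly 50/50; under almost any other belief about p, the pivot probability is vastly smaller than 1e-8.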