In an old post I argued that for acausal coordination reasons it seems as if you should further multiply this value by the number of people in the reference class of those making the decision the same way (discounted by how little you care about strangers vs. yourself). This makes decisions about things that only affect you personally depend on the relative sizes of their reference classes and on total population (greater population shifts focus of the decisions further away from yourself). Your decision inflicts the micromorts not just on yourself, but on all the people in the reference class, for the proportionally greater total number of micromorts that given this consideration turn into actual morts very easily.
The idea doesn’t seem to have taken root; people talk about this argument mostly in the context of voting, where it’s comforting for the argument to hold, even though it seems to apply in general, where it demands monstrous responsibility for every tiny little thing. It’s very suspicious, but I don’t know how to resolve the confusion. Maybe it’s just psychologically unrealistic to follow through in almost all cases where the argument applies, despite its normative correctness.
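(As a concrete illustration of the multiplication in question, here is a minimal sketch; the function and all figures are mine, made up for illustration, not from the post.)

```python
def acausal_micromorts(personal_micromorts, reference_class_size, stranger_weight):
    """Micromorts your decision 'inflicts' under the acausal-coordination view:
    your own risk plus everyone else's in the reference class, discounted by
    how much you care about each stranger relative to yourself (in [0, 1])."""
    others = reference_class_size - 1
    return personal_micromorts * (1 + others * stranger_weight)

# Made-up figures: a 10-micromort choice, a million-person reference class,
# caring 1% as much about each stranger as about yourself.
solo = acausal_micromorts(10, 1, 0.0)             # just you: 10 micromorts
shared = acausal_micromorts(10, 1_000_000, 0.01)  # roughly 100,000 micromorts
```

Even with a 1% weight on each stranger, ten personal micromorts become on the order of a tenth of an expected death, which is the sense in which the micromorts turn into actual morts.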
It feels to me like people in our community aren’t being skeptical enough or pushing back enough on the idea of acausal coordination for humans. I’m kind of confused about this because it seems like a weirder idea, with weaker arguments for it, than for example the importance of AI risk, which does get substantial skepticism and pushback.
In an old post I argued that for acausal coordination reasons it seems as if you should further multiply this value by the number of people in the reference class of those making the decision the same way (discounted by how little you care about strangers vs. yourself).
But if “the same way” includes not only the same kind of explicit cost/benefit analysis but also “further multiply this value by the number of people in the reference class of those making the decision the same way”, the number of people in this reference class must be tiny because nobody is doing this for deciding whether to wear bike helmets.
Suppose two people did “further multiply this value by the number of people in the reference class of those making the decision the same way”, but their decision making processes are slightly different, e.g., they use different heuristics for finding sources for the numbers that go into the cost/benefit analysis. I don’t know how to figure out whether they are still in the same reference class, or how to generalize beyond “same reference class” when the agents are humans as opposed to AIs (and even with the latter we don’t have a complete mathematical theory).
people talk about this argument mostly in the context of voting
I’m skeptical about this too. I’m not actually aware of a good argument for acausal coordination in the context of voting. A search on LW yields only this short comment from Eliezer.
Your decision inflicts the micromorts not just on yourself, but on all the people in the reference class, for the proportionally greater total number of micromorts that given this consideration turn into actual morts very easily.
But your decision also causes the corresponding benefits to accrue to all the people in the reference class? So the decision you make should be the same, it just becomes more consequentially important.
The voting case is different because the benefits are superlinear in the number of people you affect (at least up to a point): a million people voting the same way as you probably have more than a million times the chance of swinging the election.
ETA: Never mind, misunderstood habryka’s reply, I’m basically saying the same thing. Though I still think that the case for applying the argument to voting is much stronger than the case for applying it in other decisions where benefits are linear.
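(A quick numerical check of the superlinearity claim, using my own toy model with made-up parameters: model the other voters as independent coin flips biased toward the other candidate, and compare the probability that one extra vote is decisive with the probability that a coordinated bloc of 100 is.)

```python
from math import comb

def bloc_pivot_prob(n, p, k):
    """Probability that a bloc of k extra A-votes flips the outcome, when
    n other voters each vote A independently with probability p and a tie
    counts as a loss for A."""
    def pmf(x):
        return comb(n, x) * p**x * (1 - p)**(n - x)
    # A loses without the bloc (x <= n/2) but wins with it (x > (n - k)/2):
    lo = (n - k) // 2 + 1
    hi = n // 2
    return sum(pmf(x) for x in range(lo, hi + 1))

# Made-up election: 1,000 other voters, each 45% likely to vote A.
one = bloc_pivot_prob(1_000, 0.45, 1)     # a single swing vote
bloc = bloc_pivot_prob(1_000, 0.45, 100)  # a coordinated bloc of 100
# The bloc's chance of being decisive is far more than 100 times the
# single voter's, because the bloc can overcome the expected margin.
```

In a close election the effect is roughly linear for small blocs, but when one side is expected to lose, a single vote is almost never pivotal while a large correlated bloc still can be, which is where the superlinearity comes from.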
The absolute size of a reference class only gives the problem statement for an individual decision some altruistic/paternalistic tilt, which can fail to change it. Greater relative size of a reference class increases the decision’s relative importance compared to other decisions, which on the margin should pull some effort away from the other decisions.
That the effective multiplier due to acausal coordination is smaller for non-voting decisions doesn’t inform the question of whether the argument applies to non-voting decisions. The argument may be ignored in the decision algorithm only if the reference class is always small or about the same size for different decisions.
Yeah, I agree with all of that. (I didn’t realize the point about the relative sizes of reference classes until I read your reply to habryka more carefully.)
Perhaps another way to make the point about the argument for voting being stronger is that it affects your decision making even if you are not altruistic. Here by “stronger” I mean that the argument is “more robust” or “less suspicious”.
Sure, for voting the effect on decision making is greater. I’m just suspicious of this whole idea of acausal impact, and moderate observations about effect size don’t help with that confusion. I don’t think it can apply to voting without applying to other things, so the quantitative distinction doesn’t point in a particular direction on correctness of the overall idea.
Sure, but doesn’t that apply mostly uniformly to all decisions, making tradeoffs mostly the same, just having everything have a larger magnitude? I don’t see why this is the kind of decision that should be more influenced by that line of reasoning than e.g. my heuristics around choosing what to do with my career, how I choose romantic partners, whether to get exercise, etc.
The magnitude depends on the sizes of reference classes, which differ dramatically. So some personal decisions are suddenly much more important than others simply because more people make them, and so you should allocate more resources to deciding those things in particular correctly. An exercise regimen seems like a high acausal impact decision. Another difference is that the goal the personal decisions pursue shifts from what you want to happen to yourself to what you want to happen to other people, and this effect increases with population. (Edited the grandparent to express these points more clearly.)
Hmm, I think that’s being too shallow. My decisions aren’t made in isolation; they are the result of applying general principles and broad heuristics. The biggest impact should come from other people adopting the same heuristics and principles, not just making the same object level decisions.
But if you spend more time thinking about exercise, that time cost is multiplied greatly. I think this kind of countereffect cancels out every practical argument of this type.
New information argues for a change on the margin, so the new equilibrium is different, though it may not be far away. The arguments are not “cancelled out”, but they do only have bounded impact. Compare with charity evaluation in effective altruism: if we take the impact of certain decisions as sufficiently significant, it calls for their organized study, so that the decisions are no longer made based on first impressions. On the other hand, if there is already enough infrastructure for making good decisions of that type, then significant changes are unnecessary.
In the case of acausal impact, large reference classes imply that at least that many people are already affected, so if organized evaluation of such decisions is feasible to set up, it’s probably already in place without any need for the acausal impact argument. So actual changes are probably in how you pay attention to info that’s already available, not in creating infrastructure for generating better info. On the other hand, a source of info about sizes of reference classes may be useful.
That influences sizes of reference classes, but at some point the sizes cash out in morally relevant object level decisions.