It feels to me like people in our community aren’t being skeptical enough, or pushing back enough, on the idea of acausal coordination for humans. I’m kind of confused about this, because it seems like a weirder idea with weaker arguments behind it than, for example, the importance of AI risk, which does get substantial skepticism and pushback.
In an old post I argued that for acausal coordination reasons it seems as if you should further multiply this value by the number of people in the reference class of those making the decision the same way (discounted by how little you care about strangers vs. yourself).
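To make the rule concrete, here is a minimal sketch of the multiplication it describes. All of the numbers (the value of a vote, the time cost, the reference-class size, the discount for strangers) are hypothetical placeholders chosen purely for illustration, not claims from the original post.

```python
def naive_expected_value(personal_benefit, cost):
    """Ordinary single-agent cost/benefit analysis: benefit minus cost."""
    return personal_benefit - cost


def acausal_expected_value(personal_benefit, cost,
                           reference_class_size, care_for_strangers):
    """The rule described above: count the benefit once for yourself,
    plus once per other person assumed to be deciding 'the same way',
    discounted by how much you care about strangers vs. yourself."""
    correlated_benefit = personal_benefit * (
        1 + (reference_class_size - 1) * care_for_strangers
    )
    return correlated_benefit - cost


# Hypothetical numbers: a vote worth $1 to you personally, costing $20
# of time, with 10,000 correlated deciders each weighted at 1% of yourself.
print(naive_expected_value(1.0, 20.0))                        # -19.0: don't vote
print(acausal_expected_value(1.0, 20.0, 10_000, 0.01))        # positive: vote
```

On the naive analysis the vote is clearly not worth it; multiplying through the assumed reference class flips the sign, which is exactly why the size and membership of that class (the subject of the objections below) carries so much weight.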
But if “the same way” includes not only the same kind of explicit cost/benefit analysis but also the step “further multiply this value by the number of people in the reference class of those making the decision the same way”, then the number of people in this reference class must be tiny, because nobody is doing this when deciding whether to wear a bike helmet.
Suppose two people did “further multiply this value by the number of people in the reference class of those making the decision the same way”, but their decision-making processes are slightly different, e.g., they use different heuristics for things like finding sources for the numbers that go into the cost/benefit analysis. I don’t know how to figure out whether they are still in the same reference class, or how to generalize beyond “same reference class” when the agents are humans as opposed to AIs (and even for the latter we don’t have a complete mathematical theory).
“People talk about this argument mostly in the context of voting.”
I’m skeptical about this too. I’m not actually aware of a good argument for acausal coordination in the context of voting. A search on LW yields only this short comment from Eliezer.