Interesting take, but I’ll note that these are not acausal, just indirect-causal. Voting is a good example—counts are public, so future voters KNOW how many of their fellow citizens take it seriously enough to participate.
In all of these examples there is a signaling path to future impact, which humans are perhaps over-evolved to focus on.
Right. Nothing that happens in the same Hubble volume can really be said to not be causally connected. Nonetheless I like the point of the OP even if it’s made in an imprecise way.
Hmmmm. I agree that there is a signal path to future impact (at least in voting). Two responses there:
1. There isn’t such a signal in recycling. I have no idea how much my town recycles. Ditto for carbon offsets: how many of my closest friends offset the carbon from their flights? I have no idea.
2. Counts being public tells me how many people voted, but there’s something a little funny there. There’s almost no signal from my vote in that total (concretely, I don’t think my vote changes the number from one that tells other people “voting isn’t worth it” to one that says “voting is worth it”). I notice I’m confused about how to think about this, though, and maybe you can clarify/expand on your indirect-signal point?
I don’t claim that signaling is the only path, nor that humans are correct in their decision-making on these topics. I only claim that there are causal reasons for the choices which explain things better than acausal coordination.
Mostly I want to support your prior that “acausal coordination is a weird thing that AIs might do in the future (or more generally that may apply in very rare cases, but will be extremely hard to find clear examples of)”.
I guess I’m just not following what the causal reasons are here?
If the counts weren’t public until after voting had closed, do you think people would vote significantly differently?
My instinct says they wouldn’t.