Hmmmm. I agree that there is a signal path to future impact (at least in voting). Two responses there:
There isn’t such a signal in recycling. I have no idea how much my town recycles. Ditto for carbon offsets. How many of my closest friends offset the carbon from their flights? I have no idea.
Counts being public tell me how many people voted, but there’s something a little funny there. There’s almost no signal from my individual vote in there (concretely, I don’t think my vote changes the count from one that tells other people “voting isn’t worth it” to one that says “voting is worth it”). I notice I’m confused about how to think about this, though, and maybe you can clarify/expand on your indirect-signal point?
I don’t claim that signaling is the only path, nor that humans are correct in their decision-making on these topics. I only claim that there are causal reasons for these choices, and that those reasons explain the behavior better than acausal coordination does.
Mostly I want to support your prior that “acausal coordination is a weird thing that AIs might do in the future (or more generally that may apply in very rare cases, but will be extremely hard to find clear examples of)”.
I guess I’m just not following what the causal reasons are here?