> Acausally separate civilizations should obtain our consent in some fashion before invading our local causal environment with copies of themselves or other memes or artifacts.
Aha! Finally, there it is, a statement that exemplifies much of what I find confusing about acausal decision theory.
1. What are acausally separate civilizations? Are these civilizations we cannot directly talk to, so we model their utility functions and their modelling of our utility functions, etc., and treat that as a proxy for interviewing them? (I’ve put a toy sketch of the picture I currently have in mind right after question 4.)
2. Are these civilizations we haven’t met yet but might someday, or are these ones that are impossible for us to meet even in theory (parallel universes, far future, far past, outside our Hubble volume, etc.)? Because other acausal stuff I’ve read seems to imply the latter in which case...
2a. If I don’t care what civilizations do (including “simulating” me) unless it’s possible for me or people I care about to someday meet them, do I have any reason to care about acausal trade?
3. Can you give any specific examples of what it would be like for an acausally separate civilization to invade our local causal environment, examples that do NOT depend in any way on simulations?
4. I’ve heard that acausal decision theory has practical applications in geopolitics, though unfortunately without any real-world examples attached. Do you know of any concrete examples of using acausal trade or acausal norms to improve outcomes when dealing with ordinary physical people with whom you cannot directly communicate?
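To make question 1 concrete, here is the toy picture I currently have in my head of “modelling their modelling of us”. It’s my own construction, not something taken from any post, so the names and numbers are made up and it may well be a caricature of the real thing: two agents in a one-shot prisoner’s dilemma who never communicate, but who each estimate how likely it is that the other’s decision procedure mirrors their own.

```python
# Toy sketch (my own construction): "acausal" cooperation in a one-shot
# prisoner's dilemma, deciding by reasoning about an agent I never talk to.
# Payoffs for me, given (my action, their action).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def decide(p_other_mirrors_me):
    """Pick the action with the highest expected payoff, given my estimated
    probability that the other agent's decision procedure is (close enough to)
    a copy of mine, so that it outputs whatever I output."""
    def expected(my_action):
        if_mirrored = PAYOFF[(my_action, my_action)]
        if_unrelated = PAYOFF[(my_action, "D")]  # otherwise assume it just defects
        return (p_other_mirrors_me * if_mirrored
                + (1 - p_other_mirrors_me) * if_unrelated)
    return max(("C", "D"), key=expected)

print(decide(0.9))  # 'C' -- confident our reasoning is correlated, so cooperate
print(decide(0.1))  # 'D' -- little confidence in the correlation, so defect
```

Is that roughly the mechanism, with “modelling their utility function” standing in for my confidence that our decisions are correlated? Or am I missing something essential?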
I realize you probably have better things to do than educate an individual noob about something that seems to be common knowledge on LW. For what it’s worth, I might be representative of a larger group of people who are open to the idea of acausal decision theory but who cannot understand the existing explanations. You seem like an especially down-to-earth and accessible proponent of acausal decision theory, and you seem to care about it enough to have written extensively about it. So if you can help me bridge the gap to fully getting what it’s about, it may help both of us become better at explaining it to a wider audience.
Update:
I went and read the background material on acausal trade, and I’ve narrowed down even further where I’m confused. It’s this paragraph:
> Another objection: Can an agent care about (have a utility function that takes into account) entities with which it can never interact, and about whose existence it is not certain? However, this is quite common even for humans today. We care about the suffering of other people in faraway lands about whom we know next to nothing. We are even disturbed by the suffering of long-dead historical people, and wish that, counterfactually, the suffering had not happened. We even care about entities that we are not sure exist. For example: We might be concerned by a news report that a valuable archaeological artifact was destroyed in a distant country, yet at the same time read other news reports stating that the entire story is a fabrication and the artifact never existed. People even get emotionally attached to the fate of a fictional character.
My problem is the lack of evidence that genuine caring about entities with which one can never interact really is “quite common even for humans today”, after factoring out indirect benefits/costs and social signalling.
How common, sincerely felt, and motivating does caring about such entities need to be for acausal trade to work?
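To make that question precise, the crude threshold I have in mind looks something like the sketch below. The weight and the numbers are made up by me, not taken from the background post: as I understand it, the trade only moves me if my genuine caring about what happens in the partner’s region is non-zero and large enough.

```python
# Crude threshold for the question above (all numbers made up): doing the
# partner's preferred action here costs me cost_here in my own utility; if the
# partner exists, reciprocates, and I assign weight w_caring to outcomes in its
# region, I gain w_caring * benefit_there. The trade only motivates me when the
# expected gain beats the sure cost.
def trade_motivates_me(p_reciprocation, w_caring, benefit_there, cost_here):
    return p_reciprocation * w_caring * benefit_there > cost_here

print(trade_motivates_me(0.5, 0.0, 100.0, 1.0))  # False: zero genuine caring, no trade
print(trade_motivates_me(0.5, 0.1, 100.0, 1.0))  # True: modest but sincere caring suffices
```

If that framing is wrong, knowing how it’s wrong would itself resolve part of my confusion.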
Can you still use acausal trade to resolve various game-theory scenarios with agents whom you might later contact, while giving zero weight to agents that are completely causally disconnected from you? If so, then why so much emphasis on permanently uncontactable agents? What does it add?