What you’re saying reminds me a lot of another LessWrong user I discussed this topic with, who claimed that acausal communication couldn’t possibly work. I have to disagree: the fact that information (i.e., data) isn’t transferred through ordinary causal channels between a future ASI and a present-day human doesn’t imply that acausal trade or blackmail can never work in principle, precisely because they don’t operate by causal means.
“No, decision theories just don’t give us free a-priori perfect knowledge of the precise will of a vengeful & intolerant god we just made up for a story.” Your exaggeration of my claim is reaching the point where it no longer represents my position well enough to stand in for it in this discussion. I didn’t claim perfect knowledge of an ASI’s mind (and it wouldn’t exactly be a god).
“They’re still fine for real world situations like keeping your promises to other people.”
Your use of the phrase “real world situations” suggests that you’ve presupposed that this kind of thing can’t happen… but I don’t see why it can’t.
I should also mention that the basilisk doesn’t need to be vengeful; assuming that would be to misunderstand the threat it represents. In the version I’m thinking about, the basilisk views itself as logically compelled to follow through on its threat, since a threat it would predictably never carry out couldn’t have motivated anyone in the first place.