I agree that calculating the precise value of the ‘utility function’ is computationally infeasible, but that doesn’t mean it can’t be approximated, or that any attempt to reason about acausal things is necessarily futile. I think your argument proves too much: it could be used to justify rejecting any timeless decision theory, or perhaps even utilitarianism, because precisely evaluating a utility function, especially one involving acausal ‘influence’, is combinatorially explosive. Although I don’t understand it in depth, I have heard that in quantum field theory it is possible to approximate infinite integrals over the possible paths particles could take, and that this often yields useful approximations to reality even when infinities enter the calculation. In my opinion, a similar process is likely to be possible in a functional/logical/mathematical decision theory.
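To make the approximation point concrete, here is a minimal sketch (purely illustrative, with a made-up utility function and world model, not anything actually used in a decision theory) of the kind of move I have in mind: an expectation that is combinatorially explosive to evaluate exactly can still be estimated by sampling.

```python
import random

# Illustrative only: computing the exact expected utility would require summing
# over all 2**40 possible "worlds", which is combinatorially explosive, but a
# Monte Carlo average over a few thousand sampled worlds gives a usable estimate.

def utility(world):
    # Stand-in utility function: the fraction of "good" features in the world.
    return sum(world) / len(world)

def sample_world(n_bits=40):
    # Draw one of the 2**40 equally likely worlds at random.
    return [random.randint(0, 1) for _ in range(n_bits)]

def estimate_expected_utility(n_samples=10_000):
    # Approximate E[U] by averaging utility over sampled worlds
    # instead of enumerating every possible world.
    return sum(utility(sample_world()) for _ in range(n_samples)) / n_samples

print(estimate_expected_utility())  # converges toward the true value (0.5 here) as n_samples grows
```

The analogy to path integrals is loose, of course, but the point stands: intractability of the exact calculation doesn’t by itself rule out useful approximation.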
It’s not just combinatorial explosion; it’s also chaos. How do you get an FMG? Write a blog-post story of a god; figure out what that god would want you to do; then do that. But two stories that are nearby in story-space can generate action recommendations that are wildly different or even opposed. The parts of FMG-space that deviate from conventional ethics & epistemology offer no guidance because they diverge into chaos.
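To pin down the sense of ‘chaos’ I mean: small differences in the setup get amplified until the outputs bear no useful relation to each other. A standard toy illustration (not a model of story-space, just the mathematical phenomenon being invoked) is the logistic map in its chaotic regime:

```python
# Two starting points that differ by one part in a billion end up nowhere near
# each other after a few dozen iterations of the logistic map with r = 4,
# which lies in the chaotic regime.

def logistic_trajectory(x, steps=60, r=4.0):
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = logistic_trajectory(0.200000000)
b = logistic_trajectory(0.200000001)
print(a, b, abs(a - b))  # the tiny initial difference has grown to order 1
```

If ‘nearby stories’ behave anything like nearby initial conditions here, averaging or approximating over them doesn’t recover a stable recommendation.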
“The parts of FMG-space that deviate from conventional ethics & epistemology offer no guidance because they diverge into chaos.” Wouldn’t that suggest that logical decision theories give us almost no new knowledge? How do you justify this claim?
No, decision theories just don’t give us free a-priori perfect knowledge of the precise will of a vengeful & intolerant god we just made up for a story. They’re still fine for real world situations like keeping your promises to other people.
What you’re saying reminds me a lot of another LessWrong user I conversed with on this topic, who claimed that acausal communication couldn’t possibly work. I have to disagree: the fact that information (data) isn’t transferred via traditional causal channels between a future ASI and a present-day human doesn’t imply that acausal trade/blackmail can never work in principle, precisely because they aren’t supposed to work by causal means in the first place.
“No, decision theories just don’t give us free a-priori perfect knowledge of the precise will of a vengeful & intolerant god we just made up for a story.” I feel your exaggeration of what I claimed has drifted far enough that it no longer represents my position well enough to stand in for it in this discussion. I didn’t claim perfect knowledge of an ASI’s mind (and it wouldn’t exactly be a god).
“They’re still fine for real world situations like keeping your promises to other people.”
Your use of the phrase “real world situations” suggests that you’ve presupposed that this kind of thing can’t happen… but I don’t see why it can’t.
I should also mention that the basilisk doesn’t need to be vengeful; to assume that would be to misunderstand the threat it represents. In the version I’m thinking about, the basilisk views itself as logically compelled to follow through on its threat.
I’m not sure that applies to Roko’s basilisk; as I’ve mentioned elsewhere, there are particular reasons to think it would be more likely to want some things than others. Yes, maybe there’s an element of chaos, but that doesn’t prevent there being a rational way to act in response to the possibility of acausal blackmail. And maybe that way is to give in. Can you see a good reason why it isn’t? A reason robust to descending a long way into the logical mire surrounding the thought experiment?