I think you’re making a major false generalization from Newcomb’s problem, which is not acausal. Information flows from Omega to your future directly, and you know by definition of the scenario that Omega can perfectly model you in particular.
In acausal reasoning there are no such information flows.
From later paragraphs it appears that you are not actually talking about an acausal scenario at all, and should not use the term “acausal” for this. A future superintelligence in the same universe is linked causally to you.
“Newcomb’s problem, which is not acausal.”
What do you mean by the word acausal?
Gems from the Wiki: Acausal Trade: “In truly acausal trade, the agents cannot count on reputation, retaliation, or outside enforcement to ensure cooperation. The agents cooperate because each knows that the other can somehow predict its behavior very well. (Compare Omega in Newcomb’s problem.)”
It seems like you’re using the term in a way which describes an inherently useless process. This is not the way it tends to be used on this website.
Whether or not you think the word ‘acausal’ is appropriate, it can’t be denied that this style of reasoning works in scenarios like Newcomb’s problem.
“Information flows from Omega to your future directly, and you know by definition of the scenario that Omega can perfectly model you in particular.” Causally, yes, this is what happens. But in order to reason your way through the scenario in a way which results in you leaving with a significant profit, you need to take into account the possibility that you are being simulated. More abstractly, I maintain that it’s accurate to think of the information as flowing from the mind, which is a platonic object, into both physical instantiations of itself (inside Omega and inside the human). This is similar to how mathematical theorems control physics at many different times and places, through the laws of physics, which are formulated within a mathematical framework to which the theorems apply. This is not exactly causal influence, but I think you’d agree it’s important.
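To make the ‘significant profit’ point concrete, here is a minimal sketch of the expected-value arithmetic, assuming the standard payoffs ($1,000 in the transparent box, $1,000,000 in the opaque box) and a hypothetical 99% predictor accuracy; the function name is just for illustration:

```python
# Minimal sketch of the expected-value arithmetic in Newcomb's problem,
# assuming the standard payoffs and a hypothetical predictor accuracy.

SMALL = 1_000      # transparent box, always available
BIG = 1_000_000    # opaque box, filled only if one-boxing was predicted


def expected_payoff(one_box: bool, accuracy: float) -> float:
    """Expected payoff given your choice and the predictor's accuracy."""
    if one_box:
        # The opaque box is full with probability `accuracy`.
        return accuracy * BIG
    # Two-boxing: you always get the small box, and the opaque box
    # is full only if the predictor erred (probability 1 - accuracy).
    return SMALL + (1 - accuracy) * BIG


for accuracy in (0.99, 1.0):
    print(accuracy, expected_payoff(True, accuracy), expected_payoff(False, accuracy))
# At 99% accuracy: roughly $990,000 for one-boxing vs roughly $11,000 for two-boxing.
```

The decision procedure that takes the prediction into account (one-boxing) comes out far ahead, which is the sense in which the reasoning ‘works’ here.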
“A future superintelligence in the same universe is linked causally to you.” The term ‘acausal’ doesn’t literally mean ‘absent any causality’; it means something more like ‘through means which are not only causal, or which are best thought of in terms of logical connections between things rather than (or as well as) causal ones’, or at least that’s how I’m using the term.
It’s also how many people on LessWrong use it in the context of the prisoners’ dilemma, Newcomb’s problem, Parfit’s Hitchhiker, or almost any other scenario in which it’s invoked. In all of these scenarios there is an element of causality.
Given that there is an element of causality, why do you see the basilisk as less likely to ‘work’?