“because by definition you have no actual information about what entities might be engaging in acausal trade with things somewhat vaguely like you.” Please can you elaborate? Which definition are you using?
“If you do a fairly simple expected-value calculation of the gains-of-trade here even with modest numbers like 10^100 for the size of the hypothesis spaces on both sides (more realistic values are more like 10^10^20), you get results that are so close to zero that even spending one attojoule of thought on it has already lost you more than you can possibly gain in expected value.” How are you assigning the expected value? I don’t understand basically any of this paragraph; if you’re simply counting all ‘possible trades’/‘possible worlds’, weighting them equally by probability, and assuming that the expected value is evenly distributed across them, then I have to say I think this is overly simplistic.
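For concreteness, the calculation I take that paragraph to be describing is something like the following, under exactly the uniform-weighting assumption I’m questioning (the gain figure G is illustrative, not a number from the original comment):

```python
# A sketch of the uniform-weighting expected-value argument, as I understand it.
# If the other party's hypothesis space contains N candidate trading partners,
# each weighted 1/N, the expected gain from any single candidate trade is G/N.
N = 10.0 ** 100   # hypothesis-space size quoted above (10^10^20 would not even fit in a float)
G = 10.0 ** 30    # illustrative gain from one successful trade -- not a figure from the comment
expected_gain = G / N
print(expected_gain)  # 1e-70: negligible next to any fixed cost of deliberating at all
```

If the weighting is not uniform, though, the result depends entirely on how the probability mass is concentrated, which is the point at issue.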
“imagine that there’s a paperclip maximizer that perfectly simulates you” That’s not exactly what I’m worried about… to begin with, the simulation doesn’t need to be perfect. And the basilisk isn’t necessarily a paperclip maximizer.
“…are worthless, because both you and it are utterly insignificant specks in each other’s hypothesis spaces, and even entertaining the notion is privileging the hypothesis to such a ridiculous degree that it makes practically every other case of privileging the hypothesis in history look like a sure and safe foundation for reasoning by comparison.” Why? I would really rather not believe this particular hypothesis!
If you think all possible acausal trades are equally likely, I understand why you might think I’m privileging a hypothesis, but I don’t understand why you would assume they are.
Acausal means that no information can pass in either direction.
“you and it are utterly insignificant specks in each other’s hypothesis spaces, and even entertaining the notion is privileging the hypothesis to such a ridiculous degree…” That part isn’t a hypothesis, it’s a fact based on the premise. Acausality means that the simulation-god you’re thinking of can’t know anything about you. They have only their own prior over all possible thinking beings that can consider acausal trade. Why do you have some expectation that you occupy more than the most utterly insignificant speck within the space of all possible such beings? You do not even occupy 10^-100 of that space, and more likely less than 10^-10^20 of it.
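For illustration only, here is one way numbers of that magnitude can arise; the description length used below is an assumption made for the example, not something stated above:

```python
import math

# Illustrative only: if a candidate mind is pinned down by a description of
# about 3.3e20 bits, a near-uniform prior over such descriptions gives any
# single mind a weight of roughly 2**(-3.3e20) -- far too small for a float,
# so compare logarithms instead.
bits = 3.3e20                       # assumed description length, purely illustrative
log10_weight = -bits * math.log10(2)
print(log10_weight)                 # ~ -9.9e19, i.e. a weight on the order of 10^(-10^20)
```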
“Acausal means that no information can pass in either direction.” If you define information passing in a purely causal way, from one instance of a mind at one time to another at a different time, then yes, you’re trivially correct. However, whichever definition you use, it remains the case that minds operating under something like TDT outperform others, for example in Newcomb’s problem. Would you two-box? Certainly no information causally propagated from your mind, as instantiated in your brain at the point of making the decision, back to Omega in the past. However, in my opinion, it makes sense to think of yourself as a mind which runs both on Omega’s simulacrum and on the physical brain, or at least as one that isn’t sure which one it is. If you realize this, then it makes sense to make your decision as though you might be the simulation. So really it’s not that information travels backwards in time, but rather that it moves, in a more abstract way, from your mind to both instances of that mind in the physical world. Whether you want to call this information transfer is a matter of semantics, but if you decide to use a definition which precludes information transfer, note that it doesn’t preclude any of the phenomena which LessWrong users call ‘acausal’, like TDT agents ‘winning’ Newcomb’s problem.
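To spell out the sense in which one-boxers come out ahead, here is the standard expected-payoff comparison, using the usual $1,000 / $1,000,000 payoffs and leaving the predictor’s accuracy p as a free parameter:

```python
# Standard Newcomb's problem payoffs: $1,000 in the transparent box;
# $1,000,000 in the opaque box iff the predictor predicted one-boxing.
def expected_payoff(one_box: bool, p: float) -> float:
    """Expected dollars, where p is the probability the prediction is correct."""
    if one_box:
        return p * 1_000_000               # opaque box is filled iff the prediction matched
    return 1_000 + (1 - p) * 1_000_000     # take both boxes; opaque box filled only on a wrong prediction

for p in (0.99, 0.9, 0.55):
    print(p, expected_payoff(True, p), expected_payoff(False, p))
# One-boxing has the higher expectation whenever p > ~0.5005, so an agent whose
# decision procedure reliably one-boxes does better against an accurate predictor.
```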
“That part isn’t a hypothesis, it’s a fact based on the premise. Acausality means that the simulation-god you’re thinking of can’t know anything about you.” I wouldn’t call it a premise at all. The premise is that there is (probably) an ASI at some point in the future, and that it wants to maximize the number of possible worlds in which it exists, all else being equal. It seems to be the case that acausal extortion would be one way to help it achieve this.
“They have only their own prior over all possible thinking beings that can consider acausal trade. Why do you have some expectation that you occupy more than the most utterly insignificant speck within the space of all possible such beings?” Firstly, I occupy the same physical universe, and in fact the same planet! Secondly, it could well be that, for the purpose of this ‘trade’, most humans thinking about the basilisk count as equivalent, or maybe only those who’ve thought about it in enough detail. I don’t know whether I have done that, and of course I hope I have not, but I am not sure at the moment. It seems quite likely that an ASI would at least think about humans thinking about it. The basilisk seems to be a possible next step from there, and of course a superintelligent AI would have enough intelligence to easily determine whether the situation could actually work out in its favour.
I think you’re making a major false generalization from Newcomb’s problem, which is not acausal. Information flows from Omega to your future directly, and you know by definition of the scenario that Omega can perfectly model you in particular.
In acausal reasoning there are no such information flows.
From later paragraphs it appears that you are not actually talking about an acausal scenario at all, and should not use the term “acausal” for this. A future superintelligence in the same universe is linked causally to you.
“Newcomb’s problem, which is not acausal.”
What do you mean by the word acausal?
Gems from the Wiki: Acausal Trade: “In truly acausal trade, the agents cannot count on reputation, retaliation, or outside enforcement to ensure cooperation. The agents cooperate because each knows that the other can somehow predict its behavior very well. (Compare Omega in Newcomb’s problem.)”
It seems like you’re using the term in a way which describes an inherently useless process. This is not the way it tends to be used on this website.
Whether you think the word ‘acausal’ is appropriate or not, it can’t be denied that this kind of reasoning works in scenarios like Newcomb’s problem.
“Information flows from Omega to your future directly, and you know by definition of the scenario that Omega can perfectly model you in particular.” Causally, yes, this is what happens. But in order to reason your way through the scenario in a way which results in you leaving with a significant profit, you need to take the possibility that you are being simulated into account. In a more abstract way, I maintain that it’s accurate to think of the information as flowing from the mind, which is a platonic object, into both physical instantiations of itself (inside Omega and inside the human). This is similar to how mathematical theorems control physics at many different times and places, through the laws of physics, which are formulated within a mathematical framework to which the theorems apply. This is not exactly causal influence, but I think you’d agree it’s important.
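Here is the same point in the ‘you might be the simulation’ framing: if Omega’s model and the physical brain run the same decision procedure, their outputs are perfectly correlated, so only the two matched outcomes are actually available (the perfect-correlation assumption is for simplicity; the payoffs are the standard ones):

```python
# Assume (for simplicity) a perfectly accurate model: the simulated instance and
# the physical instance of the decision procedure always produce the same choice,
# so only the two "matched" outcomes can actually occur.
outcomes = {
    ("one-box predicted", "one-box chosen"): 1_000_000,
    ("two-box predicted", "two-box chosen"): 1_000,   # the opaque box is empty
}
best = max(outcomes, key=outcomes.get)
print(best, outcomes[best])   # ('one-box predicted', 'one-box chosen') 1000000
# Deciding "as though you might be the simulation" just means picking the policy
# whose matched outcome you prefer -- no information needs to travel backwards in time.
```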
“A future superintelligence in the same universe is linked causally to you.” The term ‘acausal’ doesn’t literally mean ‘absent any causality’; it means something more like ‘through means which are not only causal’, or ‘best thought of in terms of logical connections between things rather than, or as well as, causal ones’; or at least, that’s how I’m using the term.
It’s also how many people on LessWrong use it in the context of the prisoners’ dilemma, Newcomb’s problem, Parfit’s Hitchhiker, and almost every other scenario in which it’s invoked. In all of these scenarios there is an element of causality.
Given that there is an element of causality, how do you see the basilisk as less likely to ‘work’?