The post suggests that an acausal analogue of communication is possible by simulating one’s conversation partner (not their entire universe), or at least asks whether it is. If causality is built into your definition of communication, then it would be a contradiction in terms. However, the same could be said of things like acausal trade; the idea is that acausal communication : causal communication :: acausal trade : causal trade. My definition of communication would be some more general kind of information transfer, and I am curious about what direction information can be considered to move in scenarios like the one described here (if it can move at all).
No, I understand this part. My understanding of acausal trade is that it might work precisely because it does not require communication: I can imagine a sort of bargain I might have wanted to make with beings I cannot interact with causally, imagine the sorts of commitments they would have required of me, argue that they could have had enough foresight to imagine the same sorts of commitments, and thus act according to commitments that I know I would have made with such beings and that I know they would have made with me. The main point is this: by merely imagining that I want to make a trade, I have narrowed the class of entities I can trade with from “all plausible entities” to “entities who would accept the trade I want to make”. (There are some other nitpicks; for instance, I think acausal trade becomes nonsense if the causal isolation is two-way, but that doesn’t matter for this argument.)
If by communication you really mean information transfer, I think it’s fairly obvious that this isn’t possible. Say there is some proposition I’m uncertain about even upon strong reflection. How can spinning up another mind help? That I’m uncertain means I can imagine worlds containing minds resembling mine in which the proposition is either true or false. Can spinning up a mind from one of those worlds help me determine which of those types of worlds I’m in? Of course it cannot: either I sample minds according to my present understanding of the distribution of such minds and gain nothing, or I sample minds according to another distribution and am predictably misled. If I spin up a mind to talk to, there is no constraint whatever on the sort of mind I will spin up, and so it’s impossible to predictably get information out of this mechanism.
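To make the dilemma concrete, here is a toy sketch (my own framing, not anything from the post): suppose I assign probability p to the proposition, and I “spin up a mind” by sampling a world from that same prior. The simulated mind’s verdict is statistically independent of which world I’m actually in, so trusting it can’t beat my prior, and in general does worse:

```python
import random

def simulate(prior, n_trials=100_000):
    """Monte Carlo: how often does trusting a mind sampled from my own
    prior give the right answer about which world I'm actually in?"""
    random.seed(0)  # deterministic for illustration
    correct = 0
    for _ in range(n_trials):
        actual = random.random() < prior      # the world I am actually in
        simulated = random.random() < prior   # the world I sampled a mind from
        if simulated == actual:               # the simulated mind's verdict matches reality
            correct += 1
    return correct / n_trials

# Trusting the simulated mind is right with probability p^2 + (1-p)^2,
# which never exceeds max(p, 1-p) -- i.e. it never beats simply guessing
# the option my prior already favours.
```

With p = 0.5 the simulated mind is right half the time, exactly my prior; with p = 0.7 it is right only about 58% of the time, worse than just guessing the likelier option.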
What would it take to predictably spin up minds which can resolve a present state of uncertainty? Precisely a further constraint on which types of minds experience that uncertainty. That is, in principle, acausal communication can only predictably tell you things you already know, and must mislead you as often as it leads you right. A discerning being could design tests to filter the applicable ideas from the non-applicable, but this would be, if anything, slightly slower than what a similarly discerning being could do without acausal communication, because we cannot get information from an acausal mechanism (indeed, quantum-mechanically I think this is the definition of acausal!). This just isn’t what communication means to me, and I don’t think it’s what it means to acausal trade theorists either.
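The claim that acausal communication “can only predictably tell you things you already know” can be sharpened (my gloss, in standard Bayesian terms) as conservation of expected evidence: letting $H$ be the proposition and $E$ the simulated mind’s report, sampled according to my own beliefs,

```latex
\mathbb{E}\left[P(H \mid E)\right]
  = \sum_{e} P(E = e)\, P(H \mid E = e)
  = \sum_{e} P(H,\, E = e)
  = P(H).
```

My posterior can be expected to move nowhere; any outcome in which the report predictably raised my credence would have to be balanced by outcomes in which it lowered it.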
There are sorts of things I could learn by simulating other minds which might know them. I don’t know the millionth digit of pi, so maybe I can spin up a mind which I know will know it. Doing this is obviously as hard as computing the digit directly, so I don’t see why I would do it. Maybe there are lots of things I don’t know, and so I want to spin up a mind which will know all of them at once. Doing this is called creating artificial intelligence, and I don’t see how it’s meaningful to think of it as acausal communication with an entity from a different possible universe rather than as causal communication with an entity I’ve created in this universe. Can you describe a situation in which I might do acausal communication that is not actually causal communication, or where this framing could in principle be useful? I still feel that I might be missing something.
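(As an aside on the “computing it directly” baseline: individual digits of pi are cheap to get at without any simulated mind. The Bailey–Borwein–Plouffe formula extracts the nth hexadecimal digit of pi without computing the earlier ones; a minimal sketch, accurate for small n where double-precision rounding is not an issue:)

```python
def pi_hex_digit(n):
    """Return the (n+1)-th hexadecimal digit of pi after the point
    (n = 0 gives 2, since pi = 3.243F6A8885... in hex), via the
    Bailey-Borwein-Plouffe formula."""
    def partial(j):
        # sum over k of 16^(n-k) / (8k + j), taken mod 1
        total = 0.0
        for k in range(n + 1):
            total = (total + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
        # small tail with negative powers of 16
        k = n + 1
        while True:
            term = 16.0 ** (n - k) / (8 * k + j)
            if term < 1e-17:
                break
            total += term
            k += 1
        return total

    frac = (4 * partial(1) - 2 * partial(4) - partial(5) - partial(6)) % 1.0
    return int(frac * 16)
```

The modular exponentiation `pow(16, n - k, 8 * k + j)` is what makes jumping straight to digit n feasible without the preceding digits.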
You say: “The main point is this: by merely imagining that I want to make a trade, I have narrowed the class of entities I can trade with from ‘all plausible entities’ to ‘entities who would accept the trade I want to make’.” I honestly don’t see how this is relevant to my question.
My question was not ‘is it possible to answer questions by spinning up minds who know the answers to those questions?’, which seems to be how you’ve interpreted it. Nonetheless, that question is certainly an interesting one, and I’m not completely sure I agree with your answer to it, because of computational irreducibility. Just because you’re uncertain of something does not mean that you cannot increase your certainty about it without ‘causal communication with things around you’ (i.e. observation), because sometimes simply thinking about it in more depth can help you resolve logical uncertainty. Perhaps you could do this by ‘spinning up’ a mind. (After I read further I realized you already pointed this out. Sorry about that; I was writing my reply while reading yours.) Whether this counts as acausal communication is a subtle question, though, because unlike in my thought experiment, the mind you ‘spin up’ is informed by you, rather than by background properties of the world (which for the sake of argument we can take to be known a priori; alternatively, see my reply to Mitchell Porter for a contrived way you could end up in possession of this information).
Maybe you could spin up a mind which you have theoretical reasons to think would arise in a universe you’re interested in understanding, in which case you might want to simulate part of that universe as well. But this seems to suggest that there is no information transfer from the simulated universe to you. However, what if the simulation is simpler than the universe of which it is a simulation, in a way which can be shown not to have any effect on its outputs? Now the situation is closer to what I described in my post. I think it’s reasonable to talk about communication occurring here because you gain knowledge you didn’t have before about the other universe, by interacting with something which isn’t that universe itself. Data about that universe was already within yours, in that it would have been possible for Laplace’s demon to observe you in your universe and predict what you were going to do, and what you would observe when you simulated the other universe, but data and information are not exactly the same thing, at least in the way I’m using them here. You gain information about the other universe in this case, because you are not Laplace’s demon.
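Here is a toy version of the “provably simpler simulation” idea (my own illustrative construction, using a Rule 30 cellular automaton as the stand-in universe): to learn what one cell does after t steps, you need only simulate its past light cone, not the whole row, and the two computations provably agree:

```python
def rule30(l, c, r):
    # Wolfram's Rule 30 update for a single cell
    return l ^ (c | r)

def full_evolution(row, steps):
    # simulate the whole "universe" (zero boundary conditions)
    for _ in range(steps):
        row = [rule30(row[i - 1] if i > 0 else 0,
                      row[i],
                      row[i + 1] if i + 1 < len(row) else 0)
               for i in range(len(row))]
    return row

def light_cone_value(row, steps, target):
    # simulate only the 2*steps + 1 cells that can influence `target`;
    # valid whenever the light cone stays inside the grid
    window = row[target - steps : target + steps + 1]
    for _ in range(steps):
        window = [rule30(window[i - 1], window[i], window[i + 1])
                  for i in range(1, len(window) - 1)]
    return window[0]

# a 64-cell universe with a single live cell
universe = [0] * 64
universe[32] = 1
```

The light-cone computation is strictly smaller than the full one, yet for any target cell whose cone fits in the grid it returns exactly the value the full simulation would, which is the sense of “simpler in a way which can be shown not to affect the outputs” I have in mind.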
I don’t know whether I’ve given you what you were looking for here, but hopefully it has clarified the disagreement. I would repeat that I think you’re certainly correct if your definition of communication includes causality. Another important point that comes to mind here is that it can be difficult to define things like causality and information transfer other than in terms of the start and end points of processes and the correlations between them, which are present in this scenario.
Thanks for your engagement and in-depth reply.