Information, Distinguishability, and Causality

I received a small grant from ACX Grants, allocated by the Long-Term Future Fund, to write about Constructor Theory, the kinds of problems it attempts to solve, and whether it can give any insights into the AI alignment problem. This post is based on the paper Constructor Theory of Information by David Deutsch and Chiara Marletto, as well as the paper Constructor Theory by Deutsch and The Science of Can and Can’t, Marletto’s popular book on the subject.

This post is a sequel to Information is a Counterfactual Property. It will make more sense if you read that one first.

Also, the previous post was my first post on LessWrong and I have several more posts planned, so I would appreciate any feedback that will help me improve what I write in the future!

Part 1: Distinguishability

In the previous post, we considered a scenario where you were using a lamp to send a signal through the fog to a friend. Let us now consider an abstracted version of the same scenario: Alice wishes to send a message to Bob, but she cannot interact directly with him. Instead, she interacts with a medium, which we will call M. Bob can also interact with M, so if Alice can change M, and Bob can observe the change, she might be able to use M to transmit information. To map this on to the example in the previous post, replace ‘Alice’ with ‘you’, ‘Bob’ with ‘your friend’ and ‘M’ with ‘the electromagnetic field’.

In what circumstances can we say that M transmits information? To start with, M must have the counterfactual property that it can be in at least two different states (and that Alice somehow has reliable control over which of these states it is in). For now, we will use the simplification that M only has two possible states. This is one of the counterfactual properties of information that we found in the previous post.

Obviously, to transmit information to Bob, M must interact with Bob in some way. For information to be transmitted, we require that, after the interaction, Bob’s state ends up being different depending on the state of M (i.e. depending on which signal was sent). For example, suppose Alice was signalling with a lamp to Bob, but the fog between them was too thick for Bob to tell the difference between when the lamp was on and when it was off. In this case, Bob’s state would be the same regardless of the state of M, and information would not be successfully transmitted.
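To make this criterion concrete, here is a toy Python sketch of the lamp-and-fog interaction. It is purely illustrative: the function and its return values are invented for this post, not taken from the papers.

```python
def bob_observes(lamp_on: bool, fog_is_thick: bool) -> str:
    """Toy model of how the medium's state ends up affecting Bob."""
    if fog_is_thick:
        # The fog scatters the light completely: Bob ends up in the same
        # state whether the lamp is on or off.
        return "grey haze"
    return "light" if lamp_on else "dark"

# Thin fog: Bob's final state depends on the state of the medium,
# so information can be transmitted.
assert bob_observes(True, fog_is_thick=False) != bob_observes(False, fog_is_thick=False)

# Thick fog: both signals leave Bob in the same state,
# so no information is transmitted.
assert bob_observes(True, fog_is_thick=True) == bob_observes(False, fog_is_thick=True)
```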

However, this criterion is not entirely satisfactory. Consider the case where the fog is thin enough for the lamp light to penetrate, but Bob is wearing a blindfold. In this case, information is not transmitted to Bob, but one could argue that his ‘state’ still changes depending on the signal sent. After all, if the light illuminates his coat, causing excitation of the atoms which comprise it, his state has changed, compared to the state where the light does not illuminate his coat. But this change is not sufficient to say that information has been transmitted.

In order for information to be transmitted, not only does Bob’s state need to change, he must also be able to distinguish between the two possible states that are sent to him. One way that this can be framed is by saying that Bob must be able to re-transmit the information he has received. For example, Bob could then interact with another medium M’ and use it to re-transmit the information he received to a third party, Charlie. It is interesting to note that this is also a counterfactual statement. Bob does not have to re-transmit the signal for it to consist of information, but it must be possible for him to do so. To distinguish between two possible signals and re-transmit them is equivalent to being able to reliably copy the state of M into some corresponding internal state of Bob, and then again into some other medium M’. But this is not enough: to successfully re-transmit the information, the message that Bob encodes in M’ must also be distinguishable by Charlie. Unfortunately, this makes the definition recursive: whether Bob can be said to have distinguished the signal depends on whether Charlie can distinguish it.
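As a rough sketch of the ‘distinguish and re-transmit’ chain (again a toy model with invented names, not anything from the source papers):

```python
def copy_state(source: str) -> str:
    """An idealised 'distinguish and copy' step: read the state of one
    carrier and reproduce it in the next carrier in the chain."""
    return source

# Alice puts the medium M into one of its two possible states.
m = "lamp on"

# Bob copies the state of M into an internal state of his own,
# then re-encodes it into a second medium M'.
bob_internal = copy_state(m)
m_prime = copy_state(bob_internal)

# Whether Bob really distinguished the signal is judged one step further
# down the chain: Charlie must in turn be able to distinguish the states
# of M' -- which is exactly the recursion described above.
charlie_internal = copy_state(m_prime)
assert charlie_internal == m
```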

This property of distinguishability can be framed operationally in a less anthropocentric way. Suppose we send one of two messages, which we will label $m_1$ or $m_2$. The message will be received by a ‘receiver’. In order for $m_1$ and $m_2$ to be distinguishable by the receiver, the state of the receiver must change from its initial receptive ‘blank’ state (call it $b$) to a state $r_1$ when the message is $m_1$, and to a state $r_2$ when the message is $m_2$. If we describe the state where the message is $m_1$ and the receiver is in state $b$ as $(m_1, b)$, we require the following transformation to occur if the message is $m_1$:

$$(m_1, b) \rightarrow (m_1, r_1)$$

Similarly, if the message is $m_2$, we require the transformation:

$$(m_2, b) \rightarrow (m_2, r_2)$$

But if, for all practical purposes, the receiver states $r_1$ and $r_2$ result in the same physical behaviour of the receiver, then the receiver cannot be said to have distinguished the states. Instead, we must specify that $r_1$ and $r_2$ are themselves distinguishable. It is here that our definition of distinguishability again becomes recursive.
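Using the labels above, a minimal toy model of the receiver might look like the Python sketch below (the names m1, r1, b and so on are just the labels I introduced; the ‘behaviour’ function is invented to illustrate the degenerate case). Note that the final check, whether $r_1$ and $r_2$ produce different behaviour, is exactly where the recursion re-enters.

```python
def receive(message: str, receiver_state: str) -> tuple[str, str]:
    """Toy receiver: starting from the blank state b, message m1 drives
    the receiver into state r1 and message m2 drives it into r2."""
    assert receiver_state == "b", "receiver must start in its blank state"
    transitions = {"m1": "r1", "m2": "r2"}
    return message, transitions[message]

# The two required transformations:
assert receive("m1", "b") == ("m1", "r1")   # (m1, b) -> (m1, r1)
assert receive("m2", "b") == ("m2", "r2")   # (m2, b) -> (m2, r2)

# But the receiver only counts as having distinguished the messages if
# r1 and r2 lead to different physical behaviour -- and judging *that*
# needs yet another system capable of telling the behaviours apart.
def behaviour(receiver_state: str) -> str:
    # A degenerate receiver: both internal states produce identical
    # behaviour, so nothing has really been distinguished.
    return "sits there quietly"

assert behaviour("r1") == behaviour("r2")
```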

In practice, this recursive definition does not prevent people from sending signals to each other. For practical purposes, we can define distinguishability in an ‘I know it when I see it’ kind of way. For example, we could show Bob various pairs of signals and ask if he could distinguish them. This doesn’t really solve the problem, but provided we understood Bob well enough, we might be willing to take it as given that he can distinguish the signals. If we are interested in having a theory of information in physics, then this kind of ad hoc definition is a bit unsatisfactory.

Part 2: Causality

Setting aside (though not ignoring) the problem of the recursive definition of distinguishability for a while, we can also approach information from the point of view of causality. In the previous post, we discussed the fact that in order to send a message, it must be possible to change the state of the medium. Furthermore, it is not enough that the state of the medium can change: it must also be possible to change it at will (imagine trying to send Morse code with a lamp which is flickering on and off beyond your control). Going back to our framing of Alice, Bob and a Medium, this condition can be interpreted as saying that Alice must have some kind of causal effect on the Medium.

Similarly, our condition that the signal must be distinguishable can be framed as the requirement that the Medium must have a causal effect on Bob. If Bob is unable to distinguish the possible states of the Medium, he cannot be said to have received the signal, and the state of M will not affect his behaviour. We can write this setup as a causal diagram (where A represents Alice, B represents Bob and M represents the Medium):

A → M → B

Here, an arrow indicates that something has a causal effect on something else (not to be confused with earlier, when we used arrows to indicate transformations). So this diagram can be read ‘A has a causal effect on M, which has a causal effect on B’. The overall effect of this is that A has a causal effect on B, through the medium M. This happens even though A has no direct causal effect on B (i.e. there is no arrow going directly from A to B). Notice that, if our first condition is not met, and Alice cannot perform the task of changing the medium, then the causal link between A and M in this diagram is severed. This also severs the indirect link between A and B. Similarly, if our second condition is not met, and Bob cannot distinguish the different states of M, this severs the link between M and B, again severing the overall link between A and B.
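Here is a small illustrative sketch of that chain of causes (Python; the functions and their return values are invented for this post, not a definitive model). Intervening on Alice’s choice changes Bob’s state only while both links are intact:

```python
def alice_sets_medium(bit: int, has_control: bool = True) -> str:
    """The A -> M link: Alice tries to put the medium into a chosen state."""
    if not has_control:
        return "flickering"            # A -> M severed: the lamp does its own thing
    return "on" if bit == 1 else "off"

def bob_reads_medium(medium_state: str, can_distinguish: bool = True) -> str:
    """The M -> B link: Bob tries to distinguish the medium's states."""
    if not can_distinguish:
        return "no idea"               # M -> B severed: Bob is blindfolded
    return "saw lamp " + medium_state

def chain(bit: int, has_control: bool = True, can_distinguish: bool = True) -> str:
    return bob_reads_medium(alice_sets_medium(bit, has_control), can_distinguish)

# Both links intact: intervening on Alice's choice changes Bob's state,
# so A has an indirect causal effect on B.
assert chain(0) != chain(1)

# Sever either link and the indirect A -> B effect disappears.
assert chain(0, has_control=False) == chain(1, has_control=False)
assert chain(0, can_distinguish=False) == chain(1, can_distinguish=False)
```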

This link between information and causality is not that surprising or deep. When we naively think of sending information to someone, we normally imagine it having some causal effect on them (think of sending military orders to influence the position of an army, or instructions to a friend to influence where they meet you). The framing of information in terms of causality further highlights the counterfactual nature of information, as causality and counterfactuals are inextricably linked. After all, the causal claim ‘X caused Y’ and the counterfactual statement ‘if X had not happened, Y would not have happened’ are very closely related. In The Book of Why (excerpt here), Judea Pearl places counterfactual statements at the top of his ‘Ladder of Causation’, claiming that counterfactual statements are the most powerful kind of causal statements one can make. Similarly, while he disagrees with Pearl’s characterisation of counterfactuals, Tim Maudlin, in his review of The Book of Why, agrees that counterfactuals are ‘closely entwined’ with causality.

Conclusion

In this post and the previous one, we have explored some interesting properties of information. First, we found that information is, in an important sense, a counterfactual property, which depends as much on what could be made to happen in a system as on what actually does happen. In particular, it must be possible to change the state of the medium carrying the information. Second, in this post, I discussed the fact that information is closely tied up with the concept of distinguishability, but that attempting to define distinguishability leads to a recursive definition. This is a problem which we will later attack using constructor theory. Finally, we briefly discussed how these two counterfactual properties of information can be expressed as links in a causal chain. We saw that information transfer through a medium can create a causal link between two parties even if they do not share a direct causal link with one another.

Up until now, we have just dealt with classical information and readers familiar with quantum mechanics will wonder how these principles apply to quantum information. I would like to address this in a future post, but for the next couple of posts, I will back up and write about the bigger picture. The next post will be about the use of counterfactuals in physics.
