I’m glad you’re bringing sender-receiver lit into this discussion! It’s been useful for me to ground parts of my thinking. What follows is almost-a-post’s worth of, “Yes, and also...”
Stable “Deception” Equilibrium
The firefly example showed how an existing signalling equilibrium can be hijacked by a predator. What once was a reliable signal becomes unreliable. As you let things settle into equilibrium, seeing a light should lose all informational content (or at least, it should give no new information about whether the signal is coming from a mate or a predator).
Part of what ensures this result is the totally opposed payoffs of P.rey and P.redator. In any signalling game where the payoffs are zero-sum, there isn’t going to be an equilibrium where the signal conveys information.
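To make the firefly case concrete, here’s a minimal sketch of Skyrms-style informational content: the bits a flash carries about “this is a mate” is the log of how much seeing the flash moves you off your prior. The priors and flash probabilities below are hypothetical numbers chosen for illustration, not anything from the firefly literature.

```python
from math import log2

# Hypothetical prior over who might be signalling.
p_mate, p_pred = 0.5, 0.5

def info_content(p_flash_given_mate, p_flash_given_pred):
    """Skyrms-style informational content (in bits) that a flash
    carries in favor of 'mate': log2(posterior / prior)."""
    p_flash = p_mate * p_flash_given_mate + p_pred * p_flash_given_pred
    posterior = p_mate * p_flash_given_mate / p_flash
    return log2(posterior / p_mate)

# Before the mimic arrives: only mates flash, so a flash is informative.
print(info_content(0.9, 0.0))  # 1.0 bit in favor of 'mate'

# Perfect mimicry at equilibrium: the flash moves the posterior nowhere.
print(info_content(0.9, 0.9))  # 0.0 bits
```

Once the predator flashes at the same rate as the mate, the posterior equals the prior and the signal carries zero bits about sender identity, which is exactly the “loses all informational content” endpoint above.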
More complex, varied payoffs can produce more interesting results:
Again, at the level of the sender-receiver game this counts as deception, but it still feels a good bit different from what I intuitively track as deception. It might be best described as an example of an “equilibrium of ambiguous communication as a result of semi-adversarial payoffs.”
Intention
I would not speculate on the mental life of bees; to talk of the mental life of bacteria seems absurd; and yet signalling plays a vital biological role in both cases. -Skyrms
I want to emphasize that the sender-receiver model and Skyrms’ use of “informational content” are not meant to provide an explanation of intention. Information is meant to be more basic than intent, and present in cases (like bacteria) where there seems to be no intent. Skyrms seems to be responding to scholars who want to say “intent is what defines communication!”, and like Skyrms, I’m happy to say that communication and signalling seem to cover a broad class of phenomena, of which intentional communication would be a super-specialized subset.
For my two cents, I think that intent in human communication involves both goal-directedness and having a model of the signalling equilibrium that can be plugged into an abstract reasoning system.
In sender-receiver games, the learning of a signalling strategy often happens either through replicator dynamics or very simple Roth-Erev reinforcement learning. These are simple mechanisms that act quite directly and don’t afford any reflection on the mechanism itself. Humans can not only reliably send a signal in the presence of a certain stimulus, but can also do “I’m bored, I know that if I shout ‘FIRE!’ Sarah is gonna jump out of her skin, and then I’ll laugh at her being surprised.” Another fun example that seems to rely on being able to reason about the signalling equilibrium itself: “what would I have to text you to covertly convey I’ve been kidnapped?”
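To show how unreflective the mechanism is, here’s a minimal sketch of Roth-Erev learning in a 2-state Lewis signalling game, assuming the standard urn setup: each option’s weight is its accumulated reward, successes reinforce the choices just made, and neither agent has any representation of the equilibrium it’s falling into. The parameters (initial weights, round count) are my own illustrative choices.

```python
import random

random.seed(0)

N_STATES = N_SIGNALS = N_ACTS = 2
ROUNDS = 10_000

# Roth-Erev "urns": weight of each option is its accumulated reward.
sender = [[1.0] * N_SIGNALS for _ in range(N_STATES)]   # sender[state][signal]
receiver = [[1.0] * N_ACTS for _ in range(N_SIGNALS)]   # receiver[signal][act]

def draw(weights):
    """Pick an index with probability proportional to its weight."""
    return random.choices(range(len(weights)), weights=weights)[0]

for _ in range(ROUNDS):
    state = random.randrange(N_STATES)
    signal = draw(sender[state])
    act = draw(receiver[signal])
    if act == state:  # success: reinforce exactly the choices just made
        sender[state][signal] += 1.0
        receiver[signal][act] += 1.0

# Read off the learned (most-reinforced) strategies.
sig_for = [max(range(N_SIGNALS), key=lambda s: sender[st][s]) for st in range(N_STATES)]
act_for = [max(range(N_ACTS), key=lambda a: receiver[sg][a]) for sg in range(N_SIGNALS)]
print("state -> signal:", sig_for)
print("signal -> act:  ", act_for)
```

With enough rounds the pair almost surely locks into one of the two signalling systems (each state gets its own signal, and the receiver maps it back to the matching act), yet nothing in the update rule ever looks at the equilibrium itself, which is the contrast with the “shout FIRE at Sarah” kind of reasoning.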
I think human communication is always a mix of intentional and non-intentional communication, as I explore in another post. When it comes to deception, while a lot of people seem to want to use intention to draw the boundary between “should punish” and “shouldn’t punish”, I see it more as a question of “what sort of optimization system is working against me?” I’m tempted to say “intentional deception is more dangerous because that means the full force of their intellect is being used to deceive you, as opposed to just their unconscious”, but that wouldn’t be quite right. I’m still developing thoughts on this.
Far from equilibrium
I expect it’s most fruitful to think of human communication as an open system that’s far from equilibrium, most of the time. Thinking of equilibrium helps me think of directions things might move, but I don’t expect everyone’s behavior to be “priced into” most environments.