This seems too strong; we can still reasonably talk about deception in terms of a background of signals in the same code. The actual situation is more like: there are lots of agents. Most of them use this coding in a correspondence-y way (or, if “correspondence” assumes too much, just: most of the agents use the coding in a particular way such that a listener who makes a certain stereotyped use of those signals, e.g. what is called “taking them to represent reality”, will be systematically helped). Some agents instead use the channel to manipulate actions, which jumps out against this background as causing the stereotyped use to not achieve its usual performance. (That is different from the highly noticeable direct consequence of the signal, e.g. not wearing a mask, being good or bad, or the overall effect being net good or bad.)

Since the deceptive agents are not easily distinguishable from the non-deceptive agents, the deception somewhat works, rather than you just ignoring them or biting a bullet like “okay sure, they’ll deceive me sometimes, but the net value of believing them is still higher than not, no problem!”. That’s why there’s tension: you’re so close to having a propositional protocol. It works with most agents, and if you could just do the last step of filtering out the deceivers, it would carry only misinformation, no disinformation. But you can’t trivially do that filtering, so the deceivers are parasitic on the non-deceivers’ network. And you’re forced to either be misled constantly; or else downgrade your confidence in the whole network, throwing away lots of the value of the messages from non-deceivers; or do the more expensive work of filtering adversaries.
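The trilemma at the end can be made concrete with a toy payoff model. Everything here is an illustrative assumption, not from the text: the fraction of deceivers, the payoffs, and the per-message vetting cost are made-up numbers chosen so that believing everyone is net-negative, ignoring everyone forfeits the network’s value, and costly filtering comes out ahead. Different numbers would shift which horn of the trilemma is least bad.

```python
import random

# Toy sketch (all numbers are assumptions): senders emit messages; honest
# senders help a listener who takes the message at face value, deceivers
# hurt such a listener. The listener picks one of three strategies.

random.seed(0)

DECEIVER_FRACTION = 0.2   # assumed share of parasitic senders
GAIN_IF_HONEST = 1.0      # payoff for acting on an honest message
LOSS_IF_FOOLED = -5.0     # payoff for acting on a deceptive message
FILTER_COST = 0.3         # assumed per-message cost of vetting the sender

def expected_value(strategy, n=100_000):
    """Average listener payoff per message under a given strategy."""
    total = 0.0
    for _ in range(n):
        deceptive = random.random() < DECEIVER_FRACTION
        if strategy == "believe_all":
            # be misled constantly by the deceptive minority
            total += LOSS_IF_FOOLED if deceptive else GAIN_IF_HONEST
        elif strategy == "ignore_all":
            # downgrade confidence in the whole network: no harm, no gain
            total += 0.0
        elif strategy == "filter":
            # pay the vetting cost, then act only on honest messages
            total += -FILTER_COST + (0.0 if deceptive else GAIN_IF_HONEST)
    return total / n

for s in ("believe_all", "ignore_all", "filter"):
    print(s, round(expected_value(s), 3))
```

With these particular numbers the ordering comes out filter > ignore_all > believe_all (roughly 0.5 > 0 > −0.2 per message), which is the “tension” in miniature: the protocol is valuable enough to be worth rescuing, but only by paying the filtering cost the deceivers impose.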