I mean, the analogy is “bad” insofar as the point that it supports is wrong. Like, the two things which are claimed to be analogous in a certain important way, are in fact not analogous in that important way. (Or so I claim!)
(It’s like if I said “the sky is like a glass dome; if you fly high enough, you’ll crash into it; and since the sky is indestructible, much like a glass dome is, you also can’t break through it”. Well, no, you in fact will not crash into the sky; in this way, it is precisely not like a glass dome. And of course glass is totally destructible. The analogy successfully communicates my beliefs about the sky—that it’s a solid barrier which can be crashed into but not broken through. Those beliefs happen to be totally wrong. The glass dome analogy is “bad” in that sense.)
If you desire another analogy: most computer traffic is not malware or exploits; nevertheless, it sure really matters a lot whether your specific message is malware or some kind of exploit.
As far as I can tell your comment doesn’t address this point directly?
True, I did not address that. I’ll do so now.
So, let’s recall what “it sure really matters a lot” means, specifically, in this context. The key claim from the earlier comment is this:
Communication doesn’t need to be “predominantly conflict” in order for it to be important to differentially signal that you are trying to have a more conflict focused or more descriptive-focused language
In the malware/exploit case, the analogous claim would be something like:
“Computer traffic doesn’t need to be ‘predominantly malware or exploits’ for it to be important to differentially signal that you are trying to send innocent, non-malicious data.”
Well… you can probably see the problem here. There are basically two scenarios:
There exists a totally unambiguous, formally (which usually means: cryptographically) verifiable signal of a data packet or message being non-malicious. That signal gets sent; we check it; if it doesn’t check out, we reject the data; the end (a minimal sketch of this “verify or reject” flow follows this list). (If the signal can be faked after all, then we’re just fucked.)
There is no such verifiable signal. In this case, malicious traffic is going to be sending all the signals of non-maliciousness that “good” traffic sends. “A differential signal of innocence is being intentionally sent” is almost completely worthless as a basis for concluding that the data is non-malicious. Instead, we have to use complicated Bayesian methods to sort good from bad (as in email spam filtering; a toy version is sketched below), or we have to enter into an arms race of requiring, and checking for, increasingly convoluted and esoteric micro-signals of validity (as in CAPTCHAs, user-agent sniffing, and all the other myriad tricks that websites use these days to protect themselves from abuse). (And any client that deliberately sends the signals we’re checking for is actually more likely to be a bad actor!)
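(To make the first scenario concrete, here’s a minimal sketch in Python of the “check the signal, reject if it doesn’t check out” flow. It uses a plain HMAC with a hypothetical pre-shared key, rather than full public-key signatures, just to keep the example self-contained.)

```python
import hmac
import hashlib

SECRET_KEY = b"shared-secret-key"  # hypothetical pre-shared key, for illustration only

def is_authentic(message: bytes, tag: bytes) -> bool:
    """Accept the message only if its tag verifies; otherwise reject it outright."""
    expected = hmac.new(SECRET_KEY, message, hashlib.sha256).digest()
    # Constant-time comparison, so the check itself can't be probed byte by byte.
    return hmac.compare_digest(expected, tag)

# A sender who holds the key can produce a tag that verifies...
msg = b"perfectly innocent data"
tag = hmac.new(SECRET_KEY, msg, hashlib.sha256).digest()
assert is_authentic(msg, tag)

# ...but anyone else's "trust me" signal simply fails the check and gets rejected.
assert not is_authentic(b"malicious payload", tag)
```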
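(And for the second scenario, a toy illustration of the “complicated Bayesian methods” route: a bare-bones naive Bayes classifier over made-up messages. A real spam filter is vastly more elaborate, but the shape of the inference is the same: nothing the sender says about itself is trusted; everything gets weighed as evidence.)

```python
import math
from collections import Counter

# Toy training data: known-good and known-bad messages (made up for illustration).
ham = ["meeting notes attached", "lunch tomorrow?"]
spam = ["free money click here", "click here to claim free prize"]

def word_counts(msgs):
    return Counter(w for m in msgs for w in m.split())

ham_counts, spam_counts = word_counts(ham), word_counts(spam)
vocab = set(ham_counts) | set(spam_counts)

def log_likelihood(msg, counts):
    total = sum(counts.values())
    # Laplace smoothing so unseen words don't zero out the whole estimate.
    return sum(math.log((counts[w] + 1) / (total + len(vocab))) for w in msg.split())

def looks_like_spam(msg: str) -> bool:
    # Equal priors for simplicity; a real filter would estimate those too.
    return log_likelihood(msg, spam_counts) > log_likelihood(msg, ham_counts)

print(looks_like_spam("claim your free money"))   # True
print(looks_like_spam("notes from the meeting"))  # False
```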
This situation… is also not analogous to “posting on a public discussion forum”, which looks nothing like either of the above cases.