Thank you for taking the time to explain your comment to me in detail; I very much appreciate it, along with the patience you’ve shown. I’m trying hard to understand your comments, and I will share my thoughts here as I try to interpret what you’ve written and understand the points that you’re making.
If I understand correctly, the assertion that you’re making is that the example that I’ve given does not actually represent a valid instance of the phenomenon that I’m describing. (Namely, having difficulty explaining something to another person due to that person’s biases and preconceptions, cultural or otherwise.) The purported example, you suggest, does not represent a realistic situation in which deception can make communication more accurate and honest, and this should lead us all to lower our probability estimate of this phenomenon being a real thing at all. If the phenomenon is not real, or even if it is very rare, then we must downgrade our probability estimate of how useful intentional deception can be in improving accuracy and effectiveness in communication.
In response to my analogy of a finger pointing to the moon you stated, “It is directly relevant to the core question of whether your characterization of this aspect of reality is accurate, and whether your suggested actions are appropriate.” I interpret this to mean that my finger isn’t actually pointing at the moon at all. It’s pointing someplace else, and that’s what you’re addressing.
Tentatively assuming that I’ve correctly understood what you’re saying, I want to acknowledge the flaws in this example that I used. First of all, it’s obvious that slavery does not equal or require racism. A slave owner is not necessarily racist and, as you correctly pointed out, may not even be white. That’s a given. Additionally, given the capacity of the human mind to function irrationally, an extremely racist person might be able to recognize that a racial minority individual is more educated and smarter than they are on some level, while simultaneously believing that that individual is still intrinsically inferior— including, possibly, being intellectually inferior. In other words, on some level, in some part of their brain, a KKK clansman might recognize that a given black man is well-educated and intelligent, regardless of whether or not this clashes with their overall worldview. (Which might be that blacks are universally intrinsically inferior, including being intellectually inferior). In other words, the hypothetical claim that a deeply racist person cannot recognize, in any way, on any level, to any degree, that Neil deGrasse Tyson is smart and well-educated is patently false.
An additional flaw in my choice of example is that it’s hard to even imagine a situation in which one could feasibly lie about a person’s race— apart from a lie of omission. In contrast, if I had used the example of a religious person who assumes that all atheists are sinful minions of Satan, we can easily imagine situations in which I might have to actively lie in order to compensate for the religious person’s biases and preconceptions. If, for example, I proposed to such an individual that they should vote for someone, or hire someone, or accept someone as their daughter’s new boyfriend, the chances that they would directly ask me if this person is a Christian are quite high. In response to that, I might be forced to lie actively. Race-related lies would mostly be limited to lies of omission. Hence, not a good example.
With that out of the way, here’s where I think we’re most likely talking past each other. I’m not reasoning on the basis of probability estimates. I’m describing a phenomenon that I perceive as being intrinsically empirical, but it’s an empirical phenomenon that few Americans have the requisite life experience to recognize and know. Most people have never lived for an extended time in a very different culture, and even those who have, in my experience, don’t often understand what’s going on when they experience a clash of cultures. They may experience culture shock, but they often misinterpret what they’re experiencing. In my experience, cross-cultural communication is a perfect empirical example of a situation in which worldviews, values, and beliefs can so strongly interfere with the sharing of experiences and ideas that one can’t simply convey reality as one perceives it through a literal translation. Knowing that most Americans reading my comment likely lack this sort of life experience, and therefore might doubt that this cross-cultural thing is a real phenomenon, I added the slave owner example as an afterthought. I gave that example, not as evidence in support of a hypothesis, but rather as a communication tool, to point to a familiar empirical example of extreme cognitive bias and the challenges that it can pose to simply sharing your own subjective perspective and being understood. I wasn’t making a logical argument. I was attempting to find an empirical experience that the reader would be more likely to have encountered in their own daily life—racism as a psychological bias.
The remaining unaddressed questions, then, are twofold:
1) Does this communication barrier, which I am claiming to be an empirical reality, exist at all?
2) Even assuming that it does— is well-meaning deception a viable tool for communication?
Addressing the second question first, I think it would be useful to clarify that I wasn’t intending to assert that lying (conveying a description that is not aligned with my own subjective understanding of things) is the best or only way to deal with the challenges that this purported empirical phenomenon poses. Rather, I was attempting to describe an additional form of honesty that the post’s author hadn’t mentioned, which I have used myself on multiple occasions and therefore know from experience exists. I’m not advocating it, or claiming that it’s the best way of dealing with this sort of problem. I’m simply reporting its existence as a form of honesty.
Addressing the first question, if the reader doubts that I am, in fact, describing an empirical phenomenon, and if the examples that I’ve given don’t serve to communicate the paradigm of this phenomenon in a compelling hypothetical way (making the possibility that such a phenomenon exists seem at least feasible), then I don’t think useful communication is possible here. If the examples I’ve given fall too far outside of the reader’s experience to be taken seriously, there’s no realistic hope of communicating satisfactorily on this question. I don’t think that Bayesian logic is a viable tool for conveying to another person the reality of this empirically-experienced phenomenon. (That cultural differences and personal biases can so strongly interfere with communication that counter-distorting one’s own utterances can actually serve to make communication more effective.) This is no one’s fault. Without some degree of common experience, communication is impossible even in principle.
You know, sometimes I think “my reaction to this comment is hardly worthy of a whole reply; I should just use one of them newfangled ‘react’ things”; and I log on to actual LW (as opposed to GW), and look for the react I want; and every time I do this, the react I want is not available.
For example, this comment I’m replying to would be perfect for an “obvious LLM slop” react. But there’s no such thing! Might this oversight be rectified, @habryka?
I agree it is poorly written, but I don’t think it is, strictly speaking, ‘LLM slop’. Or if it is, it’s not an LLM I am familiar with, or it’s an unusual usage pattern in some way… It’s just not written with the usual stylistic tics of ChatGPT (4o or o3), Claude-3/4, Gemini-2.5, or DeepSeek-r1.
For example, he uses a space after EM DASH but not before; no LLM does that (they either use no space or both before-after); he also uses ‘1) ’ number formatting, where LLMs invariably use ‘1. ’ or ‘#. ’ proper Markdown (and generally won’t add in stylistic redundancy like ‘are twofold’); he also doesn’t do the 4o ‘twist ending’ for his conclusion, the way a LLM would insist on. The use of sentence fragments is also unusual: LLMs insist on writing in whole sentences. The use of specific proper nouns like a ‘KKK clansman’ or ‘Neil deGrasse Tyson’ is unusual for a LLM (the former because it is treading close to forbidden territory, and the latter because LLMs are conservative in talking about living people). Then there is the condescension: a LLM chatbot persona is highly condescending, but in covert, subtle ways that require an appropriate context like tutoring, and they’re usually careful to avoid coming off as obviously condescending in a regular argumentative context like this and prefer sycophancy (presumably because it’s easy for a rater to notice a condescending style and get ticked off by it).
He probably used a LLM and lightly edited it. The non-LLM punctuation and references would come from the editing.
Hmm, I am not sure about the exact right wording, but yeah, I am into some kind of react that is “this looks like LLM slop”. I’ll think about adding it. A “too wordy” react or something like that would have also helped here.
Are you sure it’s good to provide confrontational/insulting/dismissive reacts? I think they give users an easy way to snipe at someone we disagree with or dislike, without providing any support for our criticism and without putting ourselves on the line in any way. (Yes, reacts can be downvoted, but this isn’t the same as making a comment that can be voted on and replied to.)
In effect, a harsh react is an asymmetrical, no-effort tool for making another user look or feel bad, and I don’t see why it’s necessary. If we don’t want to engage, we can always just downvote; if we want to provide more information than a downvote can convey, we can put in the small amount of effort required to write a brief reply.
Yep, it’s a difficult tradeoff, and we thought for a while about it. Overall I decided that it’s just too hard to have a react-palette that informs people about the local site culture without allowing negative/confrontational reacts.
Also one of the most frustrating things is having your interlocutor disappear without any explanation, and a one-react explanation is better than none, even if it’s a bit harsh.
Fair enough, thanks for explaining! Probably some of what I’m worried about can be mitigated by careful naming & descriptions. (e.g. I suspect you weren’t considering a literal “LLM slop” react, but if you were, I think something more gently and respectfully worded could be much less unpleasant to receive while conveying just as much useful information)