I want to be able to talk about the tribal-ish dynamics in how LW debates AI (which feels indeed pretty tribal and bad on multiple sides).
A thing that feels tricky about this is that talking about it in any reasonable concise way involves, well, grouping people into groups, which is sort of playing into the very tribal dynamic I’d like us to back out of.
If we weren’t so knee-deep in the problem, it’d seem plausible that the right move is just “try to be the non-tribal conversation you want to exist in the world.” But, we seem pretty knee-deep into it, and a few marginal reasonable conversations aren’t going to solve the problem.
My default plan is “just talk about the groups, add a couple caveats about it”, which seems better to me than not-doing-that. But I do wish I had a better option, and I’m curious for people’s takes.
Some instincts:
Try hard to name the positive things that different groups believe in rather than simply what they don’t like.
Try hard to name their strengths rather than simply what (others might see as) their flaws.
Default to talking fairly abstractly when possible, to avoid accidentally re-litigating a lot of specific conflicts that aren’t necessary, and to avoid making people feel singled out. (This is somewhat in conflict with the virtues of concreteness and precision; not quite sure how to describe the synthesis of these.)
Be very hesitant to reify groups or forces when it’s not necessary. Due to the way human psychology works, it’s very easy to bring into existence, through careless use of words and names, a political group or battle that didn’t previously exist. I think the biggest risk is in reifying groups or tribes or conflicts that don’t exist and don’t need to exist. (Link to me writing about this before.)
For instance, if Tom proposes a group norm of always including epistemic statuses at the top of posts, and there’s a conflict about it, there are better and worse ways of naming sides.
“The people who hate Tom” and “The people who like Tom” is a worse framing than “The people for mandatory epistemic statuses” and “The people against mandatory epistemic statuses”.
Also, “The people who care about epistemics” and “The people who don’t care about epistemics” is a worse framing than “The people who care about epistemics” vs “The people who care about creative freedom in posts” (or whatever captures the actual counterarguments and the values being traded off).
Of course, don’t say false things; these are heuristics to help things stay healthy and counteract human biases, rather than universal truths.
Some of my instincts are opposite to this. Full agreement with naming the positives in each group/position.
I think abstraction is often the enemy of crux-finding. When people are in far-mode, they tend to ignore the things that make for clear points of disagreement, and just assume that it’s a value difference rather than a belief difference. I think most of the tribal failures to communicate are from the default of talking abstractly.
Agreed that it’s often not necessary to identify or reinforce the group boundaries. Focus on the disagreements, and figure out how to proceed in the world where we don’t all agree on things.
I think the example of the epistemic status recommendation is a good one: this isn’t about groups, it’s about a legitimate disagreement about when it’s useful and when it’s wasteful or misleading. It’s useful if it gets debated (and I have to say, I haven’t noticed this debate) to clarify that it’s OK as long as it’s the poster’s/commenter’s choice, and that it’s just another tool for communication.
FYI it is a hypothetical example.
I’m clueless enough, and engineering-minded enough, that hypothetical examples don’t help me understand or solve a problem.
I suspect I should have just stayed out, or asked for a clearer problem description. I don’t really feel tribal-ish in myself or my interactions on the site, so I suspect I’m just not part of either the problem or the solution. PLEASE let me know (privately or publicly) if this is incorrect.
(I think, by ‘positive’, Ben meant “explain positions that the group agrees with” rather than “say some nice things about each group”)
It seems like everyone is tired of hearing every other group’s opinions about AI. Since like 2005, Eliezer has been hearing people say a superintelligent AI surely won’t be clever, and has had enough. The average LW reader is tired of hearing obviously dumb Marc Andreessen accelerationist opinions. The average present harms person wants everyone to stop talking about the unrealistic apocalypse when artists are being replaced by shitty AI art. The average accelerationist wants everyone to stop talking about the unrealistic apocalypse when they could literally cure cancer and save Western civilization. The average NeurIPS author is sad that LLMs have made their expertise in Gaussian kernel wobblification irrelevant. Various subgroups of LW readers are dissatisfied with people who think reward is the optimization target, Eliezer is always right, or discussion is too tribal, or whatever.
With this, combined with how Twitter distorts discourse, is it any wonder that people need to process things as “oh, that’s just another claim by X group, time to dismiss”? Anyway, I think naming the groups isn’t the problem, and so naming the groups in the post isn’t contributing to the problem much. The important thing to address is why people find it advantageous to track these groups.
fwiw this seems basically like what’s happening, to me. (The comment reads kinda defeatist about it, but I’m not entirely sure what you were going for, and the model seems right, if incomplete. [edit: I agree that several of the statements about entire groups are not literally true for the entire group; when I say ‘basically right’ I mean “the overall dynamic is an important gear, and I think among each group there’s a substantial chunk of people who are tired in the way Thomas depicts”])
On my own end, when I’m feeling most tribal-ish or triggered, it’s when someone/people are looking to me like they are “willfully not getting it”. And, I’ve noticed a few times on my end where I’m sort of willfully not getting it (sometimes while trying to do some kind of intellectual bridging, which I bet is particularly annoying).
I’m not currently optimistic about solving Twitter.
The angle I felt most optimistic about on LW is aiming for a state where a few prominent-ish* people… feel like they get understood by each other at the same time, and can chill out at the same time. This maybe works IFF there are some people who:
a) aren’t completely burned out on the “try to communicate / actually have a good group epistemic culture about AI” project.
b) are prominent / intellectually-leader-y enough that, if they (a few people on multiple sides/angles of the AI-situation-issue), all chilled out at the same time, it’d meaningfully radiate out and give people more of a sense of “okay things are more chill now.”
c) are willing to actually seriously doublecrux about it (i.e. having some actual open curiosity, both people trying to paraphrase/pass ITTs, both people trying to locate and articulate the cruxes beneath their cruxes, both people making an earnest effort to be open to changing their mind).
My shoulder Eliezer/Nate/JohnW/Rohin/Paul pop up to say “this has been tried, dude, for hundreds of hours”, and my response is:
has it, really? The only person I ever saw seeming like they were trying to do the curious/actually-maybe-learn move was Richard Ngo in the MIRI Dialogues. (Probably there have been some other attempts in sporadic LW convos, but not in a really focused-effort way, and not, that I recall, where at least one participant wasn’t sort of radiating “ugh I hope we can get this over with”.)
I dunno, dude, I think we’ve actually made progress, both at articulating subtle underlying intellectual cruxes and at improving our conversational technology.
(Maybe @Eli Tyre can say if he thinks True Doublecrux has ever been tried on this cluster of topics)
–––
...that was my angle something like 3 months ago. Since then, someone argued another key piece of the model:
There is something in the ecosystem that is going to keep generating prominent Things-Are-Probably-Okay people, no matter how much doublecruxing and changing of minds happens. People in fact really want things to be okay, so whenever someone shows up with some kind of sophisticated-sounding reasoning for why maybe things are okay, some kind of egregore will re-orient and elevate them to the position of “Things-Are-Probably-Okay-ish intellectual leader”. (There might separately be something that keeps generating Things-Probably-Aren’t-Okay people, maybe a la Val’s model here. I don’t think it tends to generate intellectual thought leaders, but I might be wrong.)
If the true underlying reality turns out to be “actually, one should be more optimistic about alignment difficulty, or whether leading companies will do reasonable things by default”, then, hypothetically, it could resolve in the other direction. But, if there’s not a part of a plan that somehow deals with it, it makes sense for Not-Okay-ists to be less invested in actually trying.
–––
Relatedly: new people are going to keep showing up, who haven’t been through 100 hours of attempted nuanced arguing, who don’t get all the points, and people will keep being annoyed at them.
And there’s something like… intergenerational trauma, where the earlier generation of people who have attempted to communicate and are just completely fed up with people not listening, are often rude/dismissive of people who are still in the process of thinking through the issue (though, notably, sometimes while also still kinda willfully not listening), and then the newer generation is like “christ why was that guy so dismissive?”
In particular, the newer person might totally have a valid nontrivial point they are making, but it’s entangled with some other point the other person thinks is obviously dumb, so older Not-Okay-ists end up dismissing the whole thing.
–––
(Originally this used “pessimist” and “optimist” as the shorthand, but I decided I didn’t like that, because it’s easier to interpret as “optimism/pessimism as a disposition, rather than a property of your current beliefs”, which seemed to do more bad reifying.)
Strong agree. @Raemon Maybe just have a button that says “I don’t want to debate your AI take and will tap out” that people can use whenever a non-AI conversation ends up steering into an AI conversation.
AI is a religious divide at this point, and people who consistently bring religious debates into casual conversation are correctly booted out of the social group. There is an approved place and time for it.
I feel like Ray was trying to open up this conversation with respect and carefulness and then you kind of trampled over that.
This was not my intention, though I could have been more careful. Here are my reasons:
The original comment seemed really vague, in a way that often dooms conversations. Little progress can be made on most problems without pointing out the specific key reasons for them. The key point to make is that tribalism in this case doesn’t arise spontaneously based on identities alone; it has micro-level causes, which in turn have macro-level causes.
I thought Ray wanted to discuss what to do for a broader communication strategy, rather than open up the conversation here, so replying in shortform would be fine because the output would get >20x the views (this is where I could have realized LW shortform has a high profile now, and toned it down somehow).
I am also frustrated about tribalism, and was reporting from experience about what I notice, in a somewhat exaggerated way. If there is defeatism, this is its source, though I don’t think addressing it is impossible; I just don’t have any ideas.
If people replied to me with object-level, twitterbrained comments about how e.g. everyone has to unite against Marc Andreessen, I would be super sad. Hopefully we’re better than that.
fwiw I thought Thomas’ comment was fine, if a bit defeatist-feeling.
The comment painted lots of groups of people with a broad brush, mainly associating them with negative feelings that I don’t believe most of them experience very much or at all; it (I think?) implied it had named all the groups (while naming ~none of the groups I natively think of); and then it said that naming groups wasn’t a problem to be worried about.
It’s not norm-violating, but IMO it was not a good move in the dance of “sanely discuss local politics”.
A few reasons I don’t mind the Thomas comment:
I’ve found it’s often actually better for group processing of ideas-and-emotions for there to be nonzero ranting about what you really feel in your heart, even if it’s not fully accurate. (This is also very risky, and having it be net-positive is tricky, but often when I see people trying to dance around the venting, you can feel it leaking through the veneer of politeness.)
I think Thomas’s comment is slightly exaggerated but… idk, basically correct in overall thrust, and an important gear? (I agree that whenever you say “this group thinks X”, obviously lots of people in that group will not think X.)
While the comment paints people with a negative brush, it does paint a bunch of different people with that brush, such that, in my reading, it’s more about painting the overall dynamic in a negative light than about the individual people.
I definitely agree it is not the best kind of comment Thomas could have written, and I hope it’s not representative of the average quality of comment in this discussion; it just seemed to me the LW mod reactions to it were extreme and slightly isolated-demand-for-rigor-y.
(I do want this thread to be one where overall people are some kind of politically careful, but I don’t actually have that strong a guess as to what the best norms are. I view this as sort of the prelude to a later conversation with a better-set container)
I think the dynamic Thomas pointed to is more helpful and accurate than the specifics, which seem to me like inaccurate glosses.
I agree that ranting and emotional writing is part of healthy processing of information in humans. Insofar as someone is ranting imprecisely and recklessly about groups and their attitudes while trying to understand local politics, I wish they would take some care to note that this isn’t the default standard for comments and that it’s more likely to produce inaccuracies and be misleading, rather than walking straight into it seemingly unaware of the line being crossed. It’s the standard thing about line-crossing: the problem isn’t in choosing to cross the line, it’s in seemingly not being aware that there is a line at all.
I was aware there was some line, but thought it was “don’t ignite a conversation that derails this one” rather than “don’t say inaccurate things about groups”, which is why I listed lots of groups rather than one, and declined to list actively contentious topics like timelines or Matthew Barnett’s opinions.
I feel like Thomas was trying to contribute to this conversation by making an intellectually substantive on-topic remark and then you kind of trampled over that with vacuous content-free tone-policing.
It’s not content-free! I gave a bunch of examples of useful heuristics for navigating discussions of tribalism in my other comment, and Thomas did the opposite of most of them (naming groups by things they didn’t like rather than things they stood for, carelessly reifying group attitudes, needlessly relitigating old conflicts).
I also don’t think it’s tone-policing. It’s about how we talk about a subject that humans are pretty biased about. It seems about as tone-policing as if I said “let’s try to discuss the problem from many angles before proposing solutions” and then someone came in and proposed solutions and I pushed back on that. There is real advice about how to discuss difficult topics well, and it’s not all centrally free-speech violations.
I think the thing Zack meant was content-free was your response to Thomas’ response, which didn’t actually explain the gears of why Thomas’ comment felt tramplingly bad.
I see! That makes sense. I hoped it was clear from the surrounding context in the thread, but I will endeavor in future to link to my comments elsethread for reference.
What are the groups?
(I will abstractly state that I feel negatively towards the group dynamics around some AI debates in the broader EA/LW/AI/X-derisking sphere, e.g. about timelines; so, affirming that I feel “knee-deep” in something, or I would if my primary activity were about that; and affirming that addressing this in a gradual-unraveling way could be helpful.)
My take is that this does need to be addressed, but it should be done very carefully so as not to make the dynamic worse.
I have many post drafts on this topic. I haven’t published any because I’m very much afraid of making the tribal conflict worse, or of being ostracized from one or both tribes.
Here’s an off-the-cuff attempt to address the dynamics without pointing any fingers or even naming names. It might be too abstract to serve the purposes you have in mind, but hopefully it’s at least relevant to the issue.
I think it’s wise (or even crucial) to be quite careful, polite, and generous when addressing views you disagree with on alignment. Failing to do so runs a large risk that your arguments will backfire and delay converging on the truth of crucial matters. Strongly worded arguments can engage emotions and ideological affiliations. The field of alignment may not have the leeway for internal conflict distorting our beliefs and distracting us from making rapid progress.
I do think it would be useful to address those tribal-ish dynamics, because I think they’re not just distorting the discussions, they’re distorting our individual epistemics. I think motivated reasoning is a powerful force, in conjunction with the cognitive limitations that keep us from weighing all the evidence and arguments in complex domains.
I’m less worried about naming the groups than I am about causing more logic-distorting, emotional reactions by speaking ill of dearly-held beliefs, arguments, and hopes. When naming the group dynamics, it might be helpful to stress individual variation, e.g. “individuals with more of the empiricist (or theorist) outlook”.
In most of society, arguments don’t do much to change beliefs. It’s better in more logical/rational/empirically leaning subcultures like LessWrong, but we shouldn’t assume we’re immune to emotions distorting our reasoning. Forceful arguments are often implicitly oppositional, confrontational, and insulting, and so have blowback effects that can entrench existing views and ignite tribal conflicts.
Science gets past this on average, given enough time. But the aphorism “science progresses one funeral at a time” should be chilling in this field.
We probably don’t have that long to solve alignment, so we’ve got to do better than traditional science. The alignment community is much more aware of and concerned with communication and emotional dynamics than the field I emigrated from, and probably from most other sciences. So I think we can do much better if we try.
Steve Byrnes’ Valence sequence is not directly about tribal dynamics, but it is indirectly quite relevant. It’s about the psychological mechanisms that tie ideas, arguments, and group identities to emotional responses (it focuses on valence, but the same steering-system mechanisms apply to other specific emotional responses as well). It’s not a quick read, but it’s a fascinating lens for analyzing why we believe what we do.
I’m actually not sure what this refers to. E.g. when Boaz Barak’s posts spark discussion, it seems pretty civil and centered on the issues. The main disagreements don’t necessarily get resolved, but at least they get identified, and I didn’t pick up on any serious signs of tribalism.
But maybe this is me skipping over the offending comments (I tend to ignore things that don’t feel intellectually interesting), or this is not an example of the dynamic that you refer to?
Perhaps: are there ways to make it easy for the groups you name to not necessarily become the group names the discussion settles on?