It seems like everyone is tired of hearing every other group’s opinions about AI. Since like 2005, Eliezer has been hearing people say a superintelligent AI surely won’t be clever, and has had enough. The average LW reader is tired of hearing obviously dumb Marc Andreessen accelerationist opinions. The average present harms person wants everyone to stop talking about the unrealistic apocalypse when artists are being replaced by shitty AI art. The average accelerationist wants everyone to stop talking about the unrealistic apocalypse when they could literally cure cancer and save Western civilization. The average NeurIPS author is sad that LLMs have made their expertise in Gaussian kernel wobblification irrelevant. Various subgroups of LW readers are dissatisfied with people who think reward is the optimization target, Eliezer is always right, or discussion is too tribal, or whatever.
Combine this with how Twitter distorts discourse, and is it any wonder that people need to process things as "oh, that's just another claim by X group, time to dismiss"? Anyway, I think naming the groups isn't the problem, and so naming the groups in the post isn't contributing to the problem much. The important thing to address is why people find it advantageous to track these groups.
fwiw this seems like basically what's happening, to me. (The comment reads kinda defeatist about it, but I'm not entirely sure what you were going for, and the model seems right, if incomplete. [edit: I agree that several of the statements about entire groups are not literally true for the entire group; when I say 'basically right' I mean "the overall dynamic is an important gear, and I think among each group there's a substantial chunk of people who are tired in the way Thomas depicts".])
On my own end, when I'm feeling most tribal-ish or triggered, it's when people look to me like they are "willfully not getting it". And I've noticed a few times on my end where I'm sort of willfully not getting it (sometimes while trying to do some kind of intellectual bridging, which I bet is particularly annoying).
I'm not currently optimistic about solving Twitter.
The angle I felt most optimistic about on LW is aiming for a state where a few prominent-ish* people… feel like they get understood by each other at the same time, and can chill out at the same time. This maybe works IFF there are some people who:
a) aren’t completely burned out on the “try to communicate / actually have a good group epistemic culture about AI” project.
b) are prominent / intellectually-leader-y enough that, if they (a few people on multiple sides/angles of the AI-situation-issue), all chilled out at the same time, it’d meaningfully radiate out and give people more of a sense of “okay things are more chill now.”
c) they are willing to actually seriously doublecrux about it (i.e. having some actual open curiosity, both people trying to paraphrase/pass ITTs, both people trying to locate and articulate the cruxes beneath their cruxes, both people making an earnest effort to be open to changing their mind)
Shoulder Eliezer/Nate/JohnW/Rohin/Paul pop up to say "this has been tried, dude, for hundreds of hours", and my response is:
has it, really? The only person I ever saw seeming like they were trying to do the curious/actually-maybe-learn move was Richard Ngo in the MIRI Dialogues. (Probably there have been some other attempts in sporadic LW convos, but not in a really focused-effort way, and not, that I recall, without at least one participant sort of radiating "ugh, I hope we can get this over with".)
I dunno dude, I think we've actually made progress, both at articulating subtle underlying intellectual cruxes and at improving our conversational technology.
(Maybe @Eli Tyre can say if he thinks True Doublecrux has ever been tried on this cluster of topics)
–––
...that was my angle something like 3 months ago. Since then, someone argued another key piece of the model:
There is something in the ecosystem that is going to keep generating prominent Things-Are-Probably-Okay people, no matter how much doublecruxing and changing minds happens. People in fact really want things to be okay, so whenever someone shows up with some kind of sophisticated-sounding reasoning for why maybe things are okay, some kind of egregore will re-orient and elevate them to the position of "Things-Are-Probably-Okay-ish intellectual leader". (There might separately be something that keeps generating Things-Probably-Aren't-Okay people, maybe a la Val's model here. I don't think it tends to generate intellectual thought leaders, but I might be wrong.)
If the true underlying reality turns out to be "actually, one should be more optimistic about alignment difficulty, or about whether leading companies will do reasonable things by default", then, hypothetically, it could resolve in the other direction. But if there isn't a part of the plan that somehow deals with this dynamic, it makes sense for Not-Okay-ists to be less invested in actually trying.
–––
Relatedly: new people are going to keep showing up who haven't been through 100 hours of attempted nuanced arguing and who don't get all the points, and people will keep being annoyed at them.
And there’s something like… intergenerational trauma, where the earlier generation of people who have attempted to communicate and are just completely fed up with people not listening, are often rude/dismissive of people who are still in the process of thinking through the issue (though, notably, sometimes while also still kinda willfully not listening), and then the newer generation is like “christ why was that guy so dismissive?”
In particular, the newer person might totally have a valid nontrivial point they are making, but it’s entangled with some other point the other person thinks is obviously dumb, so older Not-Okay-ists end up dismissing the whole thing.
–––
(Originally this used "pessimist" and "optimist" as the shorthand, but I decided I didn't like that because it is easier to interpret as "optimism/pessimism as a disposition, rather than a property of your current beliefs", which seemed to do more bad reifying.)
Strong agree. @Raemon Maybe just have a button that says “I don’t want to debate your AI take and will tap out” that people can use whenever a non-AI conversation ends up steering into an AI conversation.
AI is a religious divide at this point, and people who consistently bring religious debates into casual conversation are correctly booted out of the social group. There is an approved place and time for it.
I feel like Ray was trying to open up this conversation with respect and carefulness and then you kind of trampled over that.
This was not my intention, though I could have been more careful. Here are my reasons:
The original comment seemed really vague, in a way that often dooms conversations. Little progress can be made on most problems without pointing out the specific key reasons for them. The key point to make is that tribalism in this case doesn't arise spontaneously from identities alone; it has micro-level causes, which in turn have macro-level causes.
I thought Ray wanted to discuss what to do for a broader communication strategy, so replying in shortform would be fine because the output would get >20x the views (this is where I could have realized LW shortform has a high profile now, and toned it down somehow), rather than opening up the conversation here.
I am also frustrated about tribalism, and was reporting from experience, in a somewhat exaggerated way, about what I notice. If there is defeatism, this is its source; I don't think addressing it is impossible, I just don't have any ideas.
If people replied to me with object-level, Twitter-brained comments about how, e.g., everyone has to unite against Marc Andreessen, I would be super sad. Hopefully we're better than that.
fwiw I thought Thomas’ comment was fine, if a bit defeatist-feeling.
The comment painted lots of groups of people with a broad brush, mainly associating them with negative feelings that I don't believe most of them experience very much or at all; it (I think?) implied it had named all the groups (while naming ~none of the groups I natively think of); and then it said that naming groups wasn't a problem to be worried about.
It’s not norm-violating, but IMO it was not a good move in the dance of “sanely discuss local politics”.
A few reasons I don’t mind the Thomas comment:
I've found it's often actually better for group processing of ideas-and-emotions for there to be nonzero ranting about what you really feel in your heart, even if it's not fully accurate. (This is also very risky, and having it be net-positive is tricky, but often when I see people trying to dance around the venting, you can feel it leaking through the veneer of politeness.)
I think Thomas’s comment is slightly exaggerated but… idk basically correct in overall thrust, and an important gear? (I agree whenever you say “this group thinks X”, obviously lots of people in that group will not think X)
While the comment paints people with a negative brush, it paints a bunch of different people that way, such that, in my reading, it's more about painting the overall dynamic in a negative light than the individual people.
I definitely agree it is not the best kind of comment Thomas could have written, and I hope it's not representative of the average quality of comment in this discussion; it just seemed to me the LW mod reactions to it were extreme and slightly isolated-demand-for-rigor-y.
(I do want this thread to be one where overall people are some kind of politically careful, but I don’t actually have that strong a guess as to what the best norms are. I view this as sort of the prelude to a later conversation with a better-set container)
I think the dynamic Thomas pointed to is more helpful and accurate than the specifics, which seem to me like inaccurate glosses.
I agree that ranting and emotional writing are part of healthy processing of information in humans. Insofar as someone is ranting imprecisely and recklessly about groups and their attitudes while trying to understand local politics, I wish some care were taken to note that this isn't the default standard for comments and that it's more likely to produce inaccuracies and be misleading, rather than walking straight into it seemingly unaware of the line being crossed. It's the standard thing about line-crossing: the problem isn't in choosing to cross the line, it's in seemingly not being aware that there is a line at all.
I was aware there was some line, but thought it was "don't ignite a conversation that derails this one" rather than "don't say inaccurate things about groups", which is why I listed lots of groups rather than one and declined to list actively contentious topics like timelines, IABIED reviews, or Matthew Barnett's opinions.
I feel like Thomas was trying to contribute to this conversation by making an intellectually substantive on-topic remark and then you kind of trampled over that with vacuous content-free tone-policing.
It's not content-free! I gave a bunch of examples of useful heuristics for navigating discussions of tribalism in my other comment, and Thomas did the opposite of most of them (he named groups by things they didn't like rather than things they stood for, carelessly reified group attitudes, and needlessly relitigated old conflicts).
I also don't think it's tone-policing. It's about how we talk about a subject that humans are pretty biased about. It seems about as tone-policing as if I said "let's try to discuss the problem from many angles before proposing solutions" and then someone came in and proposed solutions and I pushed back on that. There is real advice about how to discuss difficult topics well, and it's not all centrally free-speech violations.
I think the thing Zack meant was content-free was your response to Thomas' response, which didn't actually explain the gears of why Thomas' comment felt tramplingly bad.
I see! That makes sense. I hoped it was clear from the surrounding context in the thread, but I will endeavor in future to link to my comments elsethread for reference.