Related to the earlier discussion of weighted voting allegedly facilitating groupthink: https://www.lesswrong.com/posts/kxhmiBJs6xBxjEjP7/weighted-voting-delenda-est
An interesting litmus test for groupthink might be: What has LW changed its collective mind about? By that I mean: the topic was discussed on LW, there was a particular position on the issue that was held by the majority of users, new evidence/arguments came in, and now there’s a different position which is held by the majority of users. I’m a bit concerned that nothing comes to mind which meets these criteria? I’m not sure it has much to do with weighted voting because I can’t think of anything from LW 1.0 either.
Replication Crisis definitely hit hard. Lots of stuff there.
People’s AI timelines have changed quite a bit. People used to plan around 50-60 years; now it’s much more like 20-30 years.
Bayesianism is much less the basis for how people think about rationality here. I think this one is still propagating, but I think Embedded Agency had a big effect here, at least on me and a bunch of other people I know.
There were a lot of shifts along the spectrum from “just do explicit reasoning for everything” to “figuring out how to interface with your System 1 sure seems really important”. I think Eliezer was mostly ahead of the curve here, and early on in LessWrong’s lifetime we kind of fell prey to following our own stereotypes.
A lot of EA-related stuff. Like, there is now a lot of good analysis and thinking about how to maximize impact, and if you read old EA-adjacent discussions, they sure strike me as getting a ton of stuff wrong.
Spaced repetition. I think the pendulum on this swung somewhat too far, but I think people used to be like “yeah, spaced repetition is just really great and you should use it for everything” and these days the consensus is more like “use spaced repetition in a bunch of narrow contexts, but overall memorizing stuff isn’t that great”. I do actually think rationalists are currently underusing spaced repetition, but overall I feel like there was a large shift here.
Nootropics. I feel like in the past many more people were like “you should take this whole stack of drugs to make you smarter”. I see that advice a lot less, and would advise many fewer people to follow that advice, though I’m not actually sure how much I reflectively endorse that.
A bunch of AI Alignment stuff in the space of “don’t try to solve the AI Alignment problem directly, instead try to build stuff that doesn’t really want to achieve goals in a coherent sense and use that to stabilize the situation”. I think this was kind of similar to the S1 stuff, where Eliezer seemed ahead of the curve, but the community consensus was kind of behind.
I feel like there was a mass community movement (not unanimous but substantial) from AGI-scenarios-that-Eliezer-has-in-mind to AGI-scenarios-that-Paul-has-in-mind, e.g. more belief in slow takeoff + multipolar + “What Failure Looks Like” and less belief in fast takeoff + decisive strategic advantage + recursive self-improvement + powerful agents coherently pursuing misaligned goals. This was mostly before my time, I could be misreading things, that’s just my impression. :-)
Seems true. Notably, if I have my cynical hat on (and I think I probably do?), it depended on having Paul say a bunch of things about it, and Paul had previously also established himself as a local “thinker celebrity”.
If I have my somewhat less cynical hat on, I do honestly think our status gradients do a decent job of tracking “person who is actually good at figuring things out”, such that “local thinker celebrity endorses a thing” is not just crazy, it’s a somewhat reasonable filtering mechanism. But I do think the effect is real.
Priming? Though that does feel like a fairly weak example.
Sequences: Beisutsukai
One year later: Extreme Rationality: It’s Not That Great