Nice to hear about the high standards you continue to pursue. I agree that LessWrong should hold itself to much higher standards than other communities, even other rationality-centred or -adjacent ones.
My model of this big effort to raise the sanity waterline and prevent existential catastrophes contains three concentric spheres. The outer sphere is all of humanity: ever-changing yet largely passive. Its public opinion influences most of the decisions of world leaders and companies, but that opinion can in turn be swayed by other, more directed forces.
The middle sphere contains communities focused on spreading important ideas and fostering rationalist discourse around them (for example, ACX, Asterisk Magazine, or Vox’s Future Perfect). Its aim, in other words, is to exercise that capacity to sway public opinion and bring key ideas into popular discussion.
And the inner sphere is LessWrong, which shares the aims of the middle sphere and, in addition, is the main source of new ideas and patterns of thought. Some of these ideas (hopefully a concern for AI alignment, awareness of the control problem, or Bayesianism, for instance) will eventually trickle down to the general public; others, such as technical topics in AI safety, don’t need to go that far down, because they belong at the higher end of the spectrum, among the people directly working to solve these problems.
So I very much agree with the vision of maintaining LW as a sort of university, with high entry barriers that produce refined, high-quality ideas and debates, while keeping in mind that for some of these ideas to make a difference, they need to trickle down and reach the public debate.
Could we take from Eliezer’s message the need to redirect more effort into AI policy and into widening the Overton window, to try, in any way we can, to give AI safety research the time it needs? As Raemon said, the Overton window may already be widening, making more ideas “acceptable” to discuss, but it doesn’t seem to be enough. I would say the typical response of the overwhelming majority of the population and of world leaders to concerns about misaligned AGI is still to treat them as panicky sci-fi dystopia rather than to say “maybe we should stop everything we’re doing and not build AGI”.
I’m wondering whether insufficient attention to AI policy might be a coordination failure of the AI alignment community: from an individual perspective, the best option for someone who wants to reduce existential risk is probably to do technical AI safety work rather than AI policy work, because policy and advocacy are most effective when done by a large number of people, shifting public opinion and the Overton window. Plus, it’s extremely hard to make yourself heard and to influence entire governments, given the election cycles, incentives, short-term thinking, and bureaucracy that govern politics.
Maybe, now that AI is starting to cause turmoil and enter popular debate, it’s time to ride this wave and improve the coordination of the AI alignment community. The main issue is not whether a solution to AI alignment is possible, but whether there will be enough time to come up with one. And the biggest factors affecting those timelines are probably (1) big corporations and governments, and (2) how many people work on AI safety.