Nice to hear the high standards you continue to pursue. I agree that LessWrong should set itself much higher standards than other communities, even than other rationality-centred or -adjacent communities.
My model of this big effort to raise the sanity waterline and prevent existential catastrophes contains three concentric spheres. The outer sphere is all of humanity: ever-changing yet largely passive. Its public opinion influences most of the decisions of world leaders and companies, but that opinion can be swayed by other, more directed forces.
The middle sphere contains communities focused on spreading important ideas by fostering rationalist discourse (for example, ACX, Asterisk Magazine, or Vox’s Future Perfect). It aims, in other words, at that capacity to sway public opinion: to make key ideas enter popular discussion.
And the inner sphere is LessWrong, which shares the aims of the middle sphere and, in addition, is the main source of new ideas and patterns of thought. Some of these ideas (hopefully a concern for AI alignment, awareness of the control problem, or Bayesianism, for instance) will eventually trickle down to the general public; others, such as technical topics related to AI safety, don’t need to go down to that level because they belong to the higher end of the spectrum that is directly working to solve these issues.
So I very much agree with the vision to maintain LW as a sort of university, with high entry barriers in order to produce refined, high-quality ideas and debates, while at the same time keeping in mind that for some of these ideas to make a difference, they need to trickle down and reach the public debate.
As others have pointed out, there’s a difference between a) problems to be tackled for the sake of the solution, and b) problems to be tackled for the sake (or fun) of the problem. Humans like challenges and puzzles, and like to solve things themselves rather than have the answers handed down to them. Global efforts to fight cancer can be inspiring, and I would guess a motivation for most medical researchers is their own involvement in that process. But if we could push a button to eliminate cancer forever, no sane person would refuse to press it.
I think we should aim to have all of a) solved as soon as possible (at least those problems above a certain threshold of importance), and to maintain b). At the same time, I suspect that the value we attach to b) also bears some relation to the importance of the solution to those problems. For example, a theoretical problem can be much more immersive, and ultimately rewarding, when the whole of civilisation is at stake than when it’s a trivial puzzle.
So I wonder how to maintain b) once the important solutions can be provided much faster and more easily by another entity or superintelligence. Maybe with fully immersive simulations that reproduce, say, the situation and experience of trying to find a cure for cancer, or with large-scale puzzles (such as escape rooms) that are not life-or-death (nor happiness-or-suffering).