You seem to be in the mindset that everything is as EY/LW says … but there is precious little evidence for that outside the echo chamber.
An entire school system (or at least an entire network of universities, with university-level funding) focused on Sequences-style rationality in general and AI alignment in particular.
Is there evidence that extreme rationality works? Is there evidence that the people with real achievements—LeCun, Ng, etc.—are actually crippled by a lack of rationality?
Can you teach alignment separately from AI? (cf can you teach safety engineering to someone who doesn’t know engineering?)
Indefinitely-long-timespan basic minimum income for everyone who is working solely on AI alignment.
How do you separate people who are actually working on alignment from scammers? How do you motivate them to produce results with an unconditional, indefinite-term payment? Would a minimum income be enough to allow them to buy equipment and hire assistants? (Of course, all of these problems are solved by conventional research grants.)
Again, you seem to be making the High Rationalist assumption that alignment is a matter of some unqualified person sitting in a chair thinking, not of a qualified person doing practical work.