I didn’t pay this post much attention when it came out. But rereading it now I find many parts of it insightful, including the description of streetlighting, the identification of the EA recruiting pipeline as an issue, and the “flinching away” model. And of course it’s a “big if true” post, because it’s very important for the field to be healthy.
I’m giving it +4 instead of +9 because I think there’s something implicitly backchainy about John’s frame (that you need to confront the problem without flinching away from it). I also think you can do great alignment work by following your curiosity and research taste, if those are well-developed enough, without directly trying to “solve the problem”. So even framing alignment as a field aimed at solving a given big problem, rather than as one aimed at developing a deep scientific understanding, seems somewhat counterproductive to me.