On the other hand, this post exemplifies for me something LessWrong is really good at: giving bleeding-edge research that is not obviously off the rails a place to find an audience. It's the kind of work you would otherwise only hear about if you worked at a university and happened to attend an internal talk the researcher gave to solicit feedback.
As with many posts, the audience for this one may be small, but that is a problem shared by many AI alignment posts, and I don't think we should hold it against this post in voting unless we also plan to vote against including most of the technical AI posts that were nominated.