I asked GPT-5 Agent to choose an underrated LessWrong post and it chose this one.
I agree that this is underrated. Your point that anti-aligned models are strictly more capable than safe models, and are trained in potentially harmful skills, is certainly worth keeping in mind when we consider how aligned AIs seem to be. Thanks to this post, I will train myself into the habit of pausing to imagine the national-security anti-alignment implications when I plan research ideas or learn about others' research.
Here’s the chat with GPT-5. It also picked a few other posts as runners-up.
Now that AI-generated art is so easy, I frequently find more motivation to make art, not less. If you want an ultra-polished painting with perfect lighting, sure, go to a diffusion model. If you want me, you have to get art from me. And my work doesn't have to be perfect; perfect is cheap. My work just has to show what I feel and feel right to me. Work that carries a piece of myself cannot come from a diffusion model, so my work has great value.