I mean, I just think the post isn’t very good. It’s kinda funny, but it’s not THAT funny, and also it’s not the kind of funny that is appropriate for curation. Overall I think curating it wasted a bunch of people’s time and somewhat eroded the commons.
But, I also don’t expect to convince you personally? Because you are a trickster archetype. Trickster archetypes have a valuable place in society but that place is not in choosing curated posts for LW. So the people I am petitioning to un-curate this are the other LW mods.
I’m sure you’ve had lots of discussion about this; why the label “AI alignment”?
I think “alignment” refers to the somewhat specific task of aligning an AI’s values to human values. But my understanding of your actual scope is more like “theoretical AI safety”. A lot of foundational work is done with the intention that it will eventually help with alignment, but definitely isn’t about alignment itself, and a lot of theoretical AI safety work isn’t about alignment per se at all. For example, some of my research problems are trying to understand which types of AI systems are not dangerous, not because their values are aligned with ours, but because they’re not unrestrained consequentialists.