Curated. This post feels timely and warranted given the current climate. I think we, in our community, were already at some risk of throwing out our minds a decade ago, but the risk was smaller when it was easy to think timelines were 30-60 years: that allowed more time for play. Now, with so much evidence of imminence and more people doing more things, AI x-risk isn't a side interest for many but a full-time occupation, and I think we're almost colluding in creating a culture that doesn't allow time for play. I like that this post makes the case for pushing back.
Further, this post points at something I want to reclaim for the spirit of LessWrong, something I feel used to be more palpable than it is now. Random posts about There's no such thing as a tree (phylogenetically) or random voting theory posts felt rooted in this kind of play – the raw interest and curiosity of the author rather than some urgent importance of the topic. The concerns that make me want to boost the default prominence of rationality and world modeling posts (see LW Filter Tags (Rationality/World Modeling now promoted in Latest Posts)) are not that I don't like the AI posts, but in large part that I want to see more of the playful posts of yore.