I like the original post and I like this one as well. I don’t need convincing that x-risk from AI is a serious problem. I have believed this since my sophomore year of high school (which is now 6 years ago!).
However, I worry that readers are going to look at this post and the original, and use the karma and the sentiment of the comments to update on how worried they should be about 2026. There is a strong selection effect among people who post, comment, and upvote on LessWrong, and there are plenty of people who have thought seriously about x-risk from AI and decided not to worry about it. They just don’t use LessWrong much.
This is all to say that there is plenty of value in people writing about how they feel and having the community engage with these posts. I just don’t think anyone should take what they see in the posts or the comments as evidence that it would be more rational to feel less OK.