strong disagree, see https://www.lesswrong.com/posts/oKAFFvaouKKEhbBPm/a-bear-case-my-predictions-regarding-ai-progress
this is a “negative” post with hundreds of upvotes and meaningful discussion in the comments. The difference between your post and this one is not the “level of criticism”, but the quality and logical basis of the argument. I agree with Seth Herd's argument from the comments of your post re the difference here; can't figure out how to link it. There are many fair criticisms of LessWrong culture, but “biased” and “echo chamber” are not among them in my experience. I don't mean to attack your character, writing skills, or general opinions, as I'm sure you are capable of writing something of higher quality that better expresses your thoughts and opinions.
You’ll note that the negative post you linked is negative about AI timelines (“AI timelines are longer than many think”), while OP’s is negative about AI doom being an issue (“I’m probably going to move from ~5% doom to ~1% doom.”)
Agree the above post is a weak-ish example. This post feels like a better example: https://www.lesswrong.com/posts/LDRQ5Zfqwi8GjzPYG/counterarguments-to-the-basic-ai-x-risk-case
This feels like weak evidence against my point, though I think “timelines” and “overall AI risk” differ in how safe they are to argue about.