I explained in a comment on that post what’s wrong with it by LW standards.
The general question is worth investigating, but that post isn't a good example to build the case on. It was neither especially good nor especially rational in focus, and those are the two things that qualify a post by LW standards and objectives.
Making a more general argument for the bias of LW would be a great way to earn some upvotes. We’re suckers for analyzing our own biases.
The problem is that it's hard to distinguish being biased from simply being more rational and more right. Most arguments for AI doom suck. Most arguments against AI doom also suck. Most of the arguments that don't suck land you way, way above the 1-5% doom you quoted. But of course we could just be mistaking massive amounts of thought and analysis for bias. Or not. It's an open question.
It’s tricky to approach the meta-question without approaching the object level.
It could just be that LW doesn't like most arguments against AGI x-risk because those arguments are usually made by people who haven't yet considered the whole question, so they tend not to be very rational.
I've tried to steelman the arguments against, and I can't get anywhere near "oh yeah, this should be fine" without leaving out huge chunks of the question and the likely futures. In particular, if I think of AI only as LLMs, I do get those low doom probabilities, but we're obviously not going to stop there without some remarkable changes.
The best I can get is something like "people tend to worry a lot, and people tend to solve problems pretty well once those problems are close enough to seem important." That might get me down to something like 10% if I'm feeling super optimistic. And that's while considering the whole problem: we're making a new alien species that will be way smarter than us.