You strong disagree downvoted my comment, but it’s still not clear to me that you actually disagree with my core claim. I’m not making a claim about priors, or whether it’s reasonable to think that p(doom) might be non-negligible a priori.
My point is instead about whether the specific technical details of deep learning today are ultimately what’s driving some people’s high probability estimates of AI doom. If the intuition behind these high estimates could’ve been provided in the 19th century (without modern ML insights), then modern technical arguments don’t seem to be the real crux.
Therefore, while you might be correct about priors regarding p(doom), or about whether existing evidence reinforces high concern about AI doom, these points seem separate from my core claim about the primary motivating intuitions behind a strong belief in AI doom.
(To clarify, I strong disagree voted, I haven’t downvoted at all—I still strongly disagree)
I am confused and feel like I must be misunderstanding your point. It feels like you’re attempting a “gotcha” argument, but I don’t understand your point or who you’re trying to criticize. It seems like bizarre rhetorical practice. It is not a valid argument to say that “people can hold position A for bad reason X, therefore all people who hold position A also hold it for bad reason X even if they claim it is for good reason Y”. But that seems to be your argument? For A=high doom, X=weird 19th century intuition, Y=actually good technical reasons grounded in modern ML. What am I missing? If you want to argue that someone else really believes bad reason X, you need to engage with specific details of that person and why you believe they are saying false things about their beliefs.
I could easily flip this argument. In the 19th century, I’m sure people said machines could never possibly be dangerous: “God will protect us”, “They are tools, and tools are always subservient to man”, or “They will never have a soul, and so can never be truly dangerous”. These are raw, intuition-backed arguments. People today who claim to believe that AI will be safe for sophisticated technical reasons could have held these same beliefs in the 19th century, which (by your logic) suggests they are being dishonest. Why does your argument hold, but mine break?
I also don’t actually know which people you want to criticize. My sense is that many community members with high p(doom), like Yudkowsky, developed these views 10–20 years ago and haven’t substantially updated since, so they obviously can’t stem from nuanced views of modern ML. As far as I am aware, they don’t claim their beliefs are heavily driven by sophisticated technical reasons about current ML systems; they simply maintain their existing views. It still seems like a strawman to call views formed without specific technical grounding “raw intuition-backed reactions to the idea of mechanical minds”. Like, regardless of how much you agree, “Superintelligence” clearly makes a much more sophisticated case than you imply, while predating deep learning.
I’m not actually aware of anyone who claims to be afraid of current ML systems alone due to specific technical reasons. The reasons for being afraid are pretty obvious, but specific facts about these systems can adjust how much weight those reasons deserve. Now that modern deep learning exists, some of these concerns seem validated, others seem less significant, and new issues have arisen. This seems completely normal and exactly what you would expect? My personal view is that we should be moderately but not extremely concerned about doom. I understand modern machine learning well, and it hasn’t substantially shifted my position in either direction. The large language model paradigm somewhat increased my optimism about safety, while the shift toward long-horizon RL somewhat increased my concern about doom, though that development was expected eventually.
Can you give some concrete examples of specific people/public statements that you are trying to criticise here? That might help ground out this disagreement.
I am confused and feel like I must be misunderstanding your point. It feels like you’re attempting a “gotcha” argument, but I don’t understand your point or who you’re trying to criticize. It seems like bizarre rhetorical practice. It is not a valid argument to say that “people can hold position A for bad reason X, therefore all people who hold position A also hold it for bad reason X even if they claim it is for good reason Y”. But that seems to be your argument?
I think you’re overinterpreting my comment and attributing to me the least charitable plausible interpretation of what I wrote, along with most other people commenting and voting in this thread. As a general rule I’ve learned from my time in online communities: whenever someone makes a claim on a forum that rejects a belief central to that forum’s philosophy, people tend to reply by ruthlessly assuming the most foolish plausible interpretation of their remarks. LessWrong is no exception.
My actual position is simply this: if the core arguments for AI doom could have genuinely been presented and anticipated in the 19th century, then the crucial factor that actually determines whether most “AI doomers” believe in AI doom is probably something relatively abstract or philosophical, rather than specific technical arguments grounded in the details of machine learning. This does not imply that technical arguments are irrelevant; it just means they’re probably not as cruxy to whether people actually believe that doom is probable.
(Also to be clear, unless otherwise indicated, in this thread I am using “belief in AI doom” as shorthand for “belief that AI doom is more likely than not” rather than “belief that AI doom is possible and at least a little bit plausible, so therefore worth worrying about.” I think these two views should generally be distinguished.)
(To clarify, I strong disagree voted, I haven’t downvoted at all—I still strongly disagree)
Oops, I recognize that, I just misstated it in my original comment.
Thanks for clarifying. I’m sorry you feel strawmanned, but I’m still fairly confused.
Possibly the confusion is that you’re using “AI doom” to mean >50%? I personally think it is not very reasonable to get that high based on conceptual arguments someone in the 19th century could understand, and definitely not to >90%. But getting to >5% seems totally reasonable to me. I didn’t read this post as arguing that you should have been >50% back in the 19th century, though I could easily imagine a given author being overconfident. And the specific technical details of ML are certainly enough of an update to move you above or below 50%, so this matters. I personally do not think there’s a >50% chance of doom, but I am still very concerned.