I am confused and feel like I must be misunderstanding your point. It feels like you’re attempting a “gotcha” argument, but I don’t understand your point or who you’re trying to criticize. It seems like bizarre rhetorical practice. It is not a valid argument to say that “people can hold position A for bad reason X, therefore all people who hold position A also hold it for bad reason X even if they claim it is for good reason Y”. But that seems to be your argument?
I think you’re overinterpreting my comment and attributing to me the least charitable plausible interpretation of what I wrote (along with most other people commenting and voting in this thread). As a general rule I’ve learned from my time in online communities: whenever someone makes a claim on a forum that rejects a belief central to that forum’s philosophy, people tend to reply by ruthlessly assuming the most foolish plausible interpretation of their remarks. LessWrong is no exception.
My actual position is simply this: if the core arguments for AI doom could have genuinely been presented and anticipated in the 19th century, then the crucial factor that actually determines whether most “AI doomers” believe in AI doom is probably something relatively abstract or philosophical, rather than specific technical arguments grounded in the details of machine learning. This does not imply that technical arguments are irrelevant, it just means they’re probably not as cruxy to whether people actually believe that doom is probable or not.
(Also to be clear, unless otherwise indicated, in this thread I am using “belief in AI doom” as shorthand for “belief that AI doom is more likely than not” rather than “belief that AI doom is possible and at least a little bit plausible, so therefore worth worrying about.” I think these two views should generally be distinguished.)
(To clarify, I strong-disagree voted; I haven’t downvoted at all. I still strongly disagree.)
Oops, I recognize that, I just misstated it in my original comment.
Thanks for clarifying. I’m sorry you feel strawmanned, but I’m still fairly confused.
Possibly the confusion is that you’re using “AI doom” to mean >50%? I personally think it is not very reasonable to get that high based on conceptual arguments someone in the 19th century could understand, and definitely not >90%. But getting to >5% seems totally reasonable to me. I didn’t read this post as arguing that you should have been >50% back in the 19th century, though I could easily imagine a given author being overconfident. And the specific technical details of ML are enough of an update to move you above or below 50%, so they matter. I personally do not think there’s a >50% chance of doom, but I am still very concerned.