Thanks for clarifying. I’m sorry you feel strawmanned, but I’m still fairly confused.
Possibly the confusion is that you’re using “AI doom” to mean >50%? I personally think it is not very reasonable to get that high based on conceptual arguments someone in the 19th century could have understood, and definitely not >90%. But getting to >5% seems totally reasonable to me. I didn’t read this post as arguing that you should have been >50% back in the 19th century, though I could easily imagine a given author being overconfident. And the specific technical details of ML are more than enough of an update to bring you above or below 50%, so this matters. I personally do not think there’s a >50% chance of doom, but am still very concerned.