I see this post as arguing for a thesis that "if smarter-than-human AI is developed this decade, the result will be an unprecedented catastrophe" holds with reasonably high confidence, along with a (less emphasized) thesis that the best/only intervention is not building ASI for a long time: "The main way we see to avoid this catastrophic outcome is to not build ASI at all, at minimum until a scientific consensus exists that we can do so without destroying ourselves."
I think disagreements about takeoff speeds are part of why I disagree with these claims, and that the post effectively leans on very fast takeoff speeds in its overall perspective. Correspondingly, it seems important not to make locally invalid arguments about takeoff speeds: these invalid arguments do alter the takeaway from my perspective.
If the post were arguing for a weaker takeaway of "AI seems extremely dangerous, poses very large risks, and our survival seems uncertain," or if it more clearly discussed why some (IMO reasonable) people are more optimistic (and why MIRI disagrees), I'd be less critical.