Attention conservation notice: I haven’t read all of this, but it seems like basically a Gish gallop. I don’t recommend reading it if you’re looking for intelligent critiques of AI risk. I’m downvoting this post because replying to Gish gallops takes time and effort but doesn’t particularly lead to new insight, and I’d prefer LW not be a place where we do that.
(edit: oh right, downvoting is disabled. At any rate, I would downvote it if I could.)
(The above is not a reply to the post. The below is a brief one.)
From about halfway through, a lot of the arguments seem to be “we worry that AI will do this, but humans don’t do it, so AI might not do it either.” That isn’t an argument that AI is not a threat, just that there exist plausible-on-the-face-of-it instantiations of AI that are not threats.
And we also have “getting AI right will be really hard”, which, uh, yes, that is exactly the point.