That paragraph almost makes sense, but it seems to be missing a key sentence or two. Hassabis is "at the core of recent AI fear" and introduced AI to Musk, but then Hassabis changed his mind and proceeded to undo his previous influence? It's hard to imagine those talks—"Oh yeah, you know this whole AI risk thing I got you worried about? I was wrong, it's no big deal now."
It seems more likely to me that Hassabis originally said something like "with things as they stand now, a bad end seems most likely." They took the fear seriously and acted on it, and when they talked to Hassabis again, he said "with things as they stand now, a bad end seems likely to be avoided."
In particular, we seem to have moved from a state where AI risk needed more publicity to a state where AI risk has the correct amount of publicity, and more might be actively harmful.