It seems more likely to me that Hassabis said something like “with things as they stand now, a bad end seems most likely.” They start to take the fear seriously, act on it, and then talk to Hassabis again, and he says “with things as they stand now, a bad end seems likely to be avoided.”
In particular, we seem to have moved from a state where AI risk needed more publicity to a state where AI risk has the correct amount of publicity, and more might be actively harmful.