Imagine that we lived in a universe in which it was plausible that the LHC could create a black hole or cause false vacuum collapse. It seems to me that such a universe could still have a techno-economic trajectory broadly similar to our own, for the same reasons. So, in that universe, would it make sense to argue "the LHC cannot destroy the world because its cost is an insufficient fraction of world GDP[1]"? It seems to me that argument would be strange there in a similar way to how the economic argument about AI is strange here.
The “continuous view” argument is about takeoff speeds, not about AI risk?
If AI risk arose from narrow systems that couldn't produce a billion dollars of value, then I'd expect that risk could arise more discontinuously from a new paradigm. But AI risk arises from systems that are sufficiently intelligent that they could produce billions of dollars of value.