It seems different to me.
If I believe “X is incredibly useful but someone might use it to destroy the world,” I can conclude that I should build X and take care to police who gets to use it. But if I believe “X is incredibly useful but its very existence might spontaneously destroy the world,” then that strategy won’t work: it doesn’t matter who uses it. Maybe there’s another way, or maybe I just shouldn’t build X, but either way it’s a different problem.
It’s like the difference between believing that nuclear weapons might someday be directed by humans to overthrow civilization, and believing that any nuclear reaction might ignite the Earth’s entire atmosphere. In the first case, we can attempt to control nuclear weapons. In the second case, we must prevent nuclear reactions from ever starting.
Just to be clear: I’m not championing a position here on what sort of threat AGIs pose. I’m just saying that these are genuinely different threat models.