I feel like I’d like the different categories of AI risk attenuation to be referred to as more clearly separate:
AI usability safety—would this gun be safe for a trained professional to use on a shooting range? Will it be reasonably accurate and not explode or backfire?
AI world-impact safety—would it be safe to give out one of these guns for $0.10 to anyone who wanted one?
AI weird complicated usability safety—would this gun be safe if someone reckless tried to wire a hundred of them, plus a variety of other guns, into an elaborate Rube Goldberg machine and fire it off with live ammo and no testing?
Like, I hear you, but that is... also not how they teach gun safety. If there’s one fact you know about gun safety, it’s that the entire field emphasizes that a gun is inherently dangerous to anything it is pointed at.
I mean, that is kinda what I’m trying to get at. I feel like any sufficiently powerful AI should be treated as a dangerous tool, like a gun: used carefully and deliberately.
Instead we’re just letting anyone do whatever with them. For now, nothing too bad has happened, but I feel confident that the danger is real and getting worse quickly as models improve.