OpenAI is giving their AI access to the internet in a known-to-be-exploitable way during training. If you thought we were going to get killed by an AGI but at least maybe we would die with dignity, this is the exact opposite of dignity. I know many of my readers, especially new readers, aren’t that up on or invested in the question of AI Safety, but even a completely average person should be able to understand why rule number one is ‘for the love of God, at a bare minimum, you don’t give your AI access to the internet,’ seriously, what the hell. Could we at least pretend to try to take some precautions?
While I agree that giving your AGI-in-training access to the internet is quite possibly a “you lose” style of mistake, I… feel like there has to be some line, and OpenAI explicitly said that they thought they were on the “it’s fine” side of the line. Treating them as though they aren’t even pretending to take precautions is itself a mistake.
I think there’s a deeper argument you might be trying to ‘imply by italics’, or something: there are winner’s-curse reasons to expect that dangerous research will be done by the people least able to assess its danger. Also, specialists in a field might not see a reason to do society-wide cost-benefit analyses instead of local ones (which will probably understate the scale of the costs more than the scale of the gains). See coronavirus research happening in a BSL-2 lab, for example.
But as written, this paragraph sounds like “as soon as you start thinking about AI, you should just unplug your computer from the internet, regardless of what program you’re running.” Which… I can sort of see the case for, but it requires more inferential steps than you’re laying out here to seem reasonable.