But people attempting to box smart unaligned AIs, or believing that boxed AIs are significantly safer because they can’t access the internet, seems to me like a bad situation. An AI smart enough to cause risk with internet access is very likely to be able to cause risk anyway, and at best you are creating a super unstable situation where a lab leak is catastrophic.
I do think we are likely to be in a bad spot, and talking to people at OpenAI, DeepMind and Anthropic (i.e. the places where most of the heavily-applied prosaic alignment work is happening), I sure feel unhappy that their plan seems to be banking on this kind of terrifying situation, which is part of why I put such a high likelihood on doom.
If I had a sense that these organizations were aiming for a much more comprehensive AI Alignment solution that doesn’t rely on extensive boxing, I would agree with you more, but I am currently pretty sure they aren’t, and that by default they will hope they can get far enough ahead with boxing-like strategies.
… Who are you talking to? I’m having trouble naming a single person at either OpenAI or Anthropic who seems to me to be interested in extensive boxing (though admittedly I don’t know them that well). At DeepMind there’s a small minority who think about boxing, but I think even they wouldn’t consider it a major aspect of their plan.
I agree that they aren’t aiming for a “much more comprehensive AI alignment solution” in the sense you probably mean it, but saying “they rely on boxing” seems wildly off.
My best-but-still-probably-incorrect guess is that you hear people proposing schemes that seem to you like they will obviously not work at producing intent-aligned systems, and so you assume that the people proposing them also believe that and are putting their trust in boxing, rather than noticing that they have different empirical predictions about how likely those schemes are to produce intent-aligned systems.
Here is an example quote from the latest OpenAI blogpost on AI Alignment:
Language models are particularly well-suited for automating alignment research because they come “preloaded” with a lot of knowledge and information about human values from reading the internet. Out of the box, they aren’t independent agents and thus don’t pursue their own goals in the world. To do alignment research they don’t need unrestricted access to the internet. Yet a lot of alignment research tasks can be phrased as natural language or coding tasks.
This sounds super straightforwardly to me like the plan of “we are going to train non-agentic AIs that will help us with AI Alignment research, and we will limit their ability to influence the world by e.g. not giving them access to the internet”. I don’t know whether “boxing” is the exact right word, but it’s the strategy I was pointing to.
The immediately preceding paragraph is:
Importantly, we only need “narrower” AI systems that have human-level capabilities in the relevant domains to do as well as humans on alignment research. We expect these AI systems are easier to align than general-purpose systems or systems much smarter than humans.
I would have guessed the claim is “boxing the AI system during training will be helpful for ensuring that the resulting AI system is aligned”, rather than “after training, the AI system might be trying to pursue its own goals, but we’ll ensure it can’t accomplish them via boxing”. But I can see your interpretation as well.
Oh, I do think a bunch of my problems with WebGPT stem from the fact that we are training the system with direct internet access.
I agree that “train a system with internet access, but then remove it, then hope that it’s safe” doesn’t really make much sense. In general, I expect bad things to happen during training. Separately, a lot of my problems with training things on the internet come from it being an environment that seems likely to incentivize a lot of agency and to make supervision really hard, because you have a ton of permanent side effects.
Oh you’re making a claim directly about other people’s approaches, not about what other people think about their own approaches. Okay, that makes sense (though I disagree).
I was suggesting that the plan was “train a system without internet access, then add it at deployment time” (aka “box the AI system during training”). I wasn’t at any point talking about WebGPT.