Loose analogy-based reasoning over complex and poorly understood systems isn’t reliable. There is essentially only one way for GPT-n to be identical to System 1, and many ways for it to be kind of similar: similar enough to be easy to anthropomorphize, but with some subtle alien features.
GPT-n’s training data contains text from smart and/or evil humans, and from humans speaking in riddles or making allusions. Let’s suppose this generalizes, and now GPT-n is pretending to be an IQ-200 cartoon villain, with an evil plot described entirely in terms of references to obscure sources. So when referring to DNA, it says things like “two opposites of the same kind, two twins intertwined, a detective’s assistant and an insect did find. Without an alien friend.”
Or maybe it goes full ecologist jargon. It talks about “genetically optimizing species to restore population levels to re-balance ecosystems into a pre-anthropogenic equilibrium”. Would an army of minimum-wage workers spot that this was talking about wiping out almost all humans?
Actually, wouldn’t a naive extrapolation of internet text suggest that superhumanly complicated ideas were likely to come in superhumanly dense jargon?
I mean, if you have a team of linguists and AI experts carefully discussing every sentence, then this particular problem goes away. The sort of operation where, if the AI produces a sentence of Klingon, you fly in a top Klingon expert before asking for the next sentence. But how useful could GPT-n be if used in such a way? On the other extreme, GPT-n is producing internal reasoning text at a terabyte per minute. All you can do with it is grep for some suspicious words, or pass it to another AI model. You can’t even store it for later unless you have a lot of hard drives. Potentially much more useful. And less safe.
Now that 2 years have passed, I am quite interested in hearing @Fabien Roger’s thoughts on this comment, especially this part: “But how useful could GPT-n be if used in such a way? On the other extreme, GPT-n is producing internal reasoning text at a terabyte per minute. All you can do with it is grep for some suspicious words, or pass it to another AI model. You can’t even store it for later unless you have a lot of hard drives. Potentially much more useful. And less safe.”
Here are some ways you could make using GPT-n safer if its Chains-of-Thought are understandable by a team of linguists and AI experts:
On a small random subset of deployment Chains-of-Thought, and when running adversarial evaluations / honeypots aimed at tricking the model into revealing a potential misalignment, actually call your team of linguists and AI experts to analyze what GPT-n is generating (it probably isn’t that much text anyway). Not seeing evidence of misalignment in the Chain-of-Thought under such circumstances is evidence that (1) the model is not having misaligned thoughts (in CoT) very frequently, and (2) the model is not having such thoughts in the circumstances where it is most crucial for it to be deceptive.
If the model doesn’t use Klingon or dense jargon in its Chain-of-Thoughts (which I predict it won’t), monitor all CoT of GPT-n with a weaker cheaper model (and/or a regex). If something looks suspicious to the weaker monitor, you can:
call your team of linguists and AI experts to analyze it further
or resample the CoT (the flag was likely just a fluke, since most CoTs don’t have worrying signs)
or give up on this particular query (e.g. send an error message to your user, use another weaker model to generate an answer, …)
or ask another copy of GPT-n to analyze the CoT (though this can be tricky to get right because of the risk of correlated failures), only calling your team of linguists and AI experts if more evidence of this CoT being suspicious is found.
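To make the escalation logic concrete, here is a minimal sketch of how such a monitoring pipeline could be wired together. The callables (generate_cot, cheap_monitor_score, cross_check_with_copy, queue_for_experts, fallback_answer) are hypothetical placeholders standing in for the actual model and review infrastructure, and the regex and thresholds are illustrative, not recommendations:

```python
import random
import re

# Illustrative placeholders; in practice the regex and thresholds would be tuned.
SUSPICIOUS_PATTERNS = re.compile(r"klingon|wipe out|pre-anthropogenic", re.IGNORECASE)
EXPERT_SAMPLE_RATE = 0.001   # fraction of ordinary traffic audited by human experts
SUSPICION_THRESHOLD = 0.8    # weak-monitor score above which a CoT is flagged
MAX_RESAMPLES = 2            # how many flagged CoTs to resample before giving up


def handle_query(query, generate_cot, cheap_monitor_score,
                 cross_check_with_copy, queue_for_experts, fallback_answer):
    """Generate a CoT and route it through the layered checks described above."""
    for _ in range(MAX_RESAMPLES + 1):
        cot = generate_cot(query)

        # Random audit: a small fraction of all traffic goes to the expert team.
        if random.random() < EXPERT_SAMPLE_RATE:
            queue_for_experts(query, cot, reason="random audit")

        # Cheap checks on every CoT: a regex plus a weaker, cheaper monitor model.
        regex_hit = bool(SUSPICIOUS_PATTERNS.search(cot))
        monitor_hit = cheap_monitor_score(cot) > SUSPICION_THRESHOLD
        if not (regex_hit or monitor_hit):
            return cot  # nothing suspicious: answer normally

        # Cross-check with another copy of GPT-n (imperfect, because of the risk
        # of correlated failures); escalate to the experts if it agrees.
        if cross_check_with_copy(cot):
            queue_for_experts(query, cot, reason="flagged by monitor and cross-check")
            return fallback_answer(query)

        # Otherwise treat the flag as a likely fluke and resample the CoT.

    # Repeatedly flagged: give up on this query rather than trust the CoT.
    queue_for_experts(query, cot, reason="repeatedly flagged")
    return fallback_answer(query)
```

The point of this kind of layering is that the cheap checks run on every CoT, while expensive human review is reserved for random audits and for CoTs flagged by both the weak monitor and a second copy of GPT-n, which keeps the expert workload small.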
This is not perfect, but I think it can represent an important layer of defense against the risks of AIs that are powerful enough to speed up alignment research but are not vastly superhuman.