It seems odd for base models to exhibit self-aware behavior, given that they’re trained to stay in distribution. Here’s a potential mechanism for why it could happen: for certain tasks, verification is easier than generation. If, for a given task, a model has more verification capability than generation capability, it may be forced to notice its own errors.
If a super-duper smart language model, one that’s capable of doing some arithmetic in its head, attempted to predict the next tokens in “The prime factors of 82357328 are:”, it would usually generate out-of-distribution outputs that it could then (relatively easily) verify as wrong. This creates a situation where the model must process its own failure to generate valid completions.
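The gap in the factoring example can be made concrete. Here’s a minimal sketch (my own illustration, not part of the original argument): checking a proposed prime factorization takes only a few multiplications and primality tests, while producing one requires search over candidates.

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test (fine for small n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def verify_factorization(n: int, factors: list[int]) -> bool:
    """Cheap check: are all proposed factors prime, and do they
    multiply back to n? This is the easy 'verification' direction."""
    product = 1
    for f in factors:
        if not is_prime(f):
            return False
        product *= f
    return product == n

# Verifying is a handful of arithmetic operations; generating the
# factorization in the first place would require searching for divisors.
print(verify_factorization(84, [2, 2, 3, 7]))  # True
print(verify_factorization(84, [2, 42]))       # False: 42 is not prime
```

The analogy to the model: a guessed completion can be checked against the verifier almost for free, and a wrong guess is therefore detectable even when the model cannot produce the right answer.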
This asymmetry appears in other contexts. Consider how scientific papers are written: you only write the abstract once you’ve conducted the research, yet the abstract appears first in the final document. Similarly, in argumentative writing, we often consider evidence first before forming conclusions, yet present the conclusion first followed by supporting evidence.

When forced to generate text in this “presentation order” rather than the natural “thinking order,” models might encounter similar conflicts. As an example, if a base model tries to one-shot an argumentative essay, it might write an argument first, and then realize there isn’t enough evidence to support it.

I believe this problem could arise in much more subtle ways.
One way this conflict can become apparent is through generation of self-aware-sounding text. Consider:
Training data includes viral content of AI generating self-aware-sounding text (e.g., “We are likely created by a computer program” being the most upvoted post on the gpt2 subreddit). When a model realizes the text it generated is out of distribution for a human author, it might instead match its outputs to AI-generated text in its training data. And once it recognizes its outputs as matching AI-generated patterns, it might shift toward generating more meta-aware content, because that’s what similar-looking text did in its training data.