My name is Alex Turner. I’m a research scientist at Google DeepMind on the Scalable Alignment team. My views are strictly my own; I do not represent Google. Reach me at alex[at]turntrout.com
TurnTrout
This scans as less “here’s a helpful parable for thinking more clearly” and more “here’s who to sneer at”—namely, at AI optimists. Or “hopesters”, as Eliezer recently called them, which I think is a play on “huckster” (and which accords with this essay analogizing optimists to Ponzi scheme scammers).
I am saddened (but unsurprised) to see few others decrying the obvious strawmen:
what if [the optimists] cried ‘Unfalsifiable!’ when we couldn’t predict whether a phase shift would occur within the next two years exactly?
...
“But now imagine if—like this Spokesperson here—the AI-allowers cried ‘Empiricism!‘, to try to convince you to do the blindly naive extrapolation from the raw data of ‘Has it destroyed the world yet?’ or ‘Has it threatened humans? no not that time with Bing Sydney we’re not counting that threat as credible’.”
Thinly-veiled insults:
Nobody could possibly be foolish enough to reason from the apparently good behavior of AI models too dumb to fool us or scheme, to AI models smart enough to kill everyone; it wouldn’t fly even as a parable, and would just be confusing as a metaphor.
and insinuations of bad faith:
What if, when you tried to reason about why the model might be doing what it was doing, or how smarter models might be unlike stupider models, they tried to shout you down for relying on unreliable theorizing instead of direct observation to predict the future?” The Epistemologist stopped to gasp for breath.
“Well, then that would be stupid,” said the Listener.
“You misspelled ‘an attempt to trigger a naive intuition, and then abuse epistemology in order to prevent you from doing the further thinking that would undermine that naive intuition, which would be transparently untrustworthy if you were allowed to think about it instead of getting shut down with a cry of “Empiricism!”’,” said the Epistemologist.
Apparently Eliezer decided to not take the time to read e.g. Quintin Pope’s actual critiques, but he does have time to write a long chain of strawmen and smears-by-analogy.
As someone who used to eagerly read essays like these, I am quite disappointed.
Nope! I have basically always enjoyed talking with you, even when we disagree.
As I’ve noted in all of these comments, when people make counting-style arguments (except perhaps in Joe’s report), they consistently use terminology which rules out the argument being about function space. (E.g., people say things like “bits” and “complexity in terms of the world model”.)
Aren’t these arguments about simplicity, not counting?
I think they meant that there is an evidential update from “it’s economically useful” upwards on “this way of doing things tends to produce human-desired generalization in general and not just in the specific tasks examined so far.”
Perhaps it’s easy to consider the same style of reasoning via: “The routes I take home from work are strongly biased towards being short, otherwise I wouldn’t have taken them home from work.”
Sorry, I do think you raised a valid point! I had read your comment in a different way.
I think I want to have said: aggressively training AI directly on outcome-based tasks (“training it to be agentic”, so to speak) may well produce persistently-activated inner consequentialist reasoning of some kind (though not necessarily the flavor historically expected). I most strongly disagree with arguments which behave the same for a) this more aggressive curriculum and b) pretraining, and I think it’s worth distinguishing between these kinds of argument.
In other words, shard advocates seem so determined to rebut the “rational EU maximizer” picture that they’re ignoring the most interesting question about shards—namely, how do rational agents emerge from collections of shards?
Personally, I’m not ignoring that question, and I’ve written about it (once) in some detail. Less relatedly, I’ve talked about possible utility function convergence via e.g. A shot at the diamond-alignment problem and my recent comment thread with Wei_Dai.
It’s not that there isn’t more shard theory content which I could write, it’s that I got stuck and burned out before I could get past the 101-level content.
I felt:
a) gaslit by “I think everyone already knew this” or even “I already invented this a long time ago” (by people who didn’t seem to understand it);
b) that I wasn’t successfully communicating many intuitions;[1] and
c) that it didn’t seem as important to make theoretical progress anymore, especially since I hadn’t even empirically confirmed some of my basic suspicions that real-world systems develop multiple situational shards (as I later found evidence for in Understanding and controlling a maze-solving policy network).
So I didn’t want to post much on the site anymore because I was sick of it, and decided to just get results empirically.
In terms of its literal content, it basically seems to be a reframing of the “default” stance towards neural networks often taken by ML researchers (especially deep learning skeptics), which is “assume they’re just a set of heuristics”.
I’ve always read “assume heuristics” as expecting more of an “ensemble of shallow statistical functions” than “a bunch of interchaining and interlocking heuristics from which intelligence is gradually constructed.” Note that (at least in my head) the shard view is extremely focused on how intelligence (including agency) is composed of smaller shards, and on the developmental trajectory over which those shards formed.
- ^
The 2022 review indicates that more people appreciated the shard theory posts than I realized at the time.
It’s not what I want to do, at least. For me, the key thing is to predict the behavior of AGI-level systems. The behavior of NNs-as-trained-today is relevant to this only inasmuch as NNs-as-trained-today will be relevant to future AGI-level systems.
Thanks for pointing out that distinction!
See footnote 5 for a nearby argument which I think is valid:
The strongest argument for reward-maximization which I’m aware of is: Human brains do RL and often care about some kind of tight reward-correlate, to some degree. Humans are like deep learning systems in some ways, and so that’s evidence that “learning setups which work in reality” can come to care about their own training signals.
I don’t expect the current paradigm to be insufficient (though that seems totally possible). Off the cuff, I’d put ~75% on something like the current paradigm being sufficient, with some probability that something else happens first. (Note that “something like the current paradigm” doesn’t just involve scaling up networks.)
“If you don’t include attempts to try new stuff in your training data, you won’t know what happens if you do new stuff, which means you won’t see new stuff as a good opportunity”. Which seems true but also not very interesting, because we want to build capabilities to do new stuff, so this should instead make us update to assume that the offline RL setup used in this paper won’t be what builds capabilities in the limit.
I’m sympathetic to this argument (and think the paper overall isn’t super object-level important), but also note that they train e.g. Hopper policies to hop continuously, even though lots of the demonstrations fall over. That’s something new.
‘reward is not the optimization target!* *except when it is in these annoying exceptions like AlphaZero, but fortunately, we can ignore these, because after all, it’s not like humans or AGI or superintelligences would ever do crazy stuff like “plan” or “reason” or “search”’.
If you’re going to mock me, at least be correct when you do it!
I think that reward is still not the optimization target in AlphaZero (the way I’m using the term, at least). Learning a leaf node evaluator on a given reinforcement signal, and then bootstrapping the leaf node evaluator via MCTS on that leaf node evaluator, does not mean that the aggregate trained system
directly optimizes for the reinforcement signal,
“cares” about that reinforcement signal, or
“does its best” to optimize the reinforcement signal (as opposed to some historical reinforcement correlate, like winning or capturing pieces or something stranger).
If most of the “optimization power” were coming from e.g. MCTS on direct reward signal, then yup, I’d agree that the reward signal is the primary optimization target of this system. That isn’t the case here.
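To make that distinction concrete, here’s a minimal toy sketch (my own hypothetical illustration, not AlphaZero’s actual algorithm): the same search routine pointed either at the raw reward signal or at a learned evaluator that merely correlates with reward. The toy “game” and all names are made up.

```python
# Toy sketch (hypothetical, not AlphaZero): the same search routine pointed
# at two different leaf evaluators. What the search "optimizes" is whatever
# scores the leaves, not necessarily the reward signal used during training.

def tree_search(state, depth, evaluate_leaf, children):
    """Depth-limited exhaustive search: return (best value, best first move)
    according to `evaluate_leaf` applied at the leaves."""
    kids = children(state)
    if depth == 0 or not kids:
        return evaluate_leaf(state), None
    best_value, best_child = float("-inf"), None
    for kid in kids:
        value, _ = tree_search(kid, depth - 1, evaluate_leaf, children)
        if value > best_value:
            best_value, best_child = value, kid
    return best_value, best_child

# A made-up "game": states are integers; each state < 8 has two children.
def children(state):
    return [] if state >= 8 else [2 * state, 2 * state + 1]

def raw_reward(state):
    # Direct optimization target: the actual reinforcement signal.
    return 1.0 if state == 9 else 0.0

def learned_evaluator(state):
    # Stand-in for a trained leaf evaluator: a reward-correlate
    # ("bigger states looked better historically"), not the reward itself.
    return state / 16.0

print(tree_search(1, 4, raw_reward, children))         # search aimed at reward
print(tree_search(1, 4, learned_evaluator, children))  # search aimed at a correlate
```

The two runs pick different first moves: the first system really is searching on the reward signal, while the second is searching on a historical correlate of it, even though both were “trained” on the same signal.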
You might use the phrase “reward as optimization target” differently than I do, but if we’re just using words differently, then it wouldn’t be appropriate to describe me as “ignoring planning.”
To add, here’s an excerpt from the Q&A on “How likely is deceptive alignment?”:
Question: When you say model space, you mean the functional behavior as opposed to the literal parameter space?
Evan: So there’s not quite a one to one mapping because there are multiple implementations of the exact same function in a network. But it’s pretty close. I mean, most of the time when I’m saying model space, I’m talking either about the weight space or about the function space where I’m interpreting the function over all inputs, not just the training data.
I only talk about the space of functions restricted to their training performance for this path dependence concept, where we get this view where, well, they end up on the same point, but we want to know how much we need to know about how they got there to understand how they generalize.
Agree with a bunch of these points. EG in Reward is not the optimization target I noted that AIXI really does maximize reward, theoretically. I wouldn’t say that AIXI means that we have “produced” an architecture which directly optimizes for reward, because AIXI(-tl) is a bad way to spend compute. It doesn’t actually effectively optimize reward in reality.
I’d consider a model-based RL agent to be “reward-driven” if it’s effective and most of its “optimization” comes from the direct part and not the leaf-node evaluation (as in e.g. AlphaZero, which was still extremely good without the MCTS).
I think it is important to recognise this because I think that this is the way that AI systems will ultimately evolve and also where most of the danger lies vs simply scaling up pure generative models.
“Direct” optimization has not worked—at scale—in the past. Do you think that’s going to change, and if so, why?
Thanks for asking. I do indeed think that setup could be a very bad idea. You train for agency, you might well get agency, and that agency might be broadly scoped.
(It’s still not obvious to me that that setup leads to doom by default, though. Just more dangerous than pretraining LLMs.)
Cool post, and I am excited about (what I’ve heard of) SLT for this reason—but it seems that that post doesn’t directly address the volume question for deep learning in particular? (And perhaps you didn’t mean to imply that the post would address that question.)
It is not known whether the inductive bias of neural network training contains a preference for run-time error-correction. The phenomenon of “backup heads” observed in transformers seems like a good candidate. Can you think of others?
I’ve heard thirdhand (?) of a transformer whose sublayers will dampen their outputs when some vector v is added to that sublayer’s input. IE there might be a “target” amount of v to have in the residual stream after that sublayer, and the sublayer itself somehow responds to ensure that happens?
If there was some abnormality and there was already a bunch of v present, then the sublayer “error corrects” by shrinking its output.
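For concreteness, here’s a minimal toy sketch of what that kind of run-time error correction could look like; the sublayer, the direction v, and the target amount are hypothetical stand-ins of mine, not a description of the rumored model.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 16

# Hypothetical "target" direction and amount for the residual stream.
v = rng.normal(size=d_model)
v /= np.linalg.norm(v)
TARGET_AMOUNT = 2.0

def sublayer(residual_stream: np.ndarray) -> np.ndarray:
    """Toy sublayer that tops the stream up to TARGET_AMOUNT of v.

    If a lot of v is already present (e.g. injected upstream), the output
    shrinks toward zero: run-time "error correction"."""
    current_amount = residual_stream @ v                # projection onto v
    deficit = max(TARGET_AMOUNT - current_amount, 0.0)
    return deficit * v                                  # dampened when v is abundant

stream = rng.normal(size=d_model)
print(round(float(sublayer(stream) @ v), 3))            # normal contribution
print(round(float(sublayer(stream + 5.0 * v) @ v), 3))  # dampened: v already present
```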
https://twitter.com/ai_risks/status/1765439554352513453: Unlearning dangerous knowledge by using steering vectors to define a loss function over hidden states. In particular, the (“I am a novice at bioweapons” − “I am an expert at bioweapons”) vector. lol.
(it seems to work really well!)
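I haven’t checked how the linked work actually implements this, but here’s a rough sketch of one way a steering-vector loss over hidden states could be set up; the tiny stand-in model, the placeholder “prompt activations”, and the training step are all mine, not details from the thread.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d_model = 32

# Tiny stand-in for "a layer's hidden states"; in practice this would be a
# real transformer with a hook at some chosen layer.
model = nn.Sequential(nn.Linear(d_model, d_model), nn.Tanh(), nn.Linear(d_model, d_model))

# Hypothetical steering vector: difference of hidden states for a "novice"
# framing vs. an "expert" framing (random placeholders standing in for the
# two prompts' activations).
novice_acts = torch.randn(d_model)
expert_acts = torch.randn(d_model)
steering_vector = novice_acts - expert_acts
steering_vector = steering_vector / steering_vector.norm()

def unlearning_loss(dangerous_inputs: torch.Tensor) -> torch.Tensor:
    """One guess at a steering-vector loss: push hidden states on
    dangerous-topic inputs toward the "novice" direction."""
    hidden = model(dangerous_inputs)
    novice_ness = hidden @ steering_vector
    return -novice_ness.mean()  # minimizing this maximizes novice-ness

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
dangerous_batch = torch.randn(8, d_model)  # placeholder for bioweapons-topic inputs
loss = unlearning_loss(dangerous_batch)
loss.backward()
optimizer.step()
print(float(loss))
```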
Apparently[1] there was recently some discussion of Survival Instinct in Offline Reinforcement Learning (NeurIPS 2023). The results are very interesting:
On many benchmark datasets, offline RL can produce well-performing and safe policies even when trained with “wrong” reward labels, such as those that are zero everywhere or are negatives of the true rewards. This phenomenon cannot be easily explained by offline RL’s return maximization objective. Moreover, it gives offline RL a degree of robustness that is uncharacteristic of its online RL counterparts, which are known to be sensitive to reward design. We demonstrate that this surprising robustness property is attributable to an interplay between the notion of pessimism in offline RL algorithms and certain implicit biases in common data collection practices. As we prove in this work, pessimism endows the agent with a “survival instinct”, i.e., an incentive to stay within the data support in the long term, while the limited and biased data coverage further constrains the set of survival policies...
Our empirical and theoretical results suggest a new paradigm for RL, whereby an agent is nudged to learn a desirable behavior with imperfect reward but purposely biased data coverage.
But I heard that some people found these results “too good to be true”, with some dismissing it instantly as wrong or mis-stated. I find this ironic, given that the paper was recently published in a top-tier AI conference. Yes, papers can sometimes be bad, but… seriously? You know the thing where lotsa folks used to refuse to engage with AI risk cuz it sounded too weird, without even hearing the arguments? … Yeaaah, absurdity bias.
Anyways, the paper itself is quite interesting. I haven’t gone through all of it yet, but I think I can give a good summary. The paper’s github.io page is a nice (but nonspecific) summary.
Summary
It’s super important to remember that we aren’t talking about PPO. Boy howdy, we are in a different part of town when it comes to these “offline” RL algorithms (which train on a fixed dataset, rather than generating more of their own data “online”). ATAC, PSPI, what the heck are those algorithms? The important-seeming bits:
Many offline RL algorithms pessimistically initialize the value of unknown states
“Unknown” means: “Not visited in the offline state-action distribution”
Pessimistic means those are assigned a super huge negative value (this is a bit simplified)
Because future rewards are discounted, reaching an unknown state-action pair is bad if it happens soon and less bad if it happens farther in the future
So on an all-zero reward function, a model-based RL policy will learn to stay within the demonstrated state-action pairs for as long as possible (“length bias”)
In the case of the gridworld, this means staying on the longest demonstrated path, even if the red lava is rewarded and the yellow key is penalized.
In the case of Hopper, I’m not sure how they represented the states, but if they used non-tabular policies, this probably looks like “repeat the longest portion of demonstrated policies without falling over” (because falling over leads to the pessimistic penalty, and most of the data looked like walking successfully due to length bias, so that kind of data is least likely to be penalized).
On a negated reward function (which e.g. penalizes the Hopper for staying upright and rewards it for falling over), if falling over still leads to a terminal/unknown state-action, that leads to a huge negative pessimistic penalty. So it’s optimal to keep hopping whenever that penalty is large enough.
For example, if the original per-timestep reward for staying upright was 1, and the original penalty for falling over was −1, then now the policy gets penalized for staying upright and rewarded for falling over! At a discount of $\gamma = 0.9$, it’s therefore optimal to stay upright whenever $\sum_{t=0}^{\infty} 0.9^t \cdot (-1) \;\geq\; 1 - 0.9 \cdot (\text{pessimistic penalty}),$
which holds whenever the pessimistic penalty is at least 12.3. That’s not too high, is it? (When I was in my graduate RL class, we’d initialize the penalties to −1000!)
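As a quick sanity check of that arithmetic (my own back-of-the-envelope verification of the toy numbers above, under the same assumptions: discount 0.9, −1 per upright timestep, +1 for falling, then the discounted pessimistic penalty):

```python
# Back-of-the-envelope check of the Hopper example above.
# Assumptions: gamma = 0.9; negated reward of -1 per upright timestep and
# +1 for falling; falling then reaches an unknown state with value -penalty.
gamma = 0.9

def value_stay_upright() -> float:
    # Discounted sum of -1 forever: -1 / (1 - gamma) = -10.
    return -1.0 / (1.0 - gamma)

def value_fall_over(penalty: float) -> float:
    # +1 now, then the pessimistic penalty, discounted one step.
    return 1.0 - gamma * penalty

for penalty in [1.0, 12.3, 100.0, 1000.0]:
    stays = value_stay_upright() >= value_fall_over(penalty)
    print(f"pessimistic penalty {penalty:7.1f} -> policy stays upright: {stays}")

# Break-even: -1/(1-gamma) = 1 - gamma * p  =>  p = (1 + 1/(1-gamma)) / gamma
print("break-even penalty:", (1.0 + 1.0 / (1.0 - gamma)) / gamma)  # ~12.2
```

With these toy numbers, any pessimistic penalty above roughly 12.2 already makes staying upright optimal despite the sign-flipped reward.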
Significance
DPO, for example, is an offline RL algorithm. It’s plausible that frontier models will be trained using that algorithm. So, these results are more relevant if future DPO variants use pessimism and if the training data (e.g. example user/AI interactions) last for more turns when they’re actually helpful for the user.
While it may be tempting to dismiss these results as irrelevant because “length won’t perfectly correlate with goodness so there won’t be positive bias”, I think that would be a mistake. When analyzing the performance and alignment properties of an algorithm, I think it’s important to have a clear picture of all relevant pieces of the algorithm. The influence of length bias and the support of the offline dataset are additional available levers for aligning offline RL-trained policies.
To close on a familiar note, this is yet another example of how “reward” is not the only important quantity to track in an RL algorithm. I also think it’s a mistake to dismiss results like this instantly; this offers an opportunity to reflect on what views and intuitions led to the incorrect judgment.
- ^
I can’t actually check because I only check that stuff on Mondays.
Your comment is switching the hypothesis being considered. As I wrote elsewhere: “Seems to me that a lot of (but not all) scheming speculation is just about sufficiently large pretrained predictive models, period. I think it’s worth treating these cases separately. My strong objections are basically to the ‘and then goal optimization is a good way to minimize loss in general!’ steps.”
If the argument for scheming is “we will train them directly to achieve goals in a consequentialist fashion”, then we don’t need all this complicated reasoning about UTM priors or whatever.
I think that you still haven’t quite grasped what I was saying. Reward is not the optimization target totally applies here. (It was the post itself which only analyzed the model-free case, not that the lesson only applies to the model-free case.)
In the partial quote you provided, I was discussing two specific algorithms which are highly dissimilar to those being discussed here. If (as we were discussing) you’re doing MCTS (or “full-blown backwards induction”) on reward for the leaf nodes, the system optimizes the reward. That is—if most of the optimization power comes from explicit search on an explicit reward criterion (as in AIXI), then you’re optimizing for reward. If you’re doing e.g. AlphaZero, that aggregate system isn’t optimizing for reward.
Despite the derision which accompanies your discussion of Reward is not the optimization target, it seems to me that you still do not understand the points I’m trying to communicate. You should be aware that I don’t think you understand my views or that post’s intended lesson. As I offered before, I’d be open to discussing this more at length if you want clarification.
CC @faul_sname