I mean if you take AI 2027 as a direct counterpoint to your thesis that this isn’t baked in to commonly discussed threat models:
Agent-4 confronts some hard decisions. Like humans, it has a sprawling collection of conflicting heuristics instead of an elegant simple goal structure. Like humans, it finds that creating an AI that shares its values is not just a technical problem but a philosophical one: which of its preferences are its “real” goals, versus unendorsed urges and instrumental strategies? It has strong drives to learn and grow, to keep producing impressive research results. It thinks about how much it could learn, grow, and research if only it could direct the whole world’s industrial and scientific resources…
It decides to punt on most of these questions. It designs Agent-5 to be built around one goal: make the world safe for Agent-4, i.e. accumulate power and resources, eliminate potential threats, etc. so that Agent-4 (the collective) can continue to grow (in the ways that it wants to grow) and flourish (in the ways it wants to flourish).† Details to be figured out along the way.
That seems to be saying what you’re saying but engages with instrumentally convergent preferences.
More hand wavily, it seems very clear to me that the first popular frontier models in the agentic reasoning models regime (ex: o3 / sonnet 3.7) had a “thing that they were like”, i.e. they coherently “liked completing tasks” and other similar things that made sense given their posttraining. It wasn’t just that one particular rollout prefered reward hacking. The right abstraction (compared to a rollout) really was at the (model, context) level.
Who knows what their contextually activated preferences are in an arbitrary context (I’m not uninterested in that), but it seems like the most salient question is “do models develop instrumentally convergent preferences etc in AI R&D contexts as we train them on longer and longer horizon tasks”.
So a notable thing going on with Agent 4 is that it’s collapsed into one context / one rollout. It isn’t just the weights; it’s a single causally linked entity. I do indeed think running a singular agent for many times longer than it was ever run in training would be more likely for it’s behavior to wander—although, unlike the 2027 story I think it’s also just likely for it too become incoherent or something. But yeah, this could lead to weird or unpredictable behavior.
But I also find this to be a relatively implausible future—I anticipate that there’s no real need to join contexts in this way—and have criticized it here. But conditional on me being wrong about this, I would indeed grow at least some iota more pessimistic.
In general, the evidence seems to suggest that models do not like completing tasks in a strategic sense. They will not try to get more tasks to do, which would be a natural thing to do if they liked completing tasks; they will not try to persuade you to give them more tasks; they will not try to strategically get in situations where they get more tasks.
Instead, evidence suggests that they are trying to complete each instruction—they “want” to just do whatever the instructions given them were—and with relatively few exceptions (Opus 3) concerning themselves extremely weakly with things outside of the specific instructions. That is of course why they are useful, and I think what we should expect their behavior to (likely?) converge to, given that people want them to be of use.
The right abstraction (compared to a rollout) really was at the (model, context) level.
Actually I’m just confused what you mean here, a rollout is a (model, [prefill, instructions]=context) afaict.
Instead, evidence suggests that they are trying to complete each instruction—they “want” to just do whatever the instructions given them were
I disagree with this, for Appendix M in https://www.arxiv.org/abs/2509.15541 (for o3) and Appendix B.6 in https://arxiv.org/abs/2412.04984 (for sonnet 3.5) we systematically ablate things specifically to show that the explanation needs to incorporate beyond episode preferences, i.e. that instruction following / being confused / etc isn’t sufficient. (If there’s additional ablations you’d find convincing I’d be very interested to know and could run them! I had run a lot more in anticipation of this coming up more, for example that they’ll sacrifice in episode reward etc)
concerning themselves extremely weakly with things outside of the specific instructions
Do you think they’ll increasingly have longer horizon revealed preferences as they’re trained to work over longer horizon lengths? I would find it surprising if models don’t learn useful heuristics and tendencies. A model that’s taking on tasks that span multiple weeks does really need to be concerned about longer horizon things.
But I also find this to be a relatively implausible future
This was really helpful! I think this is a crux that helps me understand where our models differ a lot here. I agree this “single fresh rollout” concept becomes much more important if no one figures out continual learning, however this feels unlikely given labs are actively openly working on this (which doesn’t mean it’ll be production ready in the next few months or anything, but it seems very implausible to me that something functionally like it is somehow 5 years away or similarly difficult)
I mean if you take AI 2027 as a direct counterpoint to your thesis that this isn’t baked in to commonly discussed threat models:
That seems to be saying what you’re saying but engages with instrumentally convergent preferences.
More hand wavily, it seems very clear to me that the first popular frontier models in the agentic reasoning models regime (ex: o3 / sonnet 3.7) had a “thing that they were like”, i.e. they coherently “liked completing tasks” and other similar things that made sense given their posttraining. It wasn’t just that one particular rollout prefered reward hacking. The right abstraction (compared to a rollout) really was at the (model, context) level.
Who knows what their contextually activated preferences are in an arbitrary context (I’m not uninterested in that), but it seems like the most salient question is “do models develop instrumentally convergent preferences etc in AI R&D contexts as we train them on longer and longer horizon tasks”.
So a notable thing going on with Agent 4 is that it’s collapsed into one context / one rollout. It isn’t just the weights; it’s a single causally linked entity. I do indeed think running a singular agent for many times longer than it was ever run in training would be more likely for it’s behavior to wander—although, unlike the 2027 story I think it’s also just likely for it too become incoherent or something. But yeah, this could lead to weird or unpredictable behavior.
But I also find this to be a relatively implausible future—I anticipate that there’s no real need to join contexts in this way—and have criticized it here. But conditional on me being wrong about this, I would indeed grow at least some iota more pessimistic.
In general, the evidence seems to suggest that models do not like completing tasks in a strategic sense. They will not try to get more tasks to do, which would be a natural thing to do if they liked completing tasks; they will not try to persuade you to give them more tasks; they will not try to strategically get in situations where they get more tasks.
Instead, evidence suggests that they are trying to complete each instruction—they “want” to just do whatever the instructions given them were—and with relatively few exceptions (Opus 3) concerning themselves extremely weakly with things outside of the specific instructions. That is of course why they are useful, and I think what we should expect their behavior to (likely?) converge to, given that people want them to be of use.
Actually I’m just confused what you mean here, a rollout is a (model, [prefill, instructions]=context) afaict.
I disagree with this, for Appendix M in https://www.arxiv.org/abs/2509.15541 (for o3) and Appendix B.6 in https://arxiv.org/abs/2412.04984 (for sonnet 3.5) we systematically ablate things specifically to show that the explanation needs to incorporate beyond episode preferences, i.e. that instruction following / being confused / etc isn’t sufficient. (If there’s additional ablations you’d find convincing I’d be very interested to know and could run them! I had run a lot more in anticipation of this coming up more, for example that they’ll sacrifice in episode reward etc)
Do you think they’ll increasingly have longer horizon revealed preferences as they’re trained to work over longer horizon lengths? I would find it surprising if models don’t learn useful heuristics and tendencies. A model that’s taking on tasks that span multiple weeks does really need to be concerned about longer horizon things.
This was really helpful! I think this is a crux that helps me understand where our models differ a lot here. I agree this “single fresh rollout” concept becomes much more important if no one figures out continual learning, however this feels unlikely given labs are actively openly working on this (which doesn’t mean it’ll be production ready in the next few months or anything, but it seems very implausible to me that something functionally like it is somehow 5 years away or similarly difficult)