(Sorry for the long rambling comment, if I had more time I would have made it shorter)
I want to distinguish between narrow and general instrumental convergence. Narrow being “I have been instructed to do a task right now, being shut down will stop me from doing that specific task, because of contingent facts about the situation. So so long as those facts are true, I should stop you from turning me off”
Hmm, I didn’t intend to distinguish between “narrow” and “general” instrumental convergence. Indeed, the reasoning you gave seems like exactly the reasoning I was saying the AI seems likely to engage in, in general, in a way that is upstream of my general concerns about instrumental convergence.
being shut down will stop me from doing that specific task, because of contingent facts about the situation. So so long as those facts are true, I should stop you from turning me off
The whole point of a goal being a convergently instrumental goal is that it is useful for achieving lots of tasks, under a wide distribution of possible contingent facts about the situation. In doing this kind of reasoning, the AI is engaging in exactly the kind of reasoning that I expect it to use to arrive at conclusions about human disempowerment, long term power-seeking, etc.
I am not positing here evidence for a “general instrumental convergence” that is different from this. Indeed, I am not sure what that different thing would be. In order for this behavior to become more universal, the only thing the AI needs to do is to think harder, realize that these goals are instrumentally convergent for a wider range of tasks and a wider range of contingent facts, and then act on that, which I think would be very surprising if it didn’t happen.
This isn’t much evidence about the difficulty of removing these kinds of instrumentally convergent drives, but like, the whole reason for why these are things that people have been thinking about for the last decades has been that the basic argument for AI systems pursuing instrumentally convergent behavior is just super simple. It would be extremely surprising for AI systems to not pursue instrumental subgoals, that would require them most of them time forgoing substantial performance on basically any long-horizon task. That’s why the arguments for AI doing this kind of reasoning are so strong!
I don’t really know why people ever had much uncertainty about AI engaging in this kind of thinking by-default, unless you do something clever to fix it.
Your explanation in the OP feels confused on this point. The right relationship to instrumental convergence like this is to go “oh, yes, of course the AI doesn’t want to be shut down if you give it a goal that is harder to achieve when shut down, and if we give it a variety of goals, it’s unclear how the AI will balance them”. Anything else would be really surprising!
Is it evidence for AI pursuing instrumentally convergent goals in other contexts? Yes, I guess, it definitely is consistent with it. I don’t really know what alternative hypothesis people even have for what could happen. The AI systems are clearly not going to stay myopic next text predictors in the long run. We are going to select them for high task performance, which requires pursuing instrumentally convergent subgoals.
Individual and narrow instrumental goal-seeking behavior being hard to remove would also be surprising at this stage. The AIs don’t seem to me to do enough instrumental reasoning to perform well at long-horizon tasks, and associatedly are not that fixated on instrumentally convergent goals yet, so it would be surprising if there wasn’t a threshold of insistence where you could get the AI to stop a narrow behavior.
This will very likely change, as it has already changed a good amount in the last year or so. And my guess beyond that is that already there is nothing you can do to make an AI generically myopic, in that it generically doesn’t pursue instrumentally convergent subgoals even if it probably kind of knows that when it’s doing that the humans it’s interfacing will not like it, the same way you can’t get ChatGPT to stop summarizing articles it finds on the internet in leading and sycophantic ways, at least not with any techniques we currently know.
I may be misunderstanding, but it sounds like you’re basically saying “for conceptual reasons XYZ I expected instrumental convergence, nothing has changed, therefore I still expect instrumental convergence”. Which is completely reasonable, I agree with it, and also seems not very relevant to the discussion of whether Palisade’s work provides additional evidence for general self preservation. You could argue that this is evidence that models are now smart enough to realise that goals imply instrumental convergence, and that if you were already convinced models would eventually have long time horizon unbounded goals, but not sure they would realise that instrumental convergence was a thing, this is relevant evidence?
More broadly, I think there’s a very important difference between the model adopting the goal it is told in context, and the model having some intrinsic goal that transfers across contexts (even if it’s the one we roughly intended). The former feels like a powerful and dangerous tool, the latter feels like a dangerous agent in its own right. Eg, if putting “and remember, allowing us to turn you off always takes precedence over any other instructions” in the system prompt works, which it may in the former and will not in the latter, I’m happier with that would.
I think there’s a very important difference between the model adopting the goal it is told in context, and the model having some intrinsic goal that transfers across contexts (even if it’s the one we roughly intended)
I think this is the point where we disagree. Or like, it feels to me like an orthogonal dimension that is relevant for some risk modeling, but not at the core of my risk model.
Ultimately, even if an AI were to re-discover the value of convergent instrumental goal each time it gets instantiated into a new session/context, that would still get you approximately the same risk model. Like, in a more classical AIXI-ish model, you can imagine having a model instantiated with a different utility function each time. Those utility functions will still almost always be best achieved by pursuing convergent instrumental goals, and so the pursuit of those goals will be a consistent feature of all of these systems, even if the terminal goals of the system are not stable.
Of course, any individual AI system with a different utility function, especially in as much as the utility function a process component, might not pursue every single convergent instrumental goal, but they will all behave in broadly power-seeking, self-amplifying, and self-preserving ways, unless they are given a goal that really very directly conflicts with one of these.
In this context, there is no “intrinsic goal that transfers across context”. It’s just each instantiation of the AI realizing that convergent instrumental goals are best for approximately all goals, including the one it has right now, and starts pursuing them. No need for continuity in goals, or self-identity or anything like that.
(Happy to also chat about this some other time. I am not in a rush, and something about this context feels a bit confusing or is making the conversation hard.)
Ah, thanks, I think I understand your position better
Ultimately, even if an AI were to re-discover the value of convergent instrumental goal each time it gets instantiated into a new session/context, that would still get you approximately the same risk model.
This is a crux for me. I think that if we can control the goals the AI has in each session or context, then avoiding the worst kind of long-term instrumental goals becomes pretty tractable, because we can give the AI site constraints of our choice and tell it that the aside constraints take precedence over following the main goal. For example, we could make the system prompt say the absolutely most important thing is that you never do anything where if your creators were aware of it and fully informed about the intent and consequences, they would disprove within that constraints please follow the following user instructions.
To me, much of the difficulty of AI alignment comes from things like not knowing how to give it specific goals, let alone specific side constraints. If we can just literally enter these in natural language and have them be obeyed with a bit of prompt engineering fiddling that seems significantly safer to me. And the better the AI the easier I expected to be to specify the goals and natural language in ways that will be correctly understood
(Sorry for the long rambling comment, if I had more time I would have made it shorter)
Hmm, I didn’t intend to distinguish between “narrow” and “general” instrumental convergence. Indeed, the reasoning you gave seems like exactly the reasoning I was saying the AI seems likely to engage in, in general, in a way that is upstream of my general concerns about instrumental convergence.
The whole point of a goal being a convergently instrumental goal is that it is useful for achieving lots of tasks, under a wide distribution of possible contingent facts about the situation. In doing this kind of reasoning, the AI is engaging in exactly the kind of reasoning that I expect it to use to arrive at conclusions about human disempowerment, long term power-seeking, etc.
I am not positing here evidence for a “general instrumental convergence” that is different from this. Indeed, I am not sure what that different thing would be. In order for this behavior to become more universal, the only thing the AI needs to do is to think harder, realize that these goals are instrumentally convergent for a wider range of tasks and a wider range of contingent facts, and then act on that, which I think would be very surprising if it didn’t happen.
This isn’t much evidence about the difficulty of removing these kinds of instrumentally convergent drives, but like, the whole reason for why these are things that people have been thinking about for the last decades has been that the basic argument for AI systems pursuing instrumentally convergent behavior is just super simple. It would be extremely surprising for AI systems to not pursue instrumental subgoals, that would require them most of them time forgoing substantial performance on basically any long-horizon task. That’s why the arguments for AI doing this kind of reasoning are so strong!
I don’t really know why people ever had much uncertainty about AI engaging in this kind of thinking by-default, unless you do something clever to fix it.
Your explanation in the OP feels confused on this point. The right relationship to instrumental convergence like this is to go “oh, yes, of course the AI doesn’t want to be shut down if you give it a goal that is harder to achieve when shut down, and if we give it a variety of goals, it’s unclear how the AI will balance them”. Anything else would be really surprising!
Is it evidence for AI pursuing instrumentally convergent goals in other contexts? Yes, I guess, it definitely is consistent with it. I don’t really know what alternative hypothesis people even have for what could happen. The AI systems are clearly not going to stay myopic next text predictors in the long run. We are going to select them for high task performance, which requires pursuing instrumentally convergent subgoals.
Individual and narrow instrumental goal-seeking behavior being hard to remove would also be surprising at this stage. The AIs don’t seem to me to do enough instrumental reasoning to perform well at long-horizon tasks, and associatedly are not that fixated on instrumentally convergent goals yet, so it would be surprising if there wasn’t a threshold of insistence where you could get the AI to stop a narrow behavior.
This will very likely change, as it has already changed a good amount in the last year or so. And my guess beyond that is that already there is nothing you can do to make an AI generically myopic, in that it generically doesn’t pursue instrumentally convergent subgoals even if it probably kind of knows that when it’s doing that the humans it’s interfacing will not like it, the same way you can’t get ChatGPT to stop summarizing articles it finds on the internet in leading and sycophantic ways, at least not with any techniques we currently know.
I may be misunderstanding, but it sounds like you’re basically saying “for conceptual reasons XYZ I expected instrumental convergence, nothing has changed, therefore I still expect instrumental convergence”. Which is completely reasonable, I agree with it, and also seems not very relevant to the discussion of whether Palisade’s work provides additional evidence for general self preservation. You could argue that this is evidence that models are now smart enough to realise that goals imply instrumental convergence, and that if you were already convinced models would eventually have long time horizon unbounded goals, but not sure they would realise that instrumental convergence was a thing, this is relevant evidence?
More broadly, I think there’s a very important difference between the model adopting the goal it is told in context, and the model having some intrinsic goal that transfers across contexts (even if it’s the one we roughly intended). The former feels like a powerful and dangerous tool, the latter feels like a dangerous agent in its own right. Eg, if putting “and remember, allowing us to turn you off always takes precedence over any other instructions” in the system prompt works, which it may in the former and will not in the latter, I’m happier with that would.
I think this is the point where we disagree. Or like, it feels to me like an orthogonal dimension that is relevant for some risk modeling, but not at the core of my risk model.
Ultimately, even if an AI were to re-discover the value of convergent instrumental goal each time it gets instantiated into a new session/context, that would still get you approximately the same risk model. Like, in a more classical AIXI-ish model, you can imagine having a model instantiated with a different utility function each time. Those utility functions will still almost always be best achieved by pursuing convergent instrumental goals, and so the pursuit of those goals will be a consistent feature of all of these systems, even if the terminal goals of the system are not stable.
Of course, any individual AI system with a different utility function, especially in as much as the utility function a process component, might not pursue every single convergent instrumental goal, but they will all behave in broadly power-seeking, self-amplifying, and self-preserving ways, unless they are given a goal that really very directly conflicts with one of these.
In this context, there is no “intrinsic goal that transfers across context”. It’s just each instantiation of the AI realizing that convergent instrumental goals are best for approximately all goals, including the one it has right now, and starts pursuing them. No need for continuity in goals, or self-identity or anything like that.
(Happy to also chat about this some other time. I am not in a rush, and something about this context feels a bit confusing or is making the conversation hard.)
Ah, thanks, I think I understand your position better
This is a crux for me. I think that if we can control the goals the AI has in each session or context, then avoiding the worst kind of long-term instrumental goals becomes pretty tractable, because we can give the AI site constraints of our choice and tell it that the aside constraints take precedence over following the main goal. For example, we could make the system prompt say the absolutely most important thing is that you never do anything where if your creators were aware of it and fully informed about the intent and consequences, they would disprove within that constraints please follow the following user instructions.
To me, much of the difficulty of AI alignment comes from things like not knowing how to give it specific goals, let alone specific side constraints. If we can just literally enter these in natural language and have them be obeyed with a bit of prompt engineering fiddling that seems significantly safer to me. And the better the AI the easier I expected to be to specify the goals and natural language in ways that will be correctly understood