TL;DR: I think you’re right that much inner alignment conversation is overly-fanciful. But ‘LLMs will do what they’re told’ and ‘deceptive alignment is unjustified’ are non sequiturs. Maybe we mean something different by ‘LLM’? Either way, I think there’s a case for ‘inner homunculi’ after all.
I have always thought that the ‘inner’ conversation is missing something. On the one hand it’s moderately-clearly identifying a type of object, which is a point in favour. Further, as you’ve said yourself, ‘reward is not the optimisation target’ is very obvious but it seems to need spelling out repeatedly (a service the ‘inner/outer’ distinction provides). On the other hand the ‘inner alignment’ conversation seems in practice to distract from the actual issue, which is ‘some artefacts are (could be) doing their own planning/deliberation/optimisation’, and ‘inner’ is only properly pointing at a subset of those. (We can totally build, including accidentally, artefacts which do this ‘outside’ the weights of a NN.)
You’ve indeed pointed at a few of the more fanciful parts of that discussion here[1], like steganographic gradient hacking. NB steganography per se isn’t entirely wild; we see ML systems use available bandwidth to develop unintuitive communication protocols a lot, e.g. in MARL.
I can only assume you mean something very narrow and limited by ‘continuing to scale up LLMs’[2]. I think without specifying what you mean, your words are likely to be misconstrued.
With that said, I think something not far off the ‘inner consequentialist’ is entirely plausible and consistent with observations. In short, how do plan-like outputs emerge without a planning-like computation?[3] I’d say ‘undesired, covert, and consistent-across-situations inner goals’ is a weakman of deceptive alignment. Specialising to LLMs, the point is that they’ve exhibited poorly-understood somewhat-general planning capability, and that not all ways of turning that into competent AI assistants[4] result in aligned planners. Planners can be deceptive (and we have evidence of this).
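The ‘giant lookup table’ point in the footnote can be made quantitative with back-of-envelope arithmetic. The vocabulary and context sizes below are illustrative assumptions of mine, not any particular model’s figures:

```python
import math

# Back-of-envelope: a literal lookup table (GLUT) mapping every possible
# context to a next token needs one entry per possible context.
# Illustrative assumptions, not any real model's parameters:
vocab_size = 50_000     # typical-ish BPE vocabulary
context_length = 2_048  # a modest context window

# Number of distinct contexts = vocab_size ** context_length,
# so work in log10 to keep the number printable:
log10_entries = context_length * math.log10(vocab_size)
print(f"log10(table entries) ≈ {log10_entries:.0f}")  # ≈ 9623

# For scale: the observable universe holds roughly 10^80 atoms,
# so a GLUT of exactly the right data is not a practical mechanism.
```

So whatever plan-like outputs an LLM produces generalisably, memorising such a table is not a candidate explanation.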
I think your step from ‘there’s very little reason to expect “undesired, covert, and consistent-across-situations inner goals”...’ to ‘LLMs will continue doing what they’re told...’ is therefore a boldly overconfident non sequitur. (I recognise that the second step there is presented as ‘an alternative’, so it’s not clear whether you actually think that.)
Incidentally, I’d go further and say that, to me (but I guess you disagree?) it’s at least plausible that some ways of turning LLM planning capability into a competent AI assistant actually do result in ‘undesired, covert, and consistent-across-situations inner goals’! Sketch in thread.
FWIW I’m more concerned about highly competent planning coming about by one or another kind of scaffolding, but such a system can just as readily be deceptively aligned as a ‘vanilla’ LLM. If enough of the planning is externalised and understandable and actually monitored in practice, this might be easier to catch.
I’m apparently more sympathetic to fancy than you; in the absence of mechanistic observables, and in order to extrapolate/predict, we have to apply some imagination. But I agree there’s a lot of fancy; some of it even looks like hallucination getting reified by self-conditioning(!), and that effort could probably be more efficiently spent.
I’m presently (quite badly IMO) trying to anticipate the shape of the next big step in get-things-done/autonomy.
I’ve had a hunch for a while that temporally abstract planning and prediction is key. I strongly suspect you can squeeze more consequential planning out of shortish serial depth than most people give credit for. This is informed by past RL-flavoured stuff like MuZero and its limitations, by observations of humans and animals (inc myself), and by general CS/algos thinking. Actually this is where I get on the LLM train. It seems to me that language is an ideal substrate for temporally abstract planning and prediction, and lots of language data in the wild exemplifies this. NB I don’t think GPTs or LLMs are uniquely on this trajectory, just getting a big bootstrap.
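A toy version of the serial-depth hunch (function names and numbers are mine, purely illustrative): decomposing a plan of 16 primitive moves into 4 abstract steps of 4 primitives each needs fewer sequential decisions than planning the primitives directly, because refinements of distinct abstract steps don’t depend on one another.

```python
# Toy model of 'temporally abstract planning buys serial depth'.
# All names here are illustrative, not from any real planner.

def serial_depth_flat(n_primitives: int) -> int:
    # One decision per primitive action, strictly in sequence.
    return n_primitives

def serial_depth_hierarchical(n_primitives: int, branching: int) -> int:
    # First commit to an abstract plan of n/branching steps, then
    # refine each abstract step into `branching` primitives. The
    # refinements are independent of each other, so they contribute
    # only one refinement chain of length `branching` to the
    # longest serial dependency chain.
    abstract_steps = n_primitives // branching
    return abstract_steps + branching

print(serial_depth_flat(16))             # 16 sequential decisions
print(serial_depth_hierarchical(16, 4))  # 4 + 4 = 8
```

Iterating the decomposition pushes this towards logarithmic depth, which is the sense in which shortish serial depth may carry more consequential planning than it appears to.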
Now, if I had to make the most concrete ‘inner homunculus’ case off the cuff, I’d start in the vicinity of Good Regulator, except a more conjectury version regarding systems-predicting-planners (I am working on sharpening this). Maybe I’d point at Janus’ Simulators post. I suspect there might be something like an impossibility/intractability theorem for predicting planners of the right kind without running a planner of a similar kind. (Handwave!)
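A degenerate toy of that conjecture (hypothetical code, not a theorem): to predict an optimal Nim player’s next move, the obvious predictor simply re-runs the optimal-play computation, i.e. prediction collapses into planning.

```python
# Toy: predicting a planner by running the same planning computation.
# Single-pile Nim, take 1..max_take stones; optimal play leaves the
# opponent a multiple of (max_take + 1).

def optimal_nim_move(stones: int, max_take: int = 3) -> int:
    leave = stones % (max_take + 1)
    return leave if leave != 0 else 1  # losing position: take 1 and hope

def predict_opponent(stones: int) -> int:
    # Our best 'predictor' of an optimal player just is the planner.
    return optimal_nim_move(stones)

assert predict_opponent(10) == optimal_nim_move(10)  # both take 2
```

The conjecture is that something analogous holds for predicting planners in general: there is no substantially cheaper faithful predictor than a computation of the same kind. (Handwave, as noted.)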
I’d observe that GPTs can predict planning-looking actions, including sometimes without CoT. (NOTE here’s where the most concrete and proximal evidence is!) This includes characters engaging in deceit. I’d invoke my loose reasoning regarding temporal abstraction to support the hypothesis that this is ‘more than mere parroting’, and maybe fish for examples quite far from obvious training settings to back this up. Interp would be super, of course! (Relatedly, some of your work on steering policies via activation editing has sparked my interest.)
I think maybe this is enough to transfer some sense of what I’m getting at? At this point, given some (patchy) theory, the evidence is supportive of (among other hypotheses) an ‘inner planning’ hypothesis (of quite indeterminate form).
Finally, one kind or another of ‘conditioning’ is hypothesised to reinforce the consequentialist component(s) ‘somehow’ (handwave again, though I’m hardly the only one guilty of handwaving about RLHF et al). I think it’s appropriate to be uncertain what form the inner planning takes, what form the conditioning can/will take, and what the eventual results of that are. Interested in evidence and theory around this area.
So, what are we talking about when we say ‘LLM’? Plain GPTs? Well, they definitely don’t ‘do what they’re told’[1]. They exhibit planning-like outputs with the right prompts, typically associated with ‘simulated characters’ at some resolution or other. What about RLHFed GPTs? Well, they sometimes ‘do what they’re told’. They also exhibit planning-like outputs with the right prompts, and it’s mechanistically very unclear how they’re getting them.
[1] unless you mean predicting the next token (I’m pretty sure you don’t mean this?), which they do quite well, though we don’t know how, nor when it’ll fail
[2] Like, already the default scaling work includes multimodality and native tool/API-integration and interleaved fine-tuning with self-supervised etc.
[3] Yes, a giant lookup table of exactly the right data could do this. But, generalisably, practically and tractably...? We don’t believe NNs are GLUTs.
[4] prompted, fine-tuned, RLHFed, or otherwise-conditioned LLM (ex hypothesi goal-pursuing), scaffolded, some yet-to-be-designed system…