random brainstorming about optimizer-ness vs controller/lookup-table-ness:
let’s think of optimizers as things that reliably steer a broad set of initial states to some specific terminal state
seems like there are two things we care about (at least):
retargetability: it should be possible to change the policy to achieve different terminal states (but this condition alone is too weak, because LUTs trivially meet it: we can always just completely rewrite the LUT. maybe the actual condition we want is that the complexity of the retargeting map is less than the complexity of just writing out the diff directly, or something?)
(in other words, it should in some sense be “easy” to rewrite a small subset of the policy, or otherwise make a simple diff, to change what final goal is achieved)
(maybe related idea: instrumental convergence means most goals reuse lots of strategies/circuitry between each other)
robustness: it should reliably achieve its goal across a wide range of initial states.
a LUT trained with a little bit of RL will be neither retargetable nor robust. a LUT trained with galactic amounts of RL to handle every possible initial state optimally is robust but not retargetable (this is reasonable: robustness is a property only of the functional behavior, so whether it’s a LUT internally shouldn’t matter; retargetability is a property of the actual implementation, so it does matter). a big search loop (the most extreme of which is AIXI, which is 100% search) is very retargetable, and is robust to a degree that depends on how hard it searches.
(however, in practice with normal amounts of compute a LUT is never robust; this thought experiment only highlights differences that remain in the limit)
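(a minimal toy sketch of the LUT-vs-search contrast above — everything here (the 1d gridworld, the dynamics, the distance objective) is made up purely for illustration, not a claim about real policies. the point is just the size of the edit needed to retarget each one:

```python
from itertools import product

STATES = list(range(10))
ACTIONS = [-1, 0, +1]

def transition(state, action):
    # hypothetical toy dynamics: actions nudge the state left/right
    return max(0, min(9, state + action))

# lookup table: the goal is baked into every entry, so retargeting
# means rewriting the whole table (diff size ~ |STATES|)
def make_lut(goal):
    return {s: (1 if s < goal else -1 if s > goal else 0) for s in STATES}

# search loop: the goal is a single swappable parameter, so retargeting
# is an O(1) diff (just pass a different goal)
def search_policy(state, goal, horizon=3):
    best_action, best_dist = 0, float("inf")
    for plan in product(ACTIONS, repeat=horizon):
        s = state
        for a in plan:
            s = transition(s, a)
        if abs(s - goal) < best_dist:
            best_action, best_dist = plan[0], abs(s - goal)
    return best_action

lut7 = make_lut(goal=7)           # retarget: rebuild the entire dict
act = search_policy(3, goal=7)    # retarget: change one argument
```

the LUT here happens to cover all ten states only because the toy state space is tiny; with a realistic state space that enumeration is exactly what fails, which is the robustness point above.)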
what do we care about these properties for?
efficacy of filtering bad behaviors in pretraining: sufficiently good robustness means doing things that achieve the goal even in states it never saw during training, and beyond that, even in states that require strategies it never saw during training. if we filter deceptive alignment out of the data, then the model has to do some generalizing to figure out that deception is a strategy it can use to better accomplish its goal. (as a sanity check that robustness is the relevant property here: a LUT never trained on deceptive alignment will never do it, one trained on it will do it, and a sufficiently powerful optimizer will always do it.)
arguments about updates wrt the “goal”: the deceptive alignment argument hinges a lot on “gradient of the goal” making sense. for example, when we argue that the gradient on the model can be decomposed into one component that updates the goal to be more correct and another component that updates the capabilities to be more deceptive, we are making this assumption. and even if we assume away path dependence, the complexity argument depends a lot on the model’s complexity being roughly equal to the complexity of the goal plus the complexity of general goal-seeking circuitry, independent of the goal.
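(an attempt at writing the “gradient of the goal” assumption down — the factorization and all the notation here are mine, very much a sketch:

```latex
% assumption: parameters factor (at least approximately) into a goal part and a
% capabilities part, and the gradient splits along that factorization:
\theta = (\theta_{\mathrm{goal}}, \theta_{\mathrm{cap}}),
\qquad
\nabla_\theta L =
  \big(\underbrace{\nabla_{\theta_{\mathrm{goal}}} L}_{\text{corrects the goal}},\;
       \underbrace{\nabla_{\theta_{\mathrm{cap}}} L}_{\text{e.g. better deception}}\big)
```

if no such factorization of $\theta$ exists even approximately, it’s unclear what “the gradient updates the goal” even means.)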
arguments about difficulty of disentangling correct and incorrect behaviors: there’s a dual of retargetability, which is something like the extent to which you can make narrow interventions on the behavior. (some kind of “anti-naturalness” argument)
[conjecture 1: retargetability == complexity can be decomposed == gradient of goal is meaningful. conjecture 2: the gradient of the goal being meaningful / the complexity decomposition implies deceptive alignment (maybe we can also find some necessary condition?)]
how do we formalize retargetability?
maybe something like: there exists a homeomorphism from the goal space to the space of NNs, mapping each goal to an NN with that goal
problem: doesn’t really feel very satisfying and doesn’t work at all for discrete things
maybe complexity: retargetable if there’s a really simple map from goals to NNs with those goals, conditional on being given another NN that already pursues some goal (i.e. low conditional complexity of the retargeted NN given the original)
problem: the map that just trains another NN from scratch on the new goal, ignoring the given NN entirely, could itself be quite simple to describe
maybe complexity+time: it seems reasonable to assume retraining is expensive in compute even when it’s simple to describe, so penalizing time should rule out the retrain-from-scratch map (and maybe for decomposability we should also consider complexity+time). rough sketch below:
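(a stab at the complexity+time version — $\pi_g$ for an NN pursuing goal $g$, $K^t$ for some time-penalized conditional complexity; all notation mine and nothing here is settled:

```latex
% pi_g : an NN pursuing goal g;  K^t : time-penalized conditional complexity
% (e.g. Levin-style, charging for the runtime of the transformation on top of
% its description length)
\pi \text{ is retargetable} \;\iff\;
  \forall g' :\; K^t\!\big(\pi_{g'} \,\big|\, \pi,\, g'\big) \le c
  \quad \text{for some small } c
% plain conditional K fails here: "ignore pi, retrain from scratch on g'" is a
% short program; the time penalty is exactly what makes that map expensive.
```

)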
random idea: the hypothesis that complexity can be approximately decomposed into a goal component and a reasoning component is maybe a good formalization of (a weak version of) orthogonality?
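(trying to write that down, with the same caveats on notation as above:

```latex
% weak orthogonality as approximate additivity of description length:
% for (almost) all goals g in some large class G,
K(\pi_g) \;\approx\; K(g) + C_{\mathrm{reasoning}}
% where C_reasoning is the complexity of the general goal-seeking machinery
% and does not depend on g.
```

)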