Instrumentality exists on the simulacra level, not the simulator level. This would suggest that corrigibility could be maintained by establishing a corrigible character in context. Not clear on the practical implications.
That one, yup. The moment you start conditioning (through prompting, fine tuning, or otherwise) the predictor into narrower spaces of action, you can induce predictions corresponding to longer term goals and instrumental behavior. Effective longer-term planning requires greater capability, so one should expect this kind of thing to be more apparent as models get stronger even as the base models can be correctly claimed to have ‘zero’ instrumentality.
In other words, the claims about simulators here are quite narrow. It’s pretty easy to end up thinking that this is useless if the apparent-nice-property gets deleted the moment you use the thing, but I’d argue that this is actually still a really good foundation. A longer version was the goal agnosticism FAQ, and there’s this RL comment poking at some adjacent and relevant intuitions, but I haven’t written up how all the pieces come together. A short version would be that I’m pretty optimistic at the moment about what path to capabilities greedy incentives are going to push us down, and I strongly suspect that the scariest possible architectures/techniques are actually repulsive to the optimizer-that-the-AI-industry-is.
A short version would be that I’m pretty optimistic at the moment about what path to capabilities greedy incentives are going to push us down, and I strongly suspect that the scariest possible architectures/techniques are actually repulsive to the optimizer-that-the-AI-industry-is.
To uncover the generators of this, I think one of the reasons for this is because inductive biases turned out to matter little, enabling you to avoid having to do simulated evolution, which is where I think a lot of danger lies, combined with sparse RL not generally working very well on low compute, and AI early on needing a surprising amount of structure/world models, allowing you to somewhat safely automate research.
That one, yup. The moment you start conditioning (through prompting, fine tuning, or otherwise) the predictor into narrower spaces of action, you can induce predictions corresponding to longer term goals and instrumental behavior. Effective longer-term planning requires greater capability, so one should expect this kind of thing to be more apparent as models get stronger even as the base models can be correctly claimed to have ‘zero’ instrumentality.
In other words, the claims about simulators here are quite narrow. It’s pretty easy to end up thinking that this is useless if the apparent-nice-property gets deleted the moment you use the thing, but I’d argue that this is actually still a really good foundation. A longer version was the goal agnosticism FAQ, and there’s this RL comment poking at some adjacent and relevant intuitions, but I haven’t written up how all the pieces come together. A short version would be that I’m pretty optimistic at the moment about what path to capabilities greedy incentives are going to push us down, and I strongly suspect that the scariest possible architectures/techniques are actually repulsive to the optimizer-that-the-AI-industry-is.
To uncover the generators of this, I think one of the reasons for this is because inductive biases turned out to matter little, enabling you to avoid having to do simulated evolution, which is where I think a lot of danger lies, combined with sparse RL not generally working very well on low compute, and AI early on needing a surprising amount of structure/world models, allowing you to somewhat safely automate research.