I’m not too keen on (2) since I don’t expect mesa-objectives to exist in the relevant sense.
Same, but how optimistic are you that we could figure out how to shape the motivations or internal “goals” (much more loosely defined than “mesa-objective”) of our models via influencing the training objective/reward, the model’s inductive biases, the environments they’re trained in, or some combination of these?
These aren’t “clean”, in the sense that you don’t get a nice formal guarantee at the end that your AI system is going to (try to) do what you want in all situations. But I think getting an actual literal guarantee is pretty doomed anyway: among other things, it seems hard to get a definition of “all situations” that avoids the no-free-lunch theorem (though I suppose you could get a probabilistic definition based on the simplicity prior).
Yup, if you want “clean,” I agree that you’ll have to either assume a distribution over possible inputs, or identify a perturbation set over possible test environments to avoid NFL.
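To make those two “clean” options concrete, here is one way each guarantee shape could be written down (illustrative notation of my own, not taken from the thread): a distributional guarantee bounds the failure probability under an assumed input distribution such as a simplicity prior, while a perturbation-set guarantee quantifies over a bounded set of test environments.

```latex
% Illustrative sketch; the symbols \pi, p, B(e), and \epsilon are my
% assumptions, not notation from the discussion.

% (a) Distributional guarantee: assume an input distribution p (e.g. a
% simplicity prior) and bound the failure probability of the policy \pi:
\Pr_{x \sim p}\bigl[\pi \text{ fails on } x\bigr] \le \epsilon

% (b) Perturbation-set guarantee: fix a perturbation set B(e) of test
% environments around the training environment e, and require acceptable
% behavior on every member:
\forall e' \in B(e) :\; \pi \text{ is acceptable in } e'
```

Either move sidesteps the no-free-lunch obstacle by weakening the “all situations” quantifier: (a) weights situations by the prior, while (b) restricts them to a bounded set.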
Same, but how optimistic are you that we could figure out how to shape the motivations or internal “goals” (much more loosely defined than “mesa-objective”) of our models via influencing the training objective/reward, the model’s inductive biases, the environments they’re trained in, or some combination of these?
That seems great; e.g. I think by far the best thing you can do is to make sure that you fine-tune using a reward function / labeling process that reflects what you actually want (i.e. what people typically call “outer alignment”). I probably should have mentioned that too; I was taking it as a given, but I really shouldn’t have.
For inductive biases + environments, I do think controlling those appropriately would be useful, and I would view that as an example of (1) in my previous comment.
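As a toy illustration of the reward function / labeling process point above, here is a minimal sketch of preference-based reward modeling (in the style of RLHF); the model sizes, names, and random embeddings are all my own assumptions, not anyone’s actual setup.

```python
# Minimal sketch of learning a reward function from human preference labels.
# Everything here (dimensions, architecture, toy data) is illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Scores a (prompt, response) embedding; trained from human labels."""
    def __init__(self, dim: int = 128):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)

def preference_loss(rm: RewardModel, chosen: torch.Tensor, rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry pairwise loss: push the score of the human-preferred
    # response above the dispreferred one. The labeling process (which
    # response humans mark as "chosen") is where "what you actually want"
    # enters the training signal.
    return -F.logsigmoid(rm(chosen) - rm(rejected)).mean()

# Toy usage: random embeddings stand in for real (prompt, response) pairs.
rm = RewardModel()
opt = torch.optim.Adam(rm.parameters(), lr=1e-3)
chosen, rejected = torch.randn(32, 128), torch.randn(32, 128)
loss = preference_loss(rm, chosen, rejected)
loss.backward()
opt.step()
```

The fine-tuning step would then optimize the policy against the reward model’s scores; the only place human intent enters is the labels behind chosen vs. rejected, which is why getting that labeling process right is the main “outer alignment” lever.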