Let’s suppose your model takes a bad action. Why? Either the model is aligned but incapable of deducing the good action, or the model is misaligned and incapable of deducing the deceptively good action. In both cases, the gradient update provides information about capabilities, not about alignment. The hypothetical homunculus doesn’t need to be “immune”: it isn’t affected in the first place.
The other way around: suppose you observe the model taking a good action. Why? It can be an aligned model taking a genuinely good action, or a misaligned model taking a deceptive one. In both cases you observe capabilities, not alignment.
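To put the same point in Bayesian terms (a minimal sketch with notation I’m introducing here, not from the original argument): if an aligned-and-capable model and a deceptive-and-capable model produce good actions with roughly equal probability, then

$$\frac{P(\text{aligned} \mid \text{good action})}{P(\text{deceptive} \mid \text{good action})} = \underbrace{\frac{P(\text{good action} \mid \text{aligned})}{P(\text{good action} \mid \text{deceptive})}}_{\approx\, 1} \cdot \frac{P(\text{aligned})}{P(\text{deceptive})} \approx \frac{P(\text{aligned})}{P(\text{deceptive})}$$

The likelihood ratio is approximately 1, so the posterior odds stay at the prior odds: the observation tells you about capability, not alignment.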
The problem here is not the prior over aligned/deceptive models (unless you think this prior requires less than 1 bit to specify the aligned model, at which point I’d say optimism departs from sanity); the problem is our lack of understanding of the updates that are supposed to make the model aligned. Maybe prosaic alignment works, maybe it doesn’t; we don’t know how to check.
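For concreteness on the “1 bit” claim (my arithmetic, not spelled out in the original): under a description-length prior, a hypothesis that takes $k$ bits to specify gets prior probability on the order of $2^{-k}$, so “less than 1 bit to specify the aligned model” means

$$P(\text{aligned}) > 2^{-1} = \tfrac{1}{2}$$

i.e. assuming that alignment is a priori more likely than not, before any evidence.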