A key issue with distributional shift is that neural nets assign significant “probability” to “crazy” hypotheses. Imagine that we want to train a neural net to classify dog breeds, and in our training dataset D all huskies are on snow, but on the test dataset D’ they may also be on grass. Then a neural net is perfectly happy with the hypothesis “if most of the bottom half of the image is white, then it is a husky”, whereas humans would see that as crazy and would much prefer the hypothesis “a husky is a large, fluffy, wolf-like dog”, _even if they don’t know what a husky looks like_.
Thus, we might say that the human “prior” over hypotheses is much better than the corresponding neural net “prior”. So, let’s optimize our model using the human prior instead. In particular, we search for a hypothesis such that 1) humans think the hypothesis is likely (high human prior), and 2) the hypothesis leads humans to make good predictions on the training dataset D. Once we have this hypothesis, we have humans make predictions using that hypothesis on part of the test distribution D’, and train a model to imitate these predictions. We can then use this model to predict on the rest of D’. Notably, this model is now being used in an iid way (i.e. no distributional shift).
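To make the recipe concrete, here is a minimal sketch in PyTorch of one way the optimization could be wired up. Everything in it is an illustrative assumption rather than part of the proposal: the hypothesis is represented as a simple latent vector `z`, and the human prior and human predictions are stood in for by placeholder models (`human_prior_logprob`, `HumanPredictor`) that a real system would have to ground in actual human feedback.

```python
import torch
import torch.nn as nn

Z_DIM = 64

# Stand-in for how plausible humans find a hypothesis z (hypothetical model).
human_prior_logprob = nn.Sequential(
    nn.Linear(Z_DIM, 128), nn.ReLU(), nn.Linear(128, 1)
)

class HumanPredictor(nn.Module):
    """Stand-in for the prediction a human would make for input x given hypothesis z."""
    def __init__(self, x_dim: int, n_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + Z_DIM, 256), nn.ReLU(), nn.Linear(256, n_classes)
        )

    def forward(self, x, z):
        z_expanded = z.expand(x.shape[0], -1)
        return self.net(torch.cat([x, z_expanded], dim=-1))  # class logits

def fit_hypothesis(human_pred, train_x, train_y, steps=1000, lr=1e-2):
    """Step 1: find z maximizing (human prior on z) + (likelihood of D under human predictions with z)."""
    z = torch.zeros(1, Z_DIM, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    nll = nn.CrossEntropyLoss(reduction="sum")
    for _ in range(steps):
        opt.zero_grad()
        # Minimize negative log-likelihood on D minus the human prior's log-probability of z.
        loss = nll(human_pred(train_x, z), train_y) - human_prior_logprob(z).squeeze()
        loss.backward()
        opt.step()
    return z.detach()

def distill_and_predict(human_pred, z, labeled_x_new, rest_x_new, x_dim, n_classes):
    """Steps 2-3: imitate human predictions (made with z) on part of D', then predict the rest iid."""
    model = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(), nn.Linear(256, n_classes))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    ce = nn.CrossEntropyLoss()
    # "Human" labels on a subset of D', produced using the learned hypothesis z.
    targets = human_pred(labeled_x_new, z).argmax(dim=-1)
    for _ in range(500):
        opt.zero_grad()
        ce(model(labeled_x_new), targets).backward()
        opt.step()
    # The distilled model now predicts the remainder of D' without distributional shift.
    return model(rest_x_new).argmax(dim=-1)
```

The point of the sketch is only to show where the two terms of the objective (human prior, human likelihood on D) and the final iid distillation step sit relative to each other; the hard part, as discussed next, is what the hypothesis actually is.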
A key challenge here is how to represent the hypotheses that we’re optimizing over—they need to be amenable to ML-based optimization, but they also need to be interpretable to humans. A text-based hypothesis would likely be too cumbersome; it is possible that neural-net-based hypotheses could work if augmented by interpretability tools that let the humans understand the “knowledge” in the neural net (this is similar in spirit to <@Microscope AI@>(@Chris Olah’s views on AGI safety@)).
Planned summary for the Alignment Newsletter: