I think there is an additional effect related to “optimization is not conditioning” that stems from the fact that causation is not correlation. Suppose for argument’s sake that people evaluate alignment research partly based on where it’s come from (which the machine cannot control). Then producing good alignment research by regular standards is not enough to get high ratings. If a system manages to get good ratings anyway, then the actual papers it’s producing must be quite different from typical highly rated alignment papers, because they are somehow compensating for the penalty incurred by coming from the wrong source. In such a situation, I think it would not be surprising if the previously observed relationship between ratings and quality failed to continue to hold.
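The selection effect described above can be sketched in a small toy simulation. Everything here is an illustrative assumption, not something from the discussion: a linear rating model (`quality + polish`), a quality-irrelevant but rating-relevant `polish` feature, and a fixed `source_penalty` applied to machine-sourced papers.

```python
import random
import statistics as st

# Toy model (all numbers are illustrative assumptions):
#   quality: latent research quality
#   polish:  rating-relevant but quality-irrelevant feature
#   rating = quality + polish, minus a penalty if the paper is machine-sourced
random.seed(0)
n = 200_000
papers = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]

source_penalty = 1.5   # assumed: raters discount machine-sourced work
threshold = 2.0        # assumed: what counts as a "good rating"

top_human = [(q, p) for q, p in papers if q + p >= threshold]
top_machine = [(q, p) for q, p in papers if q + p - source_penalty >= threshold]

# Machine papers that clear the bar anyway must compensate for the penalty,
# so the selected set is shifted relative to typical top-rated papers:
mean_q_h = st.mean(q for q, _ in top_human)
mean_q_m = st.mean(q for q, _ in top_machine)
mean_p_h = st.mean(p for _, p in top_human)
mean_p_m = st.mean(p for _, p in top_machine)
print(f"quality | top human: {mean_q_h:.2f}  top machine: {mean_q_m:.2f}")
print(f"polish  | top human: {mean_p_h:.2f}  top machine: {mean_p_m:.2f}")
```

The top-rated machine papers sit further out on both features than typical top-rated papers do, so a quality estimate calibrated on the human-sourced relationship between ratings and quality would be miscalibrated on them, even though nothing here is naturally described as "intervening on" the ratings.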
This is similar to “causal Goodhart” in Garrabrant’s taxonomy, but I don’t think it’s quite identical. It’s ambiguous whether ratings are being “intervened on” in this situation, and actual quality is probably going to be affected somewhat. I could see it as a generalised version of causal Goodhart, where intervening on the proxy is what happens when this effect is particularly extreme.
I think this is more like Extremal Goodhart in Garrabrant’s taxonomy: there’s a distributional shift inherent to high U.
Maybe it’s similar, but high U is not necessary: the shift here comes from compensating for the source penalty, which applies at any rating level, not only in the extreme tail.