“Outer alignment” entails having a ground-truth reward function that spits out rewards that agree with what we want. “Inner alignment” is having a learned value function that estimates the value of a plan in a way that agrees with its eventual reward.
I just briefly want to flag that I think this summary of inner-vs-outer alignment is confusing, in that it makes it sound like one could have a good-enough ground-truth reward function and then the only remaining problem is getting that reward internalized.
I think this summary is better: either (1) "The AGI was doing the wrong thing but got rewarded anyway (or was doing the right thing but got punished)", or (2) something else went wrong [not easily compressible].