Thanks for reading!
Yes, you can think of it as having a non-corrigible, complicated utility function. The relevant utility function is the ‘aggregated utility’ defined in section 2. I think the ‘corrigible’ vs ‘non-corrigible’ question is partly verbal, since it depends on how you define ‘utility’; the non-verbal question is whether the resulting AI is safer.
Good idea, this is on my agenda!
Looking forward to reading up on geometric rationality in detail. On a quick first pass, it looks like geometric rationality is a bit different because it deviates from the axioms of VNM rationality by using random sampling. By contrast, utility aggregation is consistent with VNM rationality, because it just replaces the ordinary utility function with the aggregated utility.
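To make the contrast concrete, here is a minimal toy sketch (every name and utility value below is made up for illustration, and the "geometric-style" chooser is only a rough caricature of randomized choice, not an implementation of geometric rationality):

```python
import random

# Two hypothetical candidate utility functions over outcomes (illustrative only).
def u1(outcome): return {"A": 1.0, "B": 0.0}[outcome]
def u2(outcome): return {"A": 0.2, "B": 0.9}[outcome]

WEIGHTS = [0.5, 0.5]

def aggregated_utility(outcome):
    """A weighted sum of the candidate utilities. This is still a single
    ordinary utility function, so maximizing it is VNM-consistent."""
    return WEIGHTS[0] * u1(outcome) + WEIGHTS[1] * u2(outcome)

def utility_aggregation_choice(options):
    # Deterministically maximize the aggregated utility (standard VNM-style choice).
    return max(options, key=aggregated_utility)

def geometric_style_choice(options, rng=None):
    """Rough caricature of the randomized-choice idea: sample an option with
    probability proportional to its utility, rather than always taking the
    argmax -- which is the kind of behavior that departs from VNM maximization."""
    rng = rng or random.Random(0)
    weights = [aggregated_utility(o) for o in options]
    return rng.choices(options, weights=weights, k=1)[0]

options = ["A", "B"]
print(utility_aggregation_choice(options))  # always the argmax: "A"
print(geometric_style_choice(options))      # random, weighted toward "A"
```

The point of the sketch is just that the aggregation step produces one fixed utility function that an ordinary maximizer can use, whereas the randomized chooser's behavior can't in general be described as maximizing any single utility function over outcomes.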
Yep, that’s right! One complication is that the agent could perhaps behave this way even though it wasn’t designed to.