Actually, it’s quite surprising that nobody who (publicly) cares about AI risk has, to the best of my knowledge, even tried to extend the AIXI framework to incorporate some notion of friendliness...
UDT can be seen as just this. It was partly inspired/influenced by AIXI anyway, if not exactly an extension of it. Edit: It doesn’t incorporate a notion of friendliness yet, but is structured so that unlike AIXI, at least in principle such a notion could be incorporated. See the last paragraph of Towards a New Decision Theory for some idea of how to do this.