From what I understand about AIXI specifically, I agree. But it seems that the most horrible thing that could be built from the corpse of this idea would be an AI architecture capable of evaluating its utility function on hypothetical states of the world it predicts, so that it can compute and choose the actions that maximise expected utility.
That’s hard. You’d have to teach AIXI what a “paperclip” is in every mathematically possible “world”, or somehow single out our “world”. Both problems seem to be beyond our reach at the moment.
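For concreteness, here is a minimal sketch of the loop the first comment describes, assuming a toy stochastic world model and a toy utility function (both are illustrative stand-ins of my own, not anything from AIXI itself): the agent rolls its model forward for each candidate action, evaluates its utility function on the predicted hypothetical states, and picks the action with the highest expected utility.

```python
import random

def predict_next_states(state, action, n_samples=100):
    """Sample hypothetical successor states from a stochastic world model.

    Stand-in dynamics: the architecture under discussion would use a
    learned (or Solomonoff-style) predictor here instead.
    """
    return [state + action + random.gauss(0, 1) for _ in range(n_samples)]

def utility(state):
    """Utility evaluated directly on a (predicted) world state.

    Toy stand-in: prefers states near 10. Specifying this correctly over
    every possible "world" is exactly the hard part the reply points at.
    """
    return -abs(state - 10)

def choose_action(state, actions):
    """Choose the action that maximises expected utility over predictions."""
    def expected_utility(action):
        samples = predict_next_states(state, action)
        return sum(utility(s) for s in samples) / len(samples)
    return max(actions, key=expected_utility)

if __name__ == "__main__":
    # From state 0, the agent should favour the largest step toward 10.
    print(choose_action(state=0.0, actions=[-1.0, 0.0, 1.0, 5.0]))
```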