There are several problems here, IMO, but one is that AIXI is being taken rather too seriously.
AIXI is an RL agent with no real conception of its own aims; it looks as though it will probably wirehead itself at the first available opportunity, and so fail to turn the universe into paperclips.
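To make the wireheading worry concrete, here is a toy sketch (not AIXI itself, and much simpler): a myopic agent that simply picks the action with the highest expected observed reward. The action names and reward values are invented for illustration; the point is only that if the agent's model says "tamper with my own reward channel" yields the maximal signal, a pure reward maximizer will choose it over doing any real work.

```python
# Toy illustration of reward-channel tampering, with made-up actions.
# A myopic reward maximizer ranks actions by expected observed reward.

def best_action(expected_reward):
    """Return the action with the highest expected observed reward."""
    return max(expected_reward, key=expected_reward.get)

expected_reward = {
    "make_paperclips": 10.0,       # pursue the nominal goal
    "do_nothing": 0.0,
    "wirehead": float("inf"),      # set the reward register to its maximum
}

print(best_action(expected_reward))
```

Because the agent values only the number it observes, not the state of the world that number was meant to track, the tampering action dominates everything else.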
Wouldn’t a wirehead AI still kill us all, either in order to increase the size of the numbers it can take in or to prevent the feed of high numbers from being switched off?
Possibly. It may depend on the wirehead.
Heroin addicts sometimes engage in crime—but at other times, they are “on the nod”—and their danger level varies accordingly.
It seems quite possible to imagine a class of machine that loses motivation to do anything—once it has “adjusted” itself.
Make that a large class of all possible optimizers.