Interestingly, the problems of AIXI are not much different from the corresponding ones for human rationality:
immortalism: humans also don’t grasp death on any deeper level than AIXI does. They too drop anvils on their heads, so to speak; i.e., they misinterpret reality either (a) as less dangerous than ‘expected’, or ignore the danger outright (especially small children), or (b) as containing an afterlife (a kind of later update against view (a)). This happens for the same reason it does in AIXI: symbolic reasoning about reality. (A toy sketch of this point follows after this list.)
preference solipsism: the same applies here. Reasoning needs some priors, and these form from the body the mind is ‘trapped’ in. But the mind doesn’t necessarily believe that it is trapped. The body provides the ‘reward’ button, and that button can be hacked (drugs, optimizing for single values).
lack of self-improvement: to improve itself, AIXI has to be taught via reward what to prefer, and its means of achieving this therefore follow a complex causal chain. It could work, but it is no direct approach. Humans also cannot directly improve themselves: the desire (reward) for self-improvement has to be channeled into neurobiological research, and that into cybernetic (or whatever) enhancements (or uploading).
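A minimal sketch of the immortalism point, assuming a toy Bayesian predictor of my own construction (the hypotheses and numbers below are illustrative, not from the thread or from Hutter’s formalism): a predictor whose hypothesis class only contains generators that always emit a next percept can never come to expect its percept stream to end, no matter what it observes.

```python
# Toy model of the "immortalism" problem: a Bayesian predictor whose
# hypothesis class only contains generators that always emit a next
# percept. "No next percept" (death) is not an event in any
# hypothesis, so its posterior probability is pinned at zero.

PERCEPTS = ["safe", "pain"]

# Each hypothesis gives a distribution over the next percept at step t.
# Every distribution sums to 1 over PERCEPTS, leaving no mass for
# "the stream stops".
HYPOTHESES = {
    "benign":  lambda t: {"safe": 0.9, "pain": 0.1},
    "hostile": lambda t: {"safe": 0.2, "pain": 0.8},
}

def posterior(history):
    """Bayes-update a uniform prior over HYPOTHESES on the history."""
    weights = {}
    for name, h in HYPOTHESES.items():
        w = 1.0 / len(HYPOTHESES)
        for t, percept in enumerate(history):
            w *= h(t)[percept]
        weights[name] = w
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

def p_stream_ends(history):
    """Posterior probability that no percept follows the history.
    Every hypothesis puts all of its mass on some next percept,
    so this is zero whatever the evidence says."""
    post = posterior(history)
    mass_on_next_percept = sum(
        post[name] * sum(HYPOTHESES[name](len(history)).values())
        for name in HYPOTHESES
    )
    return 1.0 - mass_on_next_percept

# A hundred steps of pure pain do not budge the belief:
print(p_stream_ends(["pain"] * 100))  # ~0.0 (up to float rounding)
```

The predictor can learn that the world is ‘hostile’, but ‘I stop receiving percepts’ is never a live hypothesis for it; the anvil-problem claim is that AIXI’s environment class has this shape.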
humans also don’t grasp death on any deeper level than AIXI does. They too drop anvils on their heads, so to speak
If human adults didn’t grasp death any better than AIXI does, they’d routinely drop anvils on their heads literally, not ‘so to speak’.
This happens for the same reason it does in AIXI: symbolic reasoning about reality.
What do you mean? What would be the alternative to ‘symbolic reasoning’?
The body provides the ‘reward’ button, and that button can be hacked (drugs, optimizing for single values).
If a smart AI values things about the world outside its head, it won’t deliberately hack itself (e.g., it won’t alter its hardware to entertain happy delusions), because it won’t expect a policy of self-hacking to make the world actually better. It’s the actual world it cares about, not its beliefs about, preferences over, or enjoyable experiences of the world.
The problem with AIXI isn’t that it lacks the data or technology needed to self-modify. It’s that it has an unrealistic prior. These aren’t problems shared by humans. Humans form approximately accurate models of how new drugs, food, injuries, etc. will affect their minds, and respond accordingly. They don’t always do so, but AIXI is special because it can never do so, even when given unboundedly great computing power and arbitrarily large supplies of representative data.
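A minimal decision-theoretic sketch of the self-hacking point above, with made-up numbers (the actions and payoffs are purely illustrative, not part of any formalism): an agent that maximizes its reward signal prefers the action that corrupts the signal, while an agent that maximizes a utility function over world states does not, because hacking leaves the world itself worse.

```python
# Two toy agents score the same two actions. "work" improves the
# world a little and yields a modest reward signal; "hack" pins the
# reward channel to its maximum but leaves the world slightly worse.
# All numbers are illustrative.

OUTCOMES = {
    # action: (world_quality, observed_reward)
    "work": (0.8, 0.6),
    "hack": (0.3, 1.0),
}

def reward_maximizer(action):
    """Cares only about the signal arriving on its input channel."""
    _world, observed_reward = OUTCOMES[action]
    return observed_reward

def world_utility_maximizer(action):
    """Cares about the state of the world itself; the reward channel
    is just one more (corruptible) sensor."""
    world_quality, _signal = OUTCOMES[action]
    return world_quality

for agent in (reward_maximizer, world_utility_maximizer):
    print(f"{agent.__name__} picks: {max(OUTCOMES, key=agent)}")
# reward_maximizer picks: hack
# world_utility_maximizer picks: work
```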
If human adults didn’t grasp death any better than AIXI does, they’d routinely drop anvils on their heads literally, not ‘so to speak’.
AIXI doesn’t necessarily drop an anvil on its head. It just doesn’t believe that its input sequence can ever stop, no matter what happens. This seems to me like what the vast majority of humans believe.
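Whether ‘the input sequence can never stop’ is actually forced by the formalism depends on a detail worth flagging (standard material from Li & Vitányi and Hutter, not something either commenter states): Solomonoff’s universal prior M is only a semimeasure, and its probability deficiency has a natural reading as the chance that the sequence simply ends.

```latex
% Solomonoff's universal prior M is a semimeasure, not a measure:
% for every history y over the percept alphabet \mathcal{X},
\sum_{x \in \mathcal{X}} M(yx) \;\le\; M(y).
% The deficiency can be read as the prior weight on "no percept
% follows y", i.e. on the stream stopping:
\Pr[\text{stream ends after } y] \;=\; M(y) - \sum_{x \in \mathcal{X}} M(yx) \;\ge\; 0.
% An AIXI variant that renormalizes M into a proper measure throws
% this mass away, and then "my percepts end" really is a hypothesis
% the agent can never assign positive probability.
```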
For clarity: are you referring to belief in an afterlife/reincarnation? Or are you saying that most humans are not mindful most of the time of their own mortality?
AIXI is special because it can never do so, even when given unboundedly great computing power and arbitrarily large supplies of representative data.
You keep saying things like this. Why are you so convinced that “wrong” epistemology has shorter K-complexity than an epistemology capable of knowing that it’s embodied? What are the causes of your knowledge that you are embodied?
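For reference, the object both sides are arguing about (Hutter’s AIXI, stated here for orientation; the thread itself never writes it out): actions are chosen by expectimax over a mixture of all environment programs q, each weighted by 2^(-ℓ(q)), so shorter programs get exponentially more prior weight. The K-complexity question above is then whether the shortest programs fitting the data model the agent’s own hardware as part of the environment, or are ‘Cartesian’ ones on which the percept stream cannot stop.

```latex
% AIXI's action choice in cycle k with horizon m (after Hutter 2005):
a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \;\cdots\; \max_{a_m} \sum_{o_m r_m}
  \bigl( r_k + \dots + r_m \bigr)
  \sum_{q \,:\, U(q,\, a_1 \dots a_m) \,=\, o_1 r_1 \dots o_m r_m} 2^{-\ell(q)}
% Each q is a program for the universal monotone machine U, mapping
% the agent's actions to percepts o_i r_i. The agent's own
% computation is not part of any q: this is the "Cartesian barrier"
% at issue, and the dispute is over how much total 2^{-\ell(q)}
% weight falls on hypotheses q that model the agent's embodiment.
```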
Humans form approximately accurate models of how new drugs, food, injuries, etc. will affect their minds, and respond accordingly.
If you coerce AIXI with sufficiently tricky rewards into forming ‘approximately accurate models’ (and that is exactly what our evolved body does with our developing brain), AIXI will also respond accordingly. Except:
They don’t always do so
when it doesn’t do so either, that is because it has learned that it can get around this coercion. The same goes for humans, who may likewise come to think that they can get around their body and go to heaven, take drugs...
If human adults didn’t grasp death any better than AIXI does, they’d routinely drop anvils on their heads literally, not ‘so to speak’.
AIXI wouldn’t drop one either, if you coerced it the way our body (and society) coerces us.
This happens for the same reason it does in AIXI: symbolic reasoning about reality.
What do you mean? What would be the alternative to ‘symbolic reasoning’?
I’m not saying that there is an alternative. I mean that symbolic reasoning needs some base: axioms, goal states. Where do you get these from? In the human brain they form as stabilizing neural nets, i.e. as approximations of vague, interrelated representations of reality. But you have no cognitive access to this fuzzy-to-symbolic relation, only to its mentalese correlate: the symbols you reason with. Whatever you derive from the symbols is separated from reality in the same way as by AIXI’s Cartesian barrier.
Added: See http://lesswrong.com/lw/ii5/baseline_of_my_opinion_on_lw_topics/
For clarity: are you referring to belief in an afterlife/reincarnation? Or are you saying that most humans are not mindful most of the time of their own mortality?
I am referring to an afterlife of some kind.
You keep saying things like this. Why are you so convinced that “wrong” epistemology has shorter K-complexity than an epistemology capable of knowing that it’s embodied? What are the causes of your knowledge that you are embodied?
I disagree.