You are mentioning some aspects that deter people from suicide or motivate them toward it. That is the whole point: suicide is a thinkable option. It just doesn’t happen very often because, no wonder, it is heavily selected against. There are lots of physical, psychological and social feedback mechanisms in place that ensure it happens seldom. But that is no different from providing comparable training to AIXI.
And it appears that despite all these checks it is still possible to navigate people around them (which is not much different from AIXI deriving solutions that evade its checks) and into committing suicide. I remember, for example, a news story (disclaimer!) where a cult fraudster convinced unhappy people to gift their wealth to some other person and then commit suicide, with the cultishly embellished promise that they’d awake in the body of that other person at another place. Now that wouldn’t convince me, but could it convince AIXI? (“questions ending with a ‘?’ mean no”)
You are mentioning some aspects that deter people from suicide or motivate them toward it. That is the whole point: suicide is a thinkable option.
Yes, but people generally know what it entails. We don’t want an AI agent to be completely incapable of destroying itself; we just don’t want it to destroy itself without a good cause.
Crashing its spaceship into an incoming asteroid to deflect it away from Earth would be a good cause, for instance.
a cult fraudster convinced unhappy people to gift their wealth to some other person and then commit suicide, with the cultishly embellished promise that they’d awake in the body of that other person at another place. Now that wouldn’t convince me, but could it convince AIXI?
If AIXI had a sufficient amount of experience of the world, I think it couldn’t.
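Here is a toy numerical sketch of that intuition. It is emphatically not AIXI proper (which mixes over all computable environments, weighted by description length); it is just two hand-picked hypotheses with invented numbers: a mundane world where self-destruction ends the reward stream, and the fraudster’s “scam world” where it pays off enormously. The point is that ordinary experience keeps contradicting the scam world, so its posterior weight, and with it the expected value of self-destruction, collapses.

```python
# Toy sketch of the Bayesian-mixture intuition above. NOT AIXI proper
# (which mixes over all computable environments weighted by description
# length); just two hand-picked hypotheses with invented numbers, purely
# to illustrate how experience discounts the fraudster's story.

NORMAL = {
    "prior": 0.95,                # most prior mass on the mundane world
    "reward_if_destroy": 0.0,     # death simply ends the reward stream
    "reward_if_live": 10.0,       # modest ongoing reward
    "fit_per_obs": 0.9,           # predicts everyday observations well
}
SCAM = {
    "prior": 0.05,                # the fraudster's story
    "reward_if_destroy": 1000.0,  # "you awake in the other body"
    "reward_if_live": 10.0,
    "fit_per_obs": 0.1,           # keeps mispredicting everyday life
}
MODELS = [NORMAL, SCAM]

def posterior(n_obs):
    """Bayes-update the mixture after n_obs mundane observations."""
    weights = [m["prior"] * m["fit_per_obs"] ** n_obs for m in MODELS]
    total = sum(weights)
    return [w / total for w in weights]

def expected(post, key):
    """Posterior-weighted expected reward of an action."""
    return sum(p * m[key] for p, m in zip(post, MODELS))

for n in (0, 1, 5, 20):
    post = posterior(n)
    print(f"after {n:2d} observations: P(scam)={post[1]:.5f}, "
          f"E[destroy]={expected(post, 'reward_if_destroy'):8.3f}, "
          f"E[live]={expected(post, 'reward_if_live'):.3f}")
```

With these assumed numbers, the inexperienced agent (zero observations) actually prefers self-destruction; after a single mundane observation it no longer does, and after twenty the scam hypothesis is negligible. That is the sense in which “a sufficient amount of experience of the world” protects AIXI.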