Thanks for the replies; they helped clarify how you would maintain the system, but my original objections still stand. Can an AI raised in an illusory universe really provide a good model for how to build one in our own?
Sure, there’s no inherent difference. And besides, most AIs will necessarily have to be raised and live entirely in simulated VR universes anyway, for purely economic and technological reasons.
And would it stay “in the box” long enough to complete this process before discovering us?
This idea takes safety to an extreme. The AI wouldn’t be able to leave the box: there are many strong protections, one of the strongest being that it wouldn’t even know it was in a box. And even if someone came along and told it that it was in fact in a box, it would be irrational for it to believe that person.
Again: are you in a box universe now? If you find the idea irrational, why?
It seems you are expecting that if a human-like race were merely allowed to evolve for long enough, they would eventually “optimize” morality and become something which is safe to use in our own world.
No. As I said, this type of AI would intentionally be an anthropomorphic, human-like design. ‘Morality’ is a complex social construct; if we built the simworld to be very close to our world, the AIs would have similar moralities.
However, we could also improve and shape their beliefs in a wide variety of ways.
If your simulation has ANY flaws they will be found, and sadly you will not have time to correct them when you are dealing with a superintelligence.
Your notion of superintelligence seems to be some magical being who can do anything you want it to. That being is a figment of your imagination. It will never be built; it is provably impossible to build, and it can’t even exist in theory.
There are absolute, provable limits to intelligence. Certain knowledge requires a certain amount of information. Even the hypothetical perfect superintelligence, AIXI, could only learn the knowledge which it is possible to learn as an observer inside its universe.
Snowyow’s recent post describes some of the limitations we are currently running into. They are not limitations of our intelligence.
Your last post supposes that problems can be corrected as they arise: for instance, an AI points a telescope at the sky, and details are added to the stars in order to maintain the illusion. But no human could do this fast enough.
Hmm, I would need to go into much more detail about current and projected computer graphics and simulation technology to give you a better background, but it’s not like some stage play where humans are creating stars dynamically.
The Matrix gives you some idea: a massive distributed simulation, technology related to current computer games but billions of times more powerful. A somewhat closer analog today would be the vast simulations the military uses to design new nuclear weapons and test them in simulated Earths.
The simulation would hold a vast, accurate record of the light incident on Earth, a collation of the best astronomical data. If you looked up into the heavens through a telescope, you would see exactly what you would see on our Earth. And remember, that is something of the worst case: simulating all of Earth and allowing the AIs to choose any career path and do whatever they like.
That is one approach that will become possible eventually, but in the earlier initial sims it is more likely that the real AIs would be a small subset of a simulated population, and you would influence them into certain career paths, and so on.
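To make the “collated astronomical data, not a stage play” point concrete, here is a toy sketch of how a sim might serve precomputed sky contents. The catalog entries and the `stars_in_view` helper are my own illustrative inventions (and the coordinates are only roughly right); a real sim would index terabytes of survey data, but the key point is that answering a telescope query is just data retrieval, not on-the-fly invention by humans:

```python
# Toy sketch: the sim serves a precomputed astronomical catalog.
# Entries are (name, right_ascension_deg, declination_deg, magnitude);
# values here are rough/illustrative, not survey-grade data.

CATALOG = [
    ("Sirius",  101.287, -16.716, -1.46),
    ("Vega",    279.235,  38.784,  0.03),
    ("Polaris",  37.955,  89.264,  1.98),
]

def stars_in_view(ra_deg: float, dec_deg: float, fov_deg: float):
    """Return catalog stars inside a square field of view centred on
    (ra_deg, dec_deg). Deliberately ignores RA wraparound and the
    cos(dec) factor; this is a lookup sketch, not real astronomy."""
    half = fov_deg / 2
    return [
        s for s in CATALOG
        if abs(s[1] - ra_deg) <= half and abs(s[2] - dec_deg) <= half
    ]

print(stars_in_view(38.0, 89.0, 5.0))   # query near the celestial pole
```

Whatever direction the simulated telescope points, the answer comes out of the prebuilt dataset at lookup speed, which is why no human needs to be in the loop.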
In order to maintain this world, you would need to already have a successful FAI.
Not at all. We will already be developing this simulation technology for film and games, and we will want to live in ultra-realistic virtual realities eventually anyway, when we upload.
None of this requires FAI.
And about your comment “for example, AIXI can not escape from a pac-man universe”: how can you be sure?
There is provably not enough information inside the pac-man universe. We can be as sure of that as we are that 2+2=4.
This follows from Solomonoff induction and the universal prior, but in simplistic terms you can think of it as Occam’s razor. The pac-man universe is fully explained by a simple set of consistent rules. There are infinitely many more complex sets of rules that could also describe the pac-man universe. Thus even an infinite superintelligence does not have enough information to know whether it lives in just the pac-man universe or in one of an exponentially exploding set of more complex universes, such as:
a universe described by string theory, in which apes evolve into humans, who create computers, invent pac-man, then invent AIXI and trap AIXI in a pac-man universe. (Ridiculous!)
So faced with an exponentially exploding infinite set of possible universes that are all equally consistent with your extremely limited observational knowledge, the only thing you can do is pick the simplest hypothesis.
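The argument above can be sketched numerically. Under a Solomonoff-style universal prior, a hypothesis describable by a program of length L bits gets prior weight proportional to 2^-L; when every hypothesis fits the observations equally well, the posterior is just the normalized prior. The program lengths below are made-up numbers purely for illustration, not real complexity measurements:

```python
# Illustrative Occam/Solomonoff weighting (not real AIXI).
# Each entry: candidate "universe program" -> hypothetical length in bits.
hypotheses = {
    "plain pac-man rules": 100,
    "pac-man rules plus one hidden extra rule": 120,
    "string theory -> apes -> humans -> AIXI boxed in pac-man": 10_000,
}

# Universal-prior style weight: P(h) proportional to 2 ** -length(h).
# All hypotheses predict the agent's observations, so no likelihood term.
weights = {h: 2.0 ** -bits for h, bits in hypotheses.items()}
total = sum(weights.values())
posterior = {h: w / total for h, w in weights.items()}

for h, p in posterior.items():
    print(f"{h}: {p:.3g}")
```

The 100-bit hypothesis soaks up essentially all of the posterior mass, and the elaborate escape-relevant hypothesis is penalized by a factor of roughly 2^-9900, which is the quantitative sense in which the boxed agent “cannot know” it is boxed.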
Flip it around and ask it of yourself: how do you know you are not currently in a sandboxed simulated universe?
You don’t. You can’t possibly know for sure, no matter how intelligent you are, because the space of possible explanations expands exponentially and is infinite.