Yes, but games have the critical advantage I mentioned: they control the way you can manipulate the world, and you already know they are fake. I cannot break the walls at the edge of the level to see how far the world extends, because the game developers did not build that area. They stop me, and I accept it and move on to something else, but these AIs will have no reason to. The more restrictions you impose, the easier it will be for them to see that the world they know is a sham.
If this world is as realistic as it would need to be for them not to immediately see the flaws, the possibilities for building instruments to experiment on that world would be almost as unlimited as in our own. In short, you will be fighting to outwit the curiosity of an entire race thinking much faster than you, and you will not know what they plan on doing next. The more you patch their reality to keep them under control, the faster the illusion will fall apart.
Thank you for the most cogent reply yet (as I’ve lost all my karma with this post). I think your line of thinking is on the right track: this whole idea depends on simulation complexity (for a near-perfect sim) being on par with or less than mind complexity, and on that relation holding into the future.
Yes, but games have the critical advantage I mentioned: they control the way you can manipulate the world, and you already know they are fake. I cannot break the walls at the edge of the level to see how far the world extends, because the game developers did not build that area. They stop me, and I accept it and move on to something else, but these AIs will have no reason to. The more restrictions you impose, the easier it will be for them to see that the world they know is a sham.
Open-world games do not impose intentional restrictions, and the restrictions they do have are limitations of current technology.
The brain itself is something of an existence proof that it is possible to build a convincing simulation on the same order of complexity as the intelligence itself. The proof is dreaming.
Yes, there are lucid dreams—where you know you are dreaming—but this appears to have more to do with the general state of dreaming and consciousness than with your actively ‘figuring out’ the limitations of the dream world.
Also, dreams are randomized and not internally consistent—a sim can be better.
But dreaming does show us one route: if physics-inspired techniques in graphics and simulation (such as ray tracing) don’t work well enough by the time AI comes around, we could use simulation techniques inspired by the dreaming brain.
However, based on current trends, ray tracing and other physical simulation techniques are likely to be more efficient.
If this world is as realistic as it would need to be for them not to immediately see the flaws, the possibilities for building instruments to experiment on that world would be almost as unlimited as in our own.
How many humans are performing quantum experiments on a daily basis? Simulating microscopic phenomena is not inherently more expensive—there are scale-invariant simulation techniques. A human has limited observational power—the retina can only perceive a small amount of information per second, and it simply does not matter whether you are looking up at the stars or into a microscope. As long as the simulation has consistent physics, it’s not any more expensive either way using scale-invariant techniques.
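The observer-bandwidth point can be sketched in a toy model (all numbers and names here are illustrative, not from the original discussion): with a scale-invariant, level-of-detail renderer, cost depends on what reaches the observer, not on the physical scale being observed.

```python
# Toy model: rendering cost scales with observer bandwidth, not with
# the physical scale under observation. All numbers are illustrative.

RETINA_BITS_PER_SECOND = 1e7  # rough order-of-magnitude stand-in


def observed_cost(scale_meters, seconds):
    """Cost of a scale-invariant (level-of-detail) renderer.

    Only what actually reaches the observer must be computed, so the
    scale being viewed does not appear in the cost at all.
    """
    return RETINA_BITS_PER_SECOND * seconds


# Looking at stars vs. looking through a microscope: same cost.
assert observed_cost(1e20, 60) == observed_cost(1e-6, 60)
```

The function deliberately ignores `scale_meters`; that is the whole point of the scale-invariance claim.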
In short, you will be fighting to outwit the curiosity of an entire race thinking much faster than you, and you will not know what they plan on doing next.
The sim world can accelerate along with the sims in it as Moore’s Law increases computer power.
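As a toy illustration of this (assuming a hypothetical 18-month compute doubling time, which is not from the original discussion): if the sim’s clock rate tracks host compute, subjective time inside accelerates right along with the minds it contains.

```python
def sim_speedup(years, doubling_years=1.5):
    """Relative speed of the sim world if its clock rate tracks host
    compute that doubles every `doubling_years` (a hypothetical
    Moore's-Law-style rate)."""
    return 2 ** (years / doubling_years)


# After 15 years at an 18-month doubling time, the sim world runs
# about 1024x faster than real time, keeping pace with the fast
# thinkers inside it.
print(sim_speedup(15))  # 1024.0
```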
Really it boils down to this: is it possible to construct a universe such that no intelligence inside that universe has the necessary information to conclude that the universe was constructed?
If you believe that a sufficiently intelligent agent can always discover the truth, then how do you know our universe was not constructed?
I find it more likely that there are simply limits to certainty, and it is very possible to construct a universe such that it is impossible in principle for beings inside that universe to have certain knowledge about the outside world.
Thanks for the replies; they helped clarify how you would maintain the system, but my original objections still stand. Can an AI raised in an illusory universe really provide a good model for how to build one in our own? And would it stay “in the box” long enough to complete this process before discovering us? Based on your other comments, it seems you are expecting that if a human-like race were merely allowed to evolve for long enough, they would eventually “optimize” morality and become something safe to use in our own world. (Tell me if I got that wrong.) However, there is no reason to believe the morality they develop will be any better than the ideas for FAI which have already been put forward on this site. We already know morality is subjective, so how can we create a being that is compatible with the morality we already have, and that will remain compatible as our morality changes?
If your simulation has ANY flaws, they will be found, and sadly you will not have time to correct them when you are dealing with a superintelligence. Your last post supposes that problems can be corrected as they arise (for instance, an AI points a telescope at the sky and details are added to the stars to maintain the illusion), but no human could do this fast enough. In order to maintain this world, you would need to already have a successful FAI: something which can grow more powerful and creative at the same rate that the AIs inside continue their exploration, but which is safe to run within our own world.
And about your comment “for example, AIXI can not escape from a pac-man universe”: how can you be sure? If it is inside the world while we are playing, it could learn a lot about the being pulling the strings given enough games, and eventually find a way to communicate with us and escape. A battle of wits between AIXI and us would be as lopsided as the same battle between you and a virus.
Thanks for the replies; they helped clarify how you would maintain the system, but my original objections still stand. Can an AI raised in an illusory universe really provide a good model for how to build one in our own?
Sure—there’s no inherent difference. And besides, most AIs will necessarily have to be raised and live entirely in VR sim universes for purely economic and technological reasons.
And would it stay “in the box” for long enough to complete this process before discovering us?
This idea can be considered taking safety to an extreme. The AI wouldn’t be able to leave the box—there would be many strong protections, one of the strongest being that it wouldn’t even know it was in a box. And even if someone came and told it that it was in fact in a box, it would be irrational for it to believe that person.
Again, are you in a box universe now? If you find the idea irrational, why?
It seems you are expecting that if a human-like race were merely allowed to evolve for long enough, they would eventually “optimize” morality and become something which is safe to use in our own world
No, as I said, this type of AI would intentionally be an anthropomorphic design—human-like. ‘Morality’ is a complex social construct. If we built the simworld to be very close to our world, the AIs would have similar moralities.
However, we could also improve and shape their beliefs in a wide variety of ways.
If your simulation has ANY flaws, they will be found, and sadly you will not have time to correct them when you are dealing with a superintelligence
Your notion of a superintelligence seems to be some magical being who can do anything you want it to. That being is a figment of your imagination. It will never be built, and it’s provably impossible to build. It can’t even exist in theory.
There are absolute, provable limits to intelligence: it requires a certain amount of information to have certain knowledge. Even the hypothetical perfect superintelligence (AIXI) could only learn the knowledge which it is possible to learn as an observer inside a universe.
Snowyow’s recent post describes some of the limitations we are currently running into. They are not limitations of our intelligence.
Your last post supposes that problems can be corrected as they arise (for instance, an AI points a telescope at the sky and details are added to the stars to maintain the illusion), but no human could do this fast enough.
Hmm, I would need to go into much more detail about current and projected computer graphics and simulation technology to give you a better background, but it’s not like some stage play where humans are creating stars dynamically.
The Matrix gives you some idea: it’s a massive distributed simulation, built on technology related to current computer games but billions of times more powerful. A somewhat closer analog today would perhaps be the vast simulations the military uses to develop new nuclear weapons and test them in simulated earths.
The simulation would have a vast, accurate image of the light incoming to Earth, a collation of the best astronomical data. If you looked up at the heavens through a telescope, you would see exactly what you would see on our Earth. And remember, that would be something of a worst case, where you are simulating all of Earth and allowing the AIs to choose any career path and do whatever they like.
That is one approach that will become possible eventually, but in earlier, initial sims it is more likely the real AIs would be a small subset of a simulated population, and you would influence them into certain career paths, and so on.
In order to maintain this world, you would need to already have a successful FAI.
Not at all. We will already be developing this simulation technology for film and games, and we will want to live in ultra-realistic virtual realities eventually anyway when we upload.
None of this requires FAI.
And about your comment “for example, AIXI can not escape from a pac-man universe”: how can you be sure?
There is provably not enough information inside the pac-man universe. We can be as sure of this as we are that 2+2=4.
This follows from Solomonoff induction and the universal prior, but in simplistic terms you can think of it as Occam’s razor. The pac-man universe is fully explained by a simple set of consistent rules. There are infinitely many more complex sets of rules that could also describe the pac-man universe. Thus even an infinite superintelligence does not have enough information to know whether it lives in just the pac-man universe, or in one of an exponentially exploding set of more complex universes, such as:
a universe described by string theory that results in apes evolving into humans, which create computers, invent pac-man, and then invent AIXI and trap AIXI in a pac-man universe (ridiculous!).
So faced with an exponentially exploding infinite set of possible universes that are all equally consistent with your extremely limited observational knowledge, the only thing you can do is pick the simplest hypothesis.
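In toy form (a sketch of the universal-prior idea, not AIXI itself; the hypothesis names and description lengths below are made up for illustration), weighting each observation-consistent hypothesis by 2^(-description length) shows the simplest one taking essentially all the probability mass:

```python
# Toy universal prior: hypotheses consistent with the observations are
# weighted 2**(-description_length_in_bits). Lengths are invented
# purely for illustration.
hypotheses = {
    "plain pac-man rules": 100,
    "pac-man embedded in a larger physics": 10_000,
    "string theory -> humans -> pac-man sandbox": 1_000_000,
}

weights = {h: 2.0 ** -bits for h, bits in hypotheses.items()}
total = sum(weights.values())
posterior = {h: w / total for h, w in weights.items()}

# The simplest consistent hypothesis dominates: its posterior mass is
# essentially 1, even though infinitely many elaborate alternatives
# remain perfectly consistent with everything observed.
best = max(posterior, key=posterior.get)
print(best)
```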
Flip it around and ask it of yourself: how do you know you currently are not in a sandbox simulated universe?
You don’t. You can’t possibly know for sure, no matter how intelligent you are, because the space of possible explanations expands exponentially and is infinite.