One idea I’ve been playing with is to have the seed AI make multiple simulations of the entire Earth (i.e., with different “random seeds”), run several years or decades into the future, and have a team of humans pick the best outcome to be released into the real world.
I don’t think that would work. The AI won’t be able to simulate any future Earth where itself or any comparable-intelligence AI exists, because to do so it would need to simulate itself and/or other similarly-smart entities faster than real-time. (In fact, if it turns out that the AI could potentially improve itself constantly over decades, it would need to simulate its smarter future self...)
It might be possible to simulate futures where the AI shuts down after it finishes the simulations (#), except that many of those simulations would likely reach points where another AI is turned on (e.g., by someone who doesn’t agree with the seed AI’s creators), and those points function as a “simulation event horizon”.
Note that a seed AI is really unlikely to be even close to good old Omega in power; it would merely be much smarter than a human. (For the purposes of this post I’m assuming on the order of a century for us to develop the seed AI; that doesn’t seem like enough time for humans to build something ridiculously smarter than themselves on their own, and it doesn’t seem safe to allow the seed AI to enhance itself much beyond that. We might be able to determine the safety of something a bit smarter than we can build ourselves, but that doesn’t seem likely for something a lot smarter.)
(#: Though that leaves the problem that the AI can’t know the initial state of the simulations with any precision until it finishes running them; presumably the psychological impact on humans of seeing some of the possible futures would not be insignificant. But let’s say it can find some kind of fixed point; a toy sketch follows.)
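To make the fixed-point idea concrete, here is a toy sketch of what “finding a fixed point” could mean here; nothing in it is from the original discussion, and every name and number is a hypothetical stand-in. The initial state the simulation assumes must match the state the world actually ends up in once people have reacted to the simulation’s output, so one can iterate until the two agree:

```python
# Toy sketch only: `simulate` stands in for running the futures, and
# `world_after_seeing` for how seeing those futures changes the initial
# state the simulation should have started from. Both are made up.
def simulate(initial_state):
    # Hypothetical stand-in: the simulated outcome as a function of the start state.
    return 0.5 * initial_state + 1.0

def world_after_seeing(outcome):
    # Hypothetical stand-in: the psychological impact of the outcome on the world.
    return 0.8 * outcome

def find_fixed_point(state, tolerance=1e-9, max_iters=1000):
    # Iterate until the state assumed at the start matches the state
    # actually produced by people reacting to the simulation's output.
    for _ in range(max_iters):
        next_state = world_after_seeing(simulate(state))
        if abs(next_state - state) < tolerance:
            return next_state
        state = next_state
    raise RuntimeError("no fixed point found within max_iters")

print(find_fixed_point(0.0))  # converges to 0.8 / (1 - 0.4) ≈ 1.333
```

This converges here only because the toy functions form a contraction; whether anything like that holds for “the future history of the Earth” is exactly what’s in question.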
“The AI won’t be able to simulate any future Earth where itself or any comparable-intelligence AI exists, because to do so it would need to simulate itself and/or other similarly-smart entities faster than real-time.”
Only if the AI is itself using up a sizeable fraction of the available resources.
Let’s do a thought experiment to see what I mean:
The AI runs on some putative hardware with some clock speed X (some multiple of GHz, or petahertz, or whatever). The hardware has some amount of storage Y (some multiple of GB, or petabytes, etc.).
Let’s say the AI only uses 1% of Y. It can then run up to 99 instances of itself in parallel, each with different axioms, in order to solve a particular problem, and at the end of the run examine some shared output to see which of the 99 solved the problem most efficiently.
On the next run, the 99 processes all start with the optimized version of whatever algorithm the previous run came up with.
A compound-interest effect kicks in. But we still have the problem that the runs all take the same amount of time.
Now let’s switch up the experiment a bit: Imagine that the run stops as soon as one of the 99 processes hits the solution.
The evolutionary process starts to speed up, feeding back upon itself.
This is only one way I can think of for a system to simulate itself faster than real time, as long as sufficient hardware exists to allow running multiple copies. (A minimal sketch of the idea follows.)
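As an aside, here is a minimal sketch of the race variant described above. Everything in it is a hypothetical stand-in (`mutate` for deriving an instance with different axioms, `attempt` for one instance working on the problem), not anything from the discussion:

```python
# Sketch: N variant instances race on a problem; the first finisher's
# algorithm seeds the next generation, so improvements compound.
import concurrent.futures
import random

def mutate(algorithm):
    # Hypothetical stand-in for deriving an instance with different "axioms".
    return algorithm + random.uniform(-1.0, 1.0)

def attempt(algorithm, problem):
    # Hypothetical stand-in for one instance working on the problem;
    # returns (quality, algorithm). Here "quality" is closeness to a target.
    return -abs(problem - algorithm), algorithm

def race(problem, seed_algorithm, n_instances=99, generations=10):
    best = seed_algorithm
    for _ in range(generations):
        with concurrent.futures.ThreadPoolExecutor(max_workers=n_instances) as pool:
            futures = [pool.submit(attempt, mutate(best), problem)
                       for _ in range(n_instances)]
            # Race variant: take whatever has finished first and cancel the
            # rest. (In the generational variant you'd instead wait for all
            # of them and pick the best.)
            done, not_done = concurrent.futures.wait(
                futures, return_when=concurrent.futures.FIRST_COMPLETED)
            for f in not_done:
                f.cancel()
            _, best = max(f.result() for f in done)  # winner seeds the next generation
    return best

print(race(problem=42.0, seed_algorithm=0.0))
```

Note that even in this sketch, each generation runs in parallel but no single instance runs faster than the hardware allows; the speedup is in the search, which is what the disagreement below turns on.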
I don’t think we’re discussing quite the same thing.
I was talking about an AI that attempts to simulate the entire Earth, including itself, faster than real time (see the quote). Note that this means the AI simulating the behavior of the rest of the world in response to the behavior of the simulated AI, which is somewhat messy even if you ignore the fact that in a faithful simulation the simulated AI would simulate the behavior of the whole world including itself, etc....
When I wrote the original comment I was in fact partly confusing emulating with simulating, as far as I can tell from what I wrote (I can’t quite recall, and I wouldn’t have trusted the memory if I did). Now, of course an AI can simulate the entire world, including itself, faster than real time. It doesn’t even need to be an AI: humans do it all the time.
I’m pretty sure that, in the general case, and barring some exotic physics, no system can emulate itself (or anything containing itself) faster than real time: a faithful emulation must include the emulator itself running its own emulation, so any speedup would have to compound at every level of that nesting, which would require unbounded resources.
Also, I’m pretty sure that if we carefully discussed what we mean by “emulation” and “simulation” we’d generally agree.
My confusion stemmed from the fact that on LessWrong, in the context of really powerful AIs, AI simulations can generally be trusted. (Either the AI is smart enough to pick only simplifications that really don’t affect the result, or it’s so powerful that it can in fact emulate the process, meaning it doesn’t simplify it at all and can still run it faster than real time. Or it’s Omega and it’s just right by definition. Or maybe it can find fixed points in functions as complex as the future history of the Earth.) I wasn’t careful enough about language.
But in the context of a seed AI, i.e. something much smarter/faster than us but not “godly” yet, and one we don’t trust to pick the best outcome of its possible actions upon the world, I can’t think of any reason we’d trust it to simulate such outcomes well enough for humans to pick among them, as the post I was answering suggested.
(I mean, it could work for very limited purposes. It might be reasonable to try to change the weather based on simulations not much better than what we can do now, over periods of a week or a few weeks, but that’s a context where a small mistake would not destroy life on Earth. Compare with climate-change research, and try to extrapolate to people deciding matters of theogenesis based on simulations from a seed AI...)