“The AI won’t be able to simulate any future Earth where itself or any comparable-intelligence AI exists, because to do so it would need to simulate itself and/or other similarly-smart entities faster than real-time.”
Only if the AI is using up a sizeable fraction of resources itself.
Let’s do a thought experiment to see what I mean:
The AI runs on some putative hardware with some clock speed, gigahertz or petahertz or whatever (call it X), and some amount of memory, gigabytes or petabytes (call it Y).
Say the AI itself uses only 1% of Y. It can then run some 99 further instances of itself in parallel, each starting from different assumptions, to attack a particular problem, and at the end of the run examine a shared output to see which of the 99 solved the problem most efficiently.
On the next run, all 99 processes start from the optimized version of whichever algorithm won.
A compounding-interest effect kicks in. But we still have the problem that all the runs take the same amount of time.
Now let’s tweak the experiment a bit: imagine that the run stops as soon as one of the 99 processes hits the solution.
The evolutionary process starts to speed up, feeding back on itself.
This is just one way I can think of for a system to simulate itself faster than real time, as long as sufficient hardware exists to run multiple copies.
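The iterated search described above can be sketched in code. This is a toy illustration only: the hill-climbing task, the parameter names, and all the numbers are invented for the example, standing in for “instances of the AI trying a problem under different assumptions.”

```python
import random

def run_variant(start, step, target):
    """One 'instance': hill-climb from start toward target; return steps taken."""
    x, steps = start, 0
    while abs(x - target) > step:
        x += step if x < target else -step
        steps += 1
    return steps

def portfolio_search(target=100.0, n_variants=99, rounds=5, seed=0):
    """Each round, try n_variants perturbed copies of the current best
    algorithm parameter; the next round starts from whichever copy solved
    the problem fastest, so improvements compound across rounds."""
    rng = random.Random(seed)
    best_step = 0.1  # deliberately poor initial parameter
    history = []
    for _ in range(rounds):
        candidates = [best_step * rng.uniform(0.5, 4.0) for _ in range(n_variants)]
        # keep the variant that solved the problem in the fewest steps
        best_step = min(candidates, key=lambda s: run_variant(0.0, s, target))
        history.append(run_variant(0.0, best_step, target))
    return history

print(portfolio_search())  # cost per round shrinks round over round
```

The “run stops as soon as one process hits the solution” variant corresponds to `min(...)` picking the first-to-finish candidate; here, sequentially, it is the candidate with the fewest steps, which has the same selection effect.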
I don’t think we’re discussing quite the same thing.
I was talking about an AI that attempts to simulate the entire Earth, including itself, faster than real time (see the quote). Note that this means the AI must simulate the behavior of the rest of the world in response to the behavior of the simulated AI, which is somewhat messy even if you ignore the fact that in a faithful simulation the simulated AI would itself simulate the behavior of the whole world, including itself, and so on...
When I wrote the original comment I was in fact partly confusing emulating with simulating, as far as I can tell from what I wrote (I can’t quite recall, and I wouldn’t have trusted the memory if I did). Now, of course an AI can simulate the entire world, including itself, faster than real time. It doesn’t even need to be an AI: humans do it all the time.
I’m pretty sure that, in the general case, and barring some exotic physics, no system can emulate itself (or anything containing itself) faster than real time.
Also, I’m pretty sure that if we discussed carefully what we mean by “emulation” and “simulation” we’d generally agree.
My confusion stemmed from the fact that on LessWrong, in the context of really powerful AIs, AI simulations can generally be trusted. (Either the AI is smart enough to pick only simplifications that really don’t affect the result, or it’s so powerful that it can in fact emulate the process, meaning it doesn’t simplify at all and can still run it faster than real time. Or it’s Omega and it’s just right by definition. Or maybe it can find fixed points in functions as complex as the future history of the Earth.) I wasn’t careful enough about language.
But in the context of a seed AI, i.e. something much smarter and faster than us but not yet “godly”, and one we don’t trust to pick the best outcome of its possible actions upon the world, I can’t think of any reason we’d trust it to simulate such outcomes well enough for humans to pick among them, as the post I was replying to suggested.
(I mean, it could work for very limited purposes. It might be reasonable to try to change the weather based on simulations not much better than what we can do now, over periods of a week or a few weeks, but that’s a context where a small mistake would not destroy life on Earth. But look at climate-change research and try to extrapolate to people deciding on matters of theogenesis based on simulations from a seed AI...)