An AI living in a simulated universe can be just as intelligent as one living in the real world.
It can be a very good theorem prover, sure. But without access to information about the world, it can't answer questions like "what is the CEV of humanity like", "what's the best way for me to make a lot of money", or "translate this book from English to Finnish so that a native speaker will consider it a good translation". It's narrow AI, even if it could become broad AI were it given more information.