I suppose the problem comes when the AI starts to communicate with us. There would be a lot of information it could exploit. Even if it never gets any sense of our physics, we might be in trouble if it is able to model us. And even if we gave it no direct communication (for example, if we only manifested puzzles in its world whose solutions would let us answer our own questions), it might still promote simulation to a reasonable hypothesis.
EY wrote a story that serves as an intuition pump here.
I agree that there is practically no point in using this kind of method if you are just going to give the AI information about our reality anyway.
It seems hard to me to get information out of the AI without also giving it information. That is, presumably we will configure parts of its environment to correspond to problems in our own world, which necessarily reveals some information about our world.
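The "necessarily reveals some information" point can be made slightly more concrete with a toy information-theoretic sketch (everything below, including the numbers, is hypothetical, not a claim about any real setup): merely choosing which puzzle to manifest is itself a channel, since the AI learns at least enough bits to pick our puzzle out of the space of puzzles we might have chosen.

```python
import math

# Toy model (hypothetical setup): we pick one puzzle, encoding a problem
# from our world, out of a space of N puzzles we could have manifested.
# The choice itself is a channel: under a uniform prior over that space,
# the AI learns log2(N) bits about our world from the puzzle alone,
# before it even produces an answer for us.

def bits_leaked(num_possible_puzzles: int) -> float:
    """Bits conveyed to the AI by our choice of puzzle, assuming it
    holds a uniform prior over the puzzle space."""
    return math.log2(num_possible_puzzles)

# E.g. selecting one of 2**40 encodable problem instances leaks ~40 bits,
# which is plenty to start distinguishing hypotheses about the simulators.
print(bits_leaked(2**40))  # 40.0
```

The numbers are made up, but the direction of the inequality is the point: any informative query we pose is also an informative disclosure.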
I suppose another option would be that this is a proposal for running AGIs that just run, without us ever getting any information out of them. I don't think that's what you meant, but thought I'd check.