It seems hard to me to get information out of the AI without also giving it information. That is, presumably we would configure parts of its environment to correspond to problems in our own world, which necessarily gives it some information about our world.
I suppose another option would be that this is a proposal for running AGIs that just run, without us ever getting information from them. I don’t think that’s what you meant, but thought I’d check.
I agree that there is practically no purpose to using this kind of method if you are just going to give the AI information about our reality anyway.