It seems hard to me to get information out of the AI without also giving it information. That is, presumably we will configure parts of its environment to correspond to problems in our own world, which necessarily gives it some information about our world.
I suppose another option would be that this is a proposal for running AGIs that just run, without us ever getting information from them. I don’t think that’s what you meant, but thought I’d check.