Robin: But can we also agree that the state of a “grown” AI program will depend on the environment in which it was “raised”?
It will depend on the environment just as it depends on its initial conditions. It will depend on the environment if it was designed to depend on the environment. The reason, presumably, why the AI is not inert in the face of the environment, like a heap of sand, is that someone went to the trouble of turning that silicon into an AI. Each bit of internal state change will happen because of a program that the programmer wrote, or that the AI programmed by the programmer wrote, and the chain of causality will stretch back, lawfully.
With all those provisos, yes, the grown AI will depend on the environment. Though to avoid the Detached Lever fallacy, it might be helpful to say: “The grown AI will depend on how you programmed the child AI to depend on the environment.”
Doug: You have to be awake in order to recognize an apple.
Dream on.