Your answer strikes me as unsatisfactory: if we apply it to humans, we lose interest in electricity, atoms, quarks etc. An agent can opt to dig deeper into reality to find the base-level stuff, or it can “dissolve the question” and walk away satisfied. Why would you want to do the latter?
The agent has preferences over these black boxes (or over the strategies that instantiate them), and digging deeper may well be a good idea. To get rid of the (unobservable) structure in the environment, the preferences over elements of the environment have to be translated into preferences over whole situations. The structure of the environment becomes the structure of the preferences over the black boxes.