Probably the single biggest black box in classical decision theory is the Cartesian assumption of an ‘agent’: an atomic entity sealed behind a barrier, communicating with and acting on an environment that has no access to, or predictions about, the agent.
I’ve always understood this to be the equivalent of spherical cows.
Of course such an agent doesn’t, and can’t, actually exist in the real world.
Considering that in real humans even something as simple as severing the connection between the left and right hemispheres of the brain causes identity to break down, there are likely no human ‘agents’ in the strict sense.