Sorry for the stupid question, but what’s the difference between “boundedly-rational agent pursuing a reward function” and “any sort of agent pursuing a reward function”?
A boundedly-rational agent is assumed to be mostly rational, falling short of full rationality only because it lacks the resources to figure things out in enough detail.
Humans are occasionally rational, often biased, often inconsistent, sometimes consciously act against their best interests, often follow heuristics without thinking, sometimes do think things through. This doesn’t seem to correspond to what is normally understood as “boundedly-rational”.
Gotcha, thanks. I have corrected my comment two above by striking out the words “boundedly-rational”, but I think the point of that comment still stands.