Well, the brain represents utility somehow, as part of its operation. It rather obviously compares expected utilities of future states.
No. You’ve entirely missed my point. The brain makes decisions. Saying it does so via representing things as utilities is a radical and unsupported assumption. It can be useful to model people as making decisions according to a utility function, as this can compress our description of their behavior, often with only small distortions. But it’s still just a model. Unboundedness in our model of a decision maker has nothing to do with unboundedness in the decision maker we are modeling. This is a basic map/territory confusion (or perhaps an advanced one: our map of their map of the territory is not the same as their map of the territory).
Not exactly an assumption. We can see—more or less—how the fundamental reward systems in the brain work. They use neurotransmitter concentrations and firing frequencies to represent desire and aversion—and pleasure and pain. These are the physical representation of utility, the brain’s equivalent of money. Neurotransmitter concentrations and neuron firing frequencies don’t shoot off to infinity. They saturate—resulting in pleasure and pain saturation points.
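The saturation claim can be sketched with a toy model. A minimal example, assuming a logistic response curve (the specific curve and parameters are illustrative assumptions, not a claim about actual neural dynamics):

```python
import math

def firing_rate(stimulus, max_rate=100.0, k=1.0):
    """Toy saturating response: a logistic curve capped at max_rate.
    However large the stimulus gets, the output never exceeds max_rate."""
    return max_rate / (1.0 + math.exp(-k * stimulus))

# A thousandfold increase in stimulus barely moves an already-saturated signal:
print(firing_rate(10))     # already very close to max_rate
print(firing_rate(10000))  # still capped at max_rate
```

The point of the sketch is only the boundedness: whatever utility-like quantity such a signal encodes, it cannot represent arbitrarily large values.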
I see little indication that the brain is in the business of assigning absolute utilities at all. Phenomena like scope insensitivity suggest that it only assigns relative utilities, comparing outcomes to a context-dependent default.
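The relative-valuation idea can be made concrete with a toy model. A minimal sketch, assuming a simple subtractive baseline (the function and its parameters are hypothetical illustrations, not a model of actual cognition):

```python
def relative_value(outcome, context_default):
    """Toy relative valuation: an outcome is scored only against a
    context-dependent default, never on an absolute scale."""
    return outcome - context_default

# The same outcome carries a different value in different contexts:
print(relative_value(50, 10))   # positive when the default is low
print(relative_value(50, 100))  # negative when the default is high
```

On this picture there is no fact about what the outcome is "worth" in isolation; only the comparison to the default is represented.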
They are feedback signals, certainly. Every system with any degree of intelligence must have those. But "feedback signal", "utility", and "equivalent of money" are not synonyms. To say a system’s feedback signals are equivalent to money is to make certain substantive claims about its design (some, but not most, AI programs have been designed with those properties). To say they are utility measurements is to make certain other substantive claims about its design. Neither of those claims is true of the human brain in general.