I don’t think the math models motivation anyway. It’s abstracted away, leaving each agent maximising a utility function. Nor is utility in the model (which is well defined) isomorphic to utility for a person making decisions in the real world (which is not). But our minds seem to learn things better when they are couched in terms of a story about people.
Hmm. Possibly one danger in this is assuming that your own internal story about what the equations mean is what they actually mean, so that you end up overconfident that the outcome of a decision in the real world will match the story in your head.