Deciding what to think about; is it worthwhile to have a universal utility function?

The hunter-gatherer example in Is risk aversion really irrational? got me thinking about the real-world issues with ‘maximizing utility’ and any other simple-rule approach to decision making.

The elephant in the room is that a universal, effective utility of anything could be very expensive to calculate if you employ any foresight (consider thinking several moves ahead). And once you start estimating utility in different ways depending on the domain, the agent’s behaviour stops being consistent with plain utility maximization. At the same time, the solution space of real problems is often very large, meaning that you have an immense number of potential choices and need to perform a lot of utility estimations very quickly to pick the best solution. Think of Chess or Go. The computing time could be better spent elsewhere.
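To make the blow-up concrete, here is a minimal sketch that counts how many leaf positions a naive full-width lookahead search has to assign a utility to. The branching factors are the usual rough estimates for Chess and Go; everything else is purely illustrative:

```python
# Rough illustration of why expected-utility lookahead explodes combinatorially.
# Branching factors are common rough estimates (~35 for Chess, ~250 for Go).

def leaf_evaluations(branching_factor: int, depth: int) -> int:
    """Leaf positions a naive full-width search must score with a utility."""
    return branching_factor ** depth

for game, b in [("Chess", 35), ("Go", 250)]:
    for depth in (2, 4, 6):
        print(f"{game}: {depth} plies ahead -> {leaf_evaluations(b, depth):,} evaluations")
```

At six plies, Chess already needs about 1.8 billion evaluations and Go roughly 2.4×10^14. Nobody, hunter-gatherer or otherwise, evaluates utilities at that rate.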

The hunter-gatherer in the example can think about traps and other hunting tools and invent a new one, instead of trying to figure out probability theory or something of that kind.

Inventing a new trap is a case where the number of potential choices is extremely large.

I faintly recall a fiction story in which a smart boy becomes tribe leader by inventing a better bear trap, not by being utterly rational at correctly processing small differences in expected utility when it comes to bets.

He can also think more about the berries and look for evidence that other mammals are eating them: a plant that is not poisonous to other mammals is very unlikely to hurt a human, and a plant that other mammals avoid is very likely to be poisonous to humans as well. He can even feed the berries to some mammal he keeps alive (I’d imagine keeping animals alive was a fairly straightforward approach to meat preservation).
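That berry heuristic is implicitly a Bayesian update. A toy version, with every probability invented purely for illustration, might look like this:

```python
# Toy Bayesian update for the berry heuristic above. All numbers are made up;
# only the shape of the inference matters.

def posterior_poisonous(prior_poisonous: float,
                        p_eaten_given_poisonous: float,
                        p_eaten_given_safe: float) -> float:
    """P(poisonous | other mammals eat the berries), by Bayes' rule."""
    p_eaten = (p_eaten_given_poisonous * prior_poisonous
               + p_eaten_given_safe * (1 - prior_poisonous))
    return p_eaten_given_poisonous * prior_poisonous / p_eaten

# Hypothetical inputs: 30% of unknown berries are poisonous; mammals rarely
# eat the poisonous ones (5%) but often eat the safe ones (60%).
print(posterior_poisonous(0.30, 0.05, 0.60))  # ~0.034, down ~9x from the prior
```

The hunter-gatherer runs this inference informally, without the arithmetic, which is rather the point.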

At the same time, even if that hunter-gatherer knew enough math to try to formally calculate his odds, the probabilities are unknown. Indeed, there are probability distributions over the different degrees of getting sick from the berries or not (and over the different symptoms of sickness), et cetera. We today are just beginning to think about how to improve his odds using formal mathematics; we are still not sure how to accomplish that, and it is clear that it is going to be very computationally intensive.

As a single data point: I can easily come up with good solutions for that hunter-gatherer by looking into the big solution space he would have (he’s living in the real world), but it is much harder and much more tedious for me to calculate his odds even in a very simplified example where the probabilities of getting sick or winning a duel are exact and ‘sick or not sick’ is a binary outcome (a sketch of such a calculation follows). That’s with me having a computer at my fingertips, and knowledge of mathematics tens of thousands of years down the road from the hunter-gatherer!
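Here is roughly what that very simplified example looks like when actually carried out. Every probability and utility below is a made-up placeholder; the point is only that even the toy version takes deliberate bookkeeping:

```python
# Expected-utility comparison for a deliberately simplified choice: exact,
# invented probabilities, binary outcomes, utilities on an arbitrary scale.

P_SICK = 0.30        # hypothetical chance the berries make him sick
P_WIN_DUEL = 0.60    # hypothetical chance of winning a duel over known food

U = {"fed": 10.0, "sick": -50.0, "injured": -40.0, "hungry": -5.0}

eu_berries = (1 - P_SICK) * U["fed"] + P_SICK * U["sick"]          # -8.0
eu_duel = P_WIN_DUEL * U["fed"] + (1 - P_WIN_DUEL) * U["injured"]  # -10.0
eu_wait = U["hungry"]                                              # -5.0

for option, eu in [("eat berries", eu_berries), ("duel", eu_duel), ("wait", eu_wait)]:
    print(f"{option:12s} expected utility = {eu:6.1f}")
```

Even here, the ranking flips with small changes to the invented numbers, and in reality none of those numbers are knowable with any precision.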

Bottom line: it would be very suboptimal for the intelligent hunter-gatherer to try to use his intelligence in this particular expected-utility-calculating way to slightly optimize his behaviour (keep in mind that he has no way of estimating the probabilities), when a smaller amount of good thought would allow him to invent something extremely useful and gain status.

As a personal success story: I have developed and successfully published a computer game, and made a good income from it. The effort that can be spent on decision making (on choosing whether to implement A or B) is always tightly capped by the other ways of applying effort that would pay off more: implementing both A and B, or searching the solution space further in the hope of coming up with C. It is very rare that putting effort into a very careful choice between very few options is the best use of intelligence. That situation is common in thought experiments, but it is rare in reality.