Is it reasonable to take this as evidence that we shouldn’t use expected utility computations, or not only expected utility computations, to guide our decisions?
If I understand the context, the reason we believed an entity, whether a human or an AI, ought to use expected utility as a practical decision-making strategy is that it would yield good results (a simple, general architecture for decision making). If there are fully general attacks (muggings) on all entities that use expected utility as a practical decision-making strategy, then perhaps we should revise the original hypothesis.
Utility as a theoretical construct is charming, but it does have to pay its way, just like anything else.
P.S. I think the reasoning from “bounded rationality exists” to “non-Bayesian mind changes exist” is good stuff. Perhaps we could call this “on seeing this, I become willing to revise my model” phenomenon something like “surprise”, and distinguish it from merely new information.