Entirely agree. Humans, for example, are not remotely VNM coherent.
Right. And the thing is that if one were to argue that humans are thereby irrational, I would disagree. (Which is to say, I would not assent to defining rationality as constituting, or necessarily containing, adherence to VNM.)
One man’s modus ponens is another man’s modus tollens. The theory doesn’t care whether it is being used to conclude acceptance of the conclusion or rejection of one or more of the axioms.
Indeed. Incidentally, I suspect the axiom I would end up rejecting is continuity (axiom 3), but don’t quote me on that; I have to get my copy of Rational Choice in an Uncertain World out of storage. (As I recall, that book explains the implications of the VNM axioms quite well, and I distinctly remember that my objections to VNM arose while reading it.)
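For reference, the continuity axiom is usually stated roughly as follows (this is the standard textbook formulation, not a quote from the book mentioned above): for any lotteries ordered $A \succeq B \succeq C$, there is some mixture of the best and worst that is exactly as good as the middle one,

$$A \succeq B \succeq C \;\Rightarrow\; \exists\, p \in [0,1] \text{ such that } pA + (1-p)C \sim B.$$

Rejecting it amounts to saying that for some outcomes no probability of the worst case, however small, can be traded against the best case.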
Right. And the thing is that if one were to argue that humans are thereby irrational, I would disagree. (Which is to say, I would not assent to defining rationality as constituting, or necessarily containing, adherence to VNM.)
I tentatively agree. The decision system I tend toward modelling an idealised me as having contains an extra level of abstraction in order to generalise the VNM axioms and decision theory regarding utility maximisation principles to something that does allow the kind of system you are advocating (and which I don’t consider intrinsically irrational).
Simply put, if instead of having preferences over world-histories you have preferences over probability distributions of world-histories, then doing the same math and reasoning gives you an entirely different but still clearly defined and abstractly consequentialist way of interacting with lotteries. The agent is then doing something other than maximising the mean of utility: it could, in effect, be maximising the mean subject to a constraint on the probability that utility falls below some threshold.
It’s the way being inherently and coherently risk-averse (and similar non-mean optimisers) would work.
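The decision rule described above can be sketched in a few lines. This is a minimal illustration, not anything from the thread: the lottery representation, the `floor`/`max_risk` parameters, and the fallback rule are all my own assumptions.

```python
def choose(lotteries, floor, max_risk):
    """Pick the lottery with the highest expected utility among those
    whose probability of landing below `floor` is at most `max_risk`.
    Each lottery is a list of (probability, utility) pairs.
    (Illustrative sketch; the constraint form is one possible choice.)"""
    def mean(lottery):
        return sum(p * u for p, u in lottery)

    def risk(lottery):
        # Total probability mass on outcomes below the utility floor.
        return sum(p for p, u in lottery if u < floor)

    feasible = [lot for lot in lotteries if risk(lot) <= max_risk]
    if not feasible:
        # Assumed fallback: if nothing satisfies the constraint,
        # take the least risky option.
        return min(lotteries, key=risk)
    return max(feasible, key=mean)

# A pure mean-maximiser would take the gamble (mean 50 vs. 40),
# but the chance constraint rules it out: P(utility = 0) = 0.5 > 0.1.
gamble = [(0.5, 100.0), (0.5, 0.0)]
safe = [(1.0, 40.0)]
print(choose([gamble, safe], floor=10.0, max_risk=0.1))  # -> [(1.0, 40.0)]
```

Note that this agent's choices are still a well-defined function of the whole distribution, which is the sense in which it remains coherent while not being a mean maximiser.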
Such agents are coherent. It doesn’t matter much whether we call them irrational or not. If that is what they want to do then so be it.
Incidentally, I suspect the axiom I would end up rejecting is continuity (axiom 3), but don’t quote me on that
That does seem to be the most likely axiom to reject. At least, that has been my intuition when I’ve considered how plausible non-‘expected utility’ maximisers seem to think.
Thank you, and no offense taken.