I’m missing something. Suppose that my preferences are strictly transitive, but that they violate the other axioms, and that there are lots of trades which I view as incomparable (neither A ≺ B, B ≺ A, nor A ∼ B holds), and that I won’t make an incomparable trade. Why would this leave me vulnerable to being money pumped?
It wouldn’t (at least not to a strong money pump).
But decreeing that things are incomparable is often rather dumb; if your house was flooding, would you grab certain things, or would you just refuse to choose anything, because the choices are “incomparable”?
Thanks, I was wondering if all of the axioms were crucial, or mostly the transitivity one.
Perhaps “incomparable” is the wrong approximation. A better way to view it may be that I treat transactions as having frictional costs (if nothing else, the cost of working out to sufficient precision what my actual preferences are). There are a lot of (A, B) pairs such that, if I had A and was offered B in exchange, I would turn down the offer, and the same if I had B and was offered A. Very roughly, assume that I treat each exchange transaction as having some probability of going wrong in some way (e.g. failing in such a way that I wind up with neither object), so the new object’s utility has to be, say, 10% higher than the old object’s utility to offset the transaction risk.
Would this model leave me vulnerable to being money pumped?
Your model is safe from being money pumped by another agent. The disadvantage is that you’ll pass up some certain gains, which is equivalent (modulo loss aversion) to taking on some certain losses. But if you really do think there is a non-negligible probability that any given exchange will go bad, then you don’t have to violate any of the preference axioms; all your caution lives in the probability estimate.
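The friction model can be sketched as a quick check (a hypothetical simulation with made-up utilities; the 10% failure belief and the rule "failure leaves you with neither object" are taken from the comment above):

```python
# Sketch of the friction model: accept a trade only if the offered object's
# expected utility, discounted by an assumed 10% chance the exchange fails
# and leaves you with neither object (utility 0), beats what you hold now.

FAIL_PROB = 0.10  # assumed chance any exchange goes wrong

def accepts_trade(u_current: float, u_offered: float) -> bool:
    """Accept only if the expected value of trading beats keeping what we have."""
    expected_if_trade = (1 - FAIL_PROB) * u_offered  # failure pays utility 0
    return expected_if_trade > u_current

# A classic money-pump cycle needs the agent to take small steps around a
# loop. With positive utilities, small improvements are refused, so the
# cycle never gets started:
print(accepts_trade(10.0, 10.5))  # 0.9 * 10.5 = 9.45 < 10, so: False
print(accepts_trade(10.0, 12.0))  # 0.9 * 12.0 = 10.8 > 10, so: True
```

The second line also shows the cost of the model: gains of less than about 11% are passed up, which is exactly the "certain gains forgone" trade-off described above.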
Would this model leave me vulnerable to being money pumped?
In a certain sense, it does (as long as your 10% beliefs are inaccurate). If you have a lottery A that gives you negative value, I can trade it for a lottery B that is slightly more negative (your assumed 10% chance of getting neither will make you accept this deal). And then iterate.
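This pump can be sketched concretely (made-up utilities, assuming the 10% belief is simply wrong and the trades never actually fail):

```python
# Pump on negative-value lotteries: the agent believes each trade has a 10%
# chance of leaving it with neither lottery (utility 0). When the current
# lottery is worth a negative amount, that 10% "escape hatch" makes slightly
# worse lotteries look acceptable, and an adversary can iterate.

FAIL_PROB = 0.10  # the agent's (inaccurate) belief that any exchange fails

def accepts_trade(u_current: float, u_offered: float) -> bool:
    # Failure is assumed to pay utility 0, which looks attractive
    # whenever u_current is negative.
    return (1 - FAIL_PROB) * u_offered > u_current

u = -10.0  # agent starts holding a lottery it values at -10
for _ in range(10):
    offer = u / (1 - FAIL_PROB) + 0.1  # just above the acceptance threshold
    assert accepts_trade(u, offer)     # the agent says yes every time...
    u = offer                          # ...but the trade never actually fails
print(u)  # after ten rounds the agent holds a far worse lottery (about -27)
```

Each accepted offer is worse than the last, so the agent's position deteriorates without bound even though no single step looks bad to it.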
Or, what about me selling you (in a single transaction) an insurance policy, good for a hundred trades, that guarantees you against the 10% loss chance?
Generally, inaccurate beliefs leave you open to some sort of arbitrage, even if it’s not technically a money pump as described above.