# Weird Things About Money

# 1. Money wants to be linear, but wants even more to be logarithmic.

People sometimes talk as if risk-aversion (or risk-loving) is irrational *in itself*. It is true that VNM-rationality implies you just take expected values, and hence don’t penalize variance or any such thing. However, *you are allowed to have a concave utility function*, such as utility which is logarithmic in money. This creates risk-averse behavior. (You could also have a convex utility function, creating risk-seeking behavior.)

If you have risk-averse behavior, other agents can exploit you by selling you insurance. Hence, money flows from risk-averse agents to less risk-averse agents. Similarly, risk-seeking agents can be exploited by charging them for participating in gambles. From this, one might think a market will evolve away from risk aversion (or risk seeking), as risk-neutral agents accumulate money.

**Counterpoint:**

People clearly act more like money has diminishing utility, rather than linear utility. So revealed preferences would appear to favor risk-aversion. Furthermore, it’s clear that the amount of pleasure one person can get per dollar diminishes as we give that person more and more money.

On the other hand, that being the case, we can easily purchase a lot of pleasure by giving money to others with less. So from a more altruistic perspective, utility does not diminish nearly so rapidly.

Rationality arguments of the Dutch-book and money-pump variety require an assumption that “money” exists. This “money” acts very much like utility, suggesting that utility is supposed to be linear in money. Dutch-book arguments assume from the start that agents are willing to make bets if the expected value of those bets is nonnegative. Money-pump arguments, on the other hand, can *establish* this from other assumptions.

Stuart Armstrong summarizes the money-pump arguments in favor of applying the VNM axioms directly to real money. This would imply risk-neutrality and utility linear in money.

On the other hand, the Kelly criterion implies betting as if utility were *logarithmic* in money. The Kelly criterion is not derived via Bayesian rationality, but rather from an asymptotic argument about average-case performance (which is kinda frequentist). So initially it seems there is no contradiction.
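The rule itself is compact, and worth stating concretely. A minimal sketch (the probabilities and odds below are hypothetical, chosen only for illustration): for a bet won with probability p at net odds b, the Kelly stake p − (1−p)/b is exactly the fraction of the bankroll that maximizes the expected logarithm of wealth.

```python
import math

def kelly_fraction(p, b):
    # Kelly stake for a bet won with probability p at net odds b
    # (a win returns b units of profit per unit staked).
    return p - (1 - p) / b

def expected_log_growth(f, p, b):
    # Expected log growth per round when staking fraction f of the bankroll.
    return p * math.log(1 + b * f) + (1 - p) * math.log(1 - f)

p, b = 0.6, 1.0                      # hypothetical: even-odds bet won 60% of the time
f_star = kelly_fraction(p, b)        # 0.2

# A grid search over stakes confirms f_star maximizes expected log growth.
grid = [i / 1000 for i in range(999)]
best = max(grid, key=lambda f: expected_log_growth(f, p, b))
```

Maximizing expected money instead would push the stake all the way to 100%, since expected profit is linear in the stake; the logarithm is what pulls the optimum into the interior.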

However, it is a theorem that a diverse market would come to be dominated by Kelly bettors, as Kelly betting maximizes long-term growth rate. This means the previous counterpoint was wrong: expected-money bettors profit *in expectation* from selling insurance to Kelly bettors, but the Kelly bettors eventually dominate the market.

Expected-money bettors continue to have the most money *in expectation*, but this high expectation comes from increasingly improbable strings of wins. So you might see an expected-money bettor initially get a lot of money from a string of luck, but eventually burn out.

(For example, suppose an investment opportunity triples money 50% of the time, and loses it all the other 50% of the time. An expected-money bettor will go all-in, while a Kelly bettor will invest some money but hold some aside. The expected-money betting strategy has the highest expected value, but will almost surely be bust within a few rounds.)
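The numbers in this example can be checked directly. A sketch, with the all-in bettor standing in for the expected-money maximizer:

```python
import math

# The bet from the example: the stake is tripled with probability 0.5
# and lost with probability 0.5, i.e. net odds b = 2.
p, b = 0.5, 2.0
f_star = p - (1 - p) / b             # Kelly fraction: 0.25

# All-in (expected-money) bettor: expected wealth grows by 1.5x per round,
# but the chance of still being solvent halves every round.
ev_growth = p * (1 + b)              # 1.5
survival_10_rounds = p ** 10         # under 0.1% after ten rounds

# Kelly bettor: wealth multiplies by 1.5 on a win, 0.75 on a loss,
# giving a positive expected log growth rate (almost-sure long-run growth).
log_growth = p * math.log(1 + b * f_star) + (1 - p) * math.log(1 - f_star)
```

So the all-in strategy has the higher expectation every round, yet is almost surely bankrupt within a handful of rounds, while the Kelly bettor compounds.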

The Kelly criterion still implies near-linearity for small quantities of money.

Moreover, the more money you have, the closer to linearity—so the larger the quantity of money you’ll treat the way an expected-money maximizer would.

This vindicates, to a limited extent, the idea that a market will approach linearity—Kelly bettors will act more and more like expected-money maximizers as they accumulate money.

As argued before, we get agents with a large bankroll (and so, with behavior closer to linear) selling insurance to Kelly agents with smaller bankroll (and hence more risk-averse), and profiting from doing so.

But everyone is still Kelly in this picture, making logarithmic utility the correct view.

So the money-pump arguments seem to *almost* pin us down to maximum-expectation reasoning about money, but *actually* leave enough wiggle room for logarithmic value. If money-pump arguments for expectation-maximization don’t apply in practice *to money*, why should we expect them to apply elsewhere?

Kelly betting is fully compatible with expected utility maximization, since we can maximize the expectation of the logarithm of money. But if the money-pump arguments are our reason for buying into the expectation-maximization picture in the first place, then their failure to apply to money should make us ask: why would they apply to utility any better?

**Candidate answer:** utility is *defined as* the quantity those arguments work for. Kelly-betting preferences on money don’t actually violate any of the VNM axioms. Because the VNM axioms hold, we can re-scale money to get utility. That’s what the VNM axioms give us.

The VNM axioms only rule out *extreme* risk-aversion or risk-seeking, where a gamble between A and B is valued outside of the range of values from A to B. Risk aversion is just fine if we can understand it as a re-scaling. So any kind of re-scaled expectation maximization, such as maximization of the log, should be seen as a *success* of VNM-like reasoning, not a failure.

Furthermore, thanks to continuity, any such re-scaling will closely resemble linear expectation maximization when small quantities are involved. Any concave (risk-averse) re-scaling will resemble linear expectation more closely as the background numbers (to which we compare gains and losses) become larger.
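This near-linearity in the small is easy to check numerically. A sketch, using the certainty equivalent of a hypothetical 50/50 gain-$10.10/lose-$10 gamble under logarithmic utility:

```python
import math

def certainty_equivalent(w, gain=10.10, loss=10.0):
    # Certainty equivalent of a 50/50 gain/loss gamble for an agent
    # with utility log(wealth) and current wealth w.
    eu = 0.5 * math.log(w + gain) + 0.5 * math.log(w - loss)
    return math.exp(eu) - w

# The gamble's expected value is +$0.05 at every wealth level, but a
# log-utility agent rejects it when poor and values it at nearly the
# full +$0.05 when rich:
for w in (100, 1_000, 10_000, 1_000_000):
    print(w, certainty_equivalent(w))
```

The certainty equivalent climbs toward the expected value as the bankroll grows, which is the re-scaled utility looking linear for stakes that are small relative to wealth.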

It still seems important to note again, however, that the usual justification for Kelly betting is “not very Bayesian” (very different from subjective preference theories such as VNM, and heavily reliant on long-run frequency arguments).

# 2. Money wants to go negative, but can’t.

Money can’t go negative. Well, it can, just a little: we do have a concept of debt. But if the economy were a computer program, debt would seem like a big hack. There’s no absolute guarantee that debt can be collected. There are a lot of incentives in place to help ensure debt can be collected, but ultimately, bankruptcy or death or disappearance can make a debt uncollectible. This means money is in this weird place where we sort of act like it can go negative for a lot of purposes, but it also sort of can’t.

This is especially weird if we think of money *as* debt, as is the case for gold-standard currencies and similar: money is an IOU issued by the government, which can be repaid upon request.

Any kind of money is ultimately based on some kind of *trust*. This can include trust in financial institutions, trust that gold will still be desirable later, trust in cryptographic algorithms, and so on. But thinking about debt emphasizes that a lot of this trust is *trust in people*.

# 3. Money can have a scarcity problem.

This is one of the weirdest things about money. You might expect that if there were “too little money” the value of money would simply re-adjust, so long as you can subdivide further and the vast majority of people have a nonzero amount. But this is not the case. We can be in a situation where “no one has enough money”—the Great Depression was a time when there were too few jobs and too much work left undone. Not enough money to buy the essentials. Too many essentials left unsold. No savings to turn into loans. No loans to create new businesses. And all this, *not* because of any change in the underlying physical resources. Seemingly, economics itself broke down: the supply was there, the demand was there, but the supply and demand curves could not meet.

(I am not really trained in economics, nor a historian, so my summary of the Great Depression could be mistaken or misleading.)

My loose understanding of monetary policy suggests that scarcity is a concern even in normal times.

The scarcity problem would not exist if money could be reliably manufactured through debt.

I’m not really sure of this statement.

When I visualize a scarcity of money, it’s like there’s both work needing done and people needing work, but there’s not enough money to pay them. Easy manufacturing of money through debt should allow people to pay other people to do work.

OTOH, if it’s *too* easy to go negative, then the concept of money doesn’t make sense any more: spending money doesn’t decrease your buying power any more if you can just keep going into debt. So everyone should just spend like crazy.

Note that this isn’t a problem in theoretical settings where money is equated with utility (i.e., when we assume utility is linear in money), because *money is being inherently valued* in those settings, rather than valued instrumentally for what it can get. This assumption is a convenient approximation, but we can see here that it radically falls apart for questions of negative bankroll—it seems easy to handle (infinitely) negative money if we act like it has intrinsic (terminal) value, but it all falls apart if we see its value as extrinsic (instrumental).

So it seems like we want to facilitate negative bank accounts “as much as possible, but not too much”?

Note that Dutch-book and money-pump arguments tend to implicitly assume an infinite bankroll, i.e., money which can go negative as much as it wants. Otherwise you don’t know whether the agent has enough to participate in the proposed transaction.

Kelly betting, on the other hand, assumes a finite bankroll—and indeed, might have to be abandoned or adjusted to handle negative money.

I believe many mechanism-design ideas also rely on an infinite bankroll.


It’s true that diminishing marginal utility can produce some degree of risk-aversion. But there’s good reason to think that no plausible utility function can produce the risk-aversion we actually see—there are theorems along the lines of “if your utility function makes you prefer X to Y then you must also prefer A to B” where pretty much everyone prefers X to Y and pretty much no one prefers A to B.

[EDITED to add:] Ah, found the specific paper I had in mind: “Diminishing Marginal Utility of Wealth Cannot Explain Risk Aversion” by Matthew Rabin. An example from the paper: if you always turn down a 50/50 bet where you could either lose $10 or gain $10.10, and if the only reason is the shape of your utility function, then you should also always turn down a 50/50 bet where you could either lose $1000 or gain all the money in the world. (However much money there is in the world.)

I didn’t believe that claim, so I looked at the paper. The key piece is that you must *always* turn down the 50/50 lose $10/gain $10.10 bet, no matter how much wealth you have—i.e., even if you had millions or billions of dollars, you’d still turn down the small bet. Considering that assumption, I think the real-world applicability is somewhat more limited than the paper’s abstract seems to indicate.

That said, there are multiple independent lines of evidence in various contexts suggesting that humans’ degree of risk-aversion is too strong to be accounted for by diminishing marginals alone, so I do still think that’s true.

The paper has some more sophisticated examples that make less stringent assumptions. Here are a couple. “Suppose, for instance, we know a risk-averse person turns down 50-50 lose $100/gain $105 bets for any lifetime wealth level less than (say) $350,000, but know nothing about her utility function for wealth levels above $350,000, except that it is not convex. Then we know that from an initial wealth level of $340,000 the person will turn down a 50-50 bet of losing $4,000 and gaining $635,670. If we only know that a person turns down lose $100/gain $125 bets when her lifetime wealth is below $100,000, we also know she will turn down a 50-50 lose $600/gain $36 billion bet beginning from a lifetime wealth of $90,000.”
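A cruder version of the paper’s calibration logic can be checked in a few lines (this is my weaker reconstruction, not Rabin’s actual bound): rejecting the 50/50 lose-$10/gain-$10.10 bet at every wealth level forces, for any concave utility, marginal utility to shrink by a factor of 10/10.10 over each $20.10 of wealth, which caps the total utility available from any gain, however large.

```python
# Weaker, hand-derived version of Rabin's calibration argument.
# Rejecting a 50/50 lose-$10/gain-$10.10 bet at wealth w means
#   u(w + 10.10) - u(w) <= u(w) - u(w - 10),
# and concavity then gives 10.10 * MU(w + 10.10) <= 10 * MU(w - 10):
# marginal utility (MU) falls by factor r over every stride of $20.10.
loss, gain = 10.0, 10.10
r = loss / gain                      # ~0.9901 per stride
stride = loss + gain                 # 20.10

# Total utility above current wealth, measured in units of current
# marginal utility, is bounded by a geometric series:
gain_cap = stride / (1 - r)          # = 20.10 * 101, about $2030

# Losing $X costs at least X such units (marginal utility below current
# wealth is at least the current level), so a 50/50 bet risking more than
# gain_cap is rejected no matter how large the prize is.
```

Even this loose bound says such an agent turns down a 50/50 lose-$2,031/gain-anything bet; Rabin’s theorem sharpens the numbers considerably.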

Bets have fixed costs to them in addition to the change in utility from the money gained or lost. The smaller the bet, the more those fixed costs dominate. And at some point, even the hassle from just trying to figure out that the bet is a good deal dwarfs the gain in utility from the bet. You may be better off arbitrarily refusing to take all bets below a certain threshold because you gain from not having overhead. Even if you lose out on some good bets by having such a policy, you also spend less overhead on bad bets, which makes up for that loss.

The fixed costs also change arbitrarily; if I have to go to the ATM to get more money because I lost a $10.00 bet, the disutility from that is probably going to dwarf any utility I get from a $0.10 profit, but whether the ATM trip is necessary is essentially random.

Of course you could model those fixed costs as a reduction in utility, in which case the utility function is indeed no longer logarithmic, but you need to be very careful about what conclusions you draw from that. For instance, you can’t exploit such fixed costs to money pump someone.

Yup, I agree with all that, and I think it is one of the reasons for (at least some instances of) loss aversion. I wonder whether there have been attempts to probe loss aversion in ways that get around this issue, maybe by asking subjects to compare scenarios that somehow both have the same overheads.

Possibly relevant in the context of Kelly betting/maximizing log wealth.

Is the idea supposed to be that humans always turn down such a bet?

The idea is supposed to be that turning down the first sort of bet looks like ordinary risk aversion, the phenomenon that some people think concave utility functions explain; but that if the explanation *is* the shape of the utility function, then those same people who turn down the first sort of bet—which I think a lot of people do—should also turn down the second sort of bet, even though it seems clear that a lot of those people would not turn down a bet that gave them a 50% chance of losing $1k and a 50% chance of winning Jeff Bezos’s entire fortune.

(I personally would probably turn down a 50-50 bet between gaining $10.10 and losing $10.00. My consciously-accessible reasons aren’t about losing $10 feeling like a bigger deal than gaining $10.10; they’re about the “overhead” of making the bet, the possibility that my counterparty doesn’t pay up, and the like. And I would absolutely take a 50-50 bet between losing $1k and gaining, say, $1M, again assuming that it had been firmly enough established that no cheating was going on.)

But would you continue turning down such bets no matter how big your bankroll is? A serious investor can have a lot of automated systems in place to reduce the overhead of transactions. For example, running a casino can be seen as an automated system for accepting bets with a small edge.

(Similarly, you might not think of a millionaire as having time to sell you a ballpoint pen with a tiny profit margin. But a ballpoint pen *company* is a system for doing so, and a millionaire might own one.)

If you were playing some kind of stock/betting market, you would be wise to write a script to accept such bets up to the Kelly limit, if you could do so.

Also see my reply to koreindian.

My bankroll is already enough bigger than $10.10 that shortage of money isn’t the reason why I would not take that bet.

I might well take a bet composed of 100 separate $10/$10.10 bets (I’d need to think a bit about the actual distribution of wins and losses before deciding) even though I wouldn’t take one of them in isolation, but *that’s a different bet*.

Yes, many humans exhibit the former betting behavior but not the latter. Rabin argues that an EU maximizer doing the former will do the latter. Hence, we need to think of humans as something other than EU maximizers.

OK.

But humans who work the stock market would write code to vacuum up 1000-to-1010 investments as fast as possible, to take advantage of them before others, so long as they were small enough compared to the bankroll to be approved of by fractional Kelly betting.

Unless the point is that they’re *so small* that it’s not worth the time spent writing the code. But then the explanation seems to be perfectly reasonable attention allocation. We could model the attention allocation directly, or we could model them as utility maximizers up to epsilon—like, they don’t reliably pick up expected utility when it’s under $20 or so.

I’m not contesting the overall conclusion that humans aren’t EV maximizers, but this doesn’t seem like a particularly good argument.

I think this is mixing up two things. First, a diminishing marginal utility in *consumption* measured in money. This can lead to risk-averse behaviour, but it could be any sublinear function, not just logarithmic, and I have seen no reason to think it’s logarithmic in actually existing humans.

I wouldn’t call it “exploit”. It’s not a money pump that can be repeated arbitrarily often; it’s simply a price you pay for stability.

Only the utility of the agent in question is supposed to be linear in this “money”, and that can always be achieved by a monotone transformation. This is quite different from suggesting there’s a resource *everyone* should be linear in under the same scaling.

The second thing is the Kelly criterion. The Kelly criterion exists because money can compound. This is also why *it* produces specifically a logarithmic structure. Kelly theory recommends you use the criterion regardless of the shape of your utility in consumption, if you expect many more games after this one—it is much more like a convergent instrumental goal. So this: is just wrong AFAICT. This is compatible from the side of utility maximization, but not from the side of Kelly as theory. Of course you can always construct a utility function that will *behave* in a specific way—this isn’t saying much.

Depends on how you define “dominate the market”. In most worlds, most (by headcount) of the bettors still around will be Kelly bettors. I even think that weighing by money, in most worlds Kelly bettors would outweigh expectation maximizers. But weighing by money across all worlds, the expectation maximizers win—by definition. The Kelly criterion “almost surely” beats any other strategy when played sufficiently long—but it only wins by some amount in the cases where it wins, and it’s infinitely behind in the infinitely unlikely case that it doesn’t win.

Kelly betting really is incompatible with expectation maximization. It deliberately takes a lower average. The conflict is essentially over two conflicting infinities: Kelly notes that for any sample size, if there’s a long enough duration, Kelly wins. And maximization notes that for any duration, if there’s a big enough sample size, maximization wins.

A lot of what you say here goes into monetary economics, and you should ask someone in the field or at least read up on it before relying on any of this. Probably you shouldn’t rely on it even then, if at all avoidable.

I agree that (1) I’m just constructing a utility function that results in the Kelly behavior, and (2) there’s still a conceptual incompatibility between the classic argument for Kelly and EV theory. But I still think it’s important to point out that the behavioral recommendations of Kelly do not violate the VNM axioms in any way, so the incompatibility is not as great as it may seem. This is important because it would be nice to reconcile the two philosophies, forging a new philosophy which is more robust than either.

Right.

And yet it doesn’t violate VNM, which means the classic argument for maximizing expected utility goes through. How can this paradox be resolved? By noting that utility is just whatever quantity expectation maximization *does* go through for, “by definition” (as I said in the post).

Right, agreed.

I’m curious if you’re taking a side, here, wrt which limit one should take.

To the extent that it’s Kelly vs VNM, Kelly seems more practical (applying to real betting), while VNM provides a much more general theory of decision making (since money (or another compounding good) does not need to be present in order for VNM to be relevant).

I think the interesting question is what to do when you expect many more, but only finitely many, rounds. It seems like Kelly should somehow gradually transition, until it recommends normal utility maximization in the case of only a single round happening ever. Log utility doesn’t do this. I’m not sure I have anything that does though, so maybe it’s unfair to ask it from you, but still it seems like a core part of the idea, that the Kelly strategy comes from the *compounding*, is lost.

This is the sort of argument you want to be very suspicious of if you’re confused, as I suspect we are. For example, you can now just apply all the arguments that made Kelly seem compelling again, but this time with respect to the new, logarithmic utility function. Do they actually seem less compelling now? A little bit, yes, because I think we really are sublinear in money, and the intuitions related to that went away. But no matter what the utility function, we can always construct bets that are compounding *in utility*, and then bettors which are Kelly with respect to that utility function will come to dominate the market. So if you do this reverse-inference of utility, the utility function of Kelly bettors will seem to change based on the bets offered.

Not really, I think we’re too confused to say yet. I do think I understand decisions with bounded utility (all the classical foundations imply bounded utilities, including VNM—this doesn’t seem to be well known here). Bounded utility makes maximization a lot more Kelly: it means that the maximizers can no longer have the arbitrarily high pay-offs that are needed to balance the near-certainty of elimination. I also think it should make it not matter which limit you take first, but I don’t think that leads to Kelly, either, because the betting structure that leads to Kelly assumes unbounded utility. Perhaps it would end up as a local approximation somewhere.

Now I also think that bounded decision theory is inadequate. I think a decision theory should be able to implement a paperclip maximizer, and it should work in worlds that last infinitely long. But I don’t have something that fulfills that. I think there’s a good chance the solution doesn’t look like utility at all: a theorem that needs its problem to be finite probably won’t do well in embedded problems.

Ah, I see, interesting.

Yeah, I agree with this.

Yeah. I’m generally OK with dropping continuity-type axioms, though, in which case you can have hyperreal/surreal utility to deal with expectations which would otherwise be problematic (the divergent sums which unbounded utility allows). So while I agree that boundedness should be thought of as part of the classical notion of real-valued utility, this doesn’t seem like a huge deal to me.

OTOH, logical uncertainty / radical probabilism introduce new reasons to require boundedness for expectations. What is the expectation of the self-referential quantity “one greater than your expectation for this value”? This seems problematic even with hyperreals/surreals. And we could embed such a quantity into a decision problem.

Have you worked this out somewhere? I’d be interested to see it, but I think there are some divergences it can’t address. There is for one the Pasadena paradox, which is also a divergent sum, but one which doesn’t stably lead anywhere, not even to infinity. The second is an apparently circular dominance relation: imagine you are linear in monetary consumption. You start with 1$ which you can either spend or leave in the bank, which doubles it every year even after accounting for your time preference/uncertainty/other finite discounting. Now for every n, leaving it in the bank for n+1 years dominates leaving it for n years, but leaving it in the bank forever gets 0 utility. Note that if we replace money with energy here, this could actually happen in universes not too different from ours.

What is the expectation of the self-referential quantity “one greater than your expectation for this value, *except when that would go over the maximum, in which case it’s one lower than expectation instead*”? Insofar as there is an answer, it would have to be “one less than maximum”, but that would seem to require uncertainty about what your expectations are.

It’s a bit of a mess due to some formatting changes porting to LW 2.0, but here it is.

I’ve gotten the impression over the years that there are a lot of different ways to arrive at the same conclusion, although I unfortunately don’t have all my sources lined up in one place.

I think if you just drop continuity from VNM you get this kind of picture, because the VNM continuity assumption corresponds to the Archimedian assumption for the reals.

I think there’s a variant of Cox’s theorem which similarly yields hyperreal/surreal probabilities (infinitesimals, not infinities, in that case).

If you want to condition on probability zero events, you might do so by rejecting the ratio formula for conditional probabilities, and instead giving a basic axiomatization of conditional probability in its own right. It turns out that, at least under one such axiom system, this is equivalent to allowing infinitesimal probability and keeping the ratio definition of conditional probability.

(Sorry for not having the sources at the ready.)

Here’s how it works. I have to assign expectations to gambles. I have some consistency requirements in how I do this; for example, if you modify a gamble g by making a probability p outcome have v less value, then I must think the new gamble g′ is worth p⋅v less. However, how I assign value to divergent sums is subjective—it cannot be determined precisely from how I assign value to each of the elements of the sum, because I’m not trying to assume anything like countable additivity.

In a case like the St Petersburg Lottery, I believe I’m required to have some infinite expectation. But it’s up to me what it is, since there’s no one way to assign expectations in infinite hyperreal/surreal sums.

In a case like the Pasadena paradox, though, I’m thinking I’ll be subjectively allowed to assign any expectation whatsoever—so long as all my other infinite-sum expectations are consistent with the assignment.

Perhaps you can try to problematize this example for me given what I’ve written above—not sure if I’ve already addressed your essential worry here or not.

Yes, uncertainty about your own expectations is where this takes us. But that seems quite reasonable, especially because we only need a very small amount of uncertainty, as is illustrated in this example.

This implies that you believe in the existence of countably infinite bets but not countably infinite dutch booking processes. That seems like a strange/unphysical position to be in—if that were the best treatment of infinity possible, I think infinity is better abandoned. I’m not even sure the framework in your linked post can really be said to contain infinite bets: the only way a bet ever gets evaluated is in a bookie strategy, and no single bookie strategy can be guaranteed to fully evaluate an infinite bet. Is there a single bookie strategy that differentiates the St. Petersburg bet from any finite bet? Because if no, then the agent at least can’t distinguish them, which is very close to not existing at all here.

Why? I haven’t found any finite dutch books against not doing so.

I don’t think you have. That example doesn’t involve any uncertainty or infinite sums. The problem is that for any finite n, waiting n+1 is better than waiting n, but waiting indefinitely is worse than any. Formally, the problem is that I have a complete and transitive preference between actions, but no unique best action, just a series that keeps getting better.

Note that you talk about something related in your linked post:

But the proof for that reduction only goes one way: for any preference relation on sets, there’s a binary one. My problem is that the inverse does not hold.

I haven’t seen the theorem, so correct me if I’m wrong, but I’d guess it says that for any *fixed number of bettors*, there exists a time at which the Kelly bettors dominate the market with arbitrary probability. (Alternate phrasing: a market with a finite number of bettors would be dominated by Kelly bettors over infinite time.) But if we flip it around, we can also say that for any *fixed time-horizon*, there exists a number of bettors such that the EV-maximizers dominate the market throughout that time with arbitrary probability. (Alternate phrasing: a market with an infinite number of bettors would be dominated by EV-maximizers for any finite time.)

I don’t see why we should necessarily prefer the first ordering of the quantifiers over the second.

The number of bettors isn’t the relevant parameter here. The relevant parameter is what fraction of the bettors are Kelly vs EV. However you set it up, the fraction of money in the hands of EV bettors will decrease over long time periods with high probability. If we have some fixed time-horizon, as long as that time horizon is fairly long, EV-maximizers will only dominate the market throughout that time with high probability if the market is essentially all EV-maximizers at the beginning.

An analogy: if one species has higher reproductive fitness than another, will that species eventually dominate? The math for Kelly betting is identical to the usual setup for natural selection models.

The point with having a large number of bettors is to assume that they all get independent sources of randomness, so at least some will win all their bets. Handwavy math follows:

Assume that we have n EV bettors and n Kelly bettors (each starting with $1), and that they’re presented with a string of bets with 0.75 probability of doubling any money they risk. The EV bettors will bet everything at each time-step, while the Kelly bettors will bet half at each time-step. For any timestep t, there will be an n such that approximately a 0.75^t fraction of EV bettors have won all their bets (by the law of large numbers), for a total earning of 0.75^t · 2^t · n = 1.5^t · n. Meanwhile, each Kelly bettor will in expectation multiply their earnings by 1.25 each time-step, and so in expectation have 1.25^t after t timesteps. By the law of large numbers, for a sufficiently large n they will in aggregate have approximately 1.25^t · n. Since 1.5^t · n > 1.25^t · n, the EV-maximizers will have more money, and we can get an arbitrarily high probability with an arbitrarily large n.
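A sketch of this arithmetic, computed directly (t and n here are arbitrary placeholders, not values from the comment):

```python
p = 0.75          # chance each risked dollar doubles
t = 20            # number of rounds
n = 1_000_000     # bettors per strategy, each starting with $1

# EV bettors stake everything: a survivor holds 2^t, and a fraction p^t survive.
ev_aggregate = (p ** t) * (2 ** t) * n       # = 1.5^t * n
# Kelly bettors stake half (the Kelly fraction for this bet): the per-round
# expected factor is 0.75 * 1.5 + 0.25 * 0.5 = 1.25.
kelly_aggregate = (1.25 ** t) * n
```

In aggregate expectation the EV bettors stay ahead (1.5^t versus 1.25^t), even though almost every individual EV bettor is broke by round t.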

Ah, I see. The usual derivation of the Kelly criterion explicitly assumes that there is a specific sequence of events on which people are betting (e.g. stock market movements or horse-race outcomes); the players do not get to all bet separately on independent sources of randomness. If they could do that, then it would change the setup completely—it opens the door to agents making profits by trading with each other (in order to diversify their portfolios via risk-trades with other agents). Generally speaking, with idealized agents in economic equilibrium, they should all trade risk until they all effectively have access to the same randomness sources.

Another way to think about it: compare the performance of counterfactual Kelly and EV agents on the same opportunities. In other words, suppose I look at my historical stock picks and ask how I would have performed had I been a Kelly bettor or an EV bettor. With probability approaching 1 over time, Kelly betting will seem like a better idea than EV betting in hindsight.
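That hindsight comparison can be simulated by replaying one shared sequence of flips for both strategies (a sketch with arbitrary parameters; the EV bettor goes all-in, the Kelly bettor stakes the Kelly fraction of 1/2):

```python
import random

def kelly_wins_fraction(trials=1000, t=20, p=0.75):
    # Fraction of sample paths on which the Kelly bettor ends ahead of the
    # all-in EV bettor, with both betting on the SAME coin flips.
    random.seed(0)  # fixed seed for reproducibility
    ahead = 0
    for _ in range(trials):
        kelly, ev = 1.0, 1.0
        for _ in range(t):
            win = random.random() < p
            kelly *= 1.5 if win else 0.5   # stake 1/2: a win doubles the stake
            ev *= 2.0 if win else 0.0      # all-in: one loss means ruin
        ahead += kelly > ev
    return ahead / trials
```

The all-in bettor only ends ahead on the roughly 0.75^20 ≈ 0.3% of paths with no losses at all, so Kelly looks better in hindsight on almost every path.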

Thanks, that way to derive it makes sense! The point about free trade also seems right. With free trade, EV bettors will buy all risk from Kelly bettors until the former is gone with high probability.

So my point only applies to bettors that can’t trade. Basically, in almost every market, the majority of resources are controlled by Kelly bettors; but across all markets in the multiverse, the majority of resources are controlled by EV bettors, because they make bets such that they dominate the markets which contain most of the multiverse’s resources.

(Or if there’s no sufficiently large multiverse, Kelly bettors will dominate with arbitrary probability; but EV bettors will (tautologically) still get the most expected money.)

I can see how mathematicians would dislike an entity that lacks absolute guarantees, but it seems like a quite normal attribute to encounter in the real world.

That’s mostly accurate, but it leaves out an important step in the causal chain: the “too little money” meant that the wages which workers were accustomed to getting became too high. For reasons that are likely related to bargaining strategies, workers wouldn’t accept (or sometimes weren’t allowed to accept) wages that gave them fewer dollars, even when those fewer dollars bought them more goods than they were accustomed to.

In other words, there’s a path for the value of money to re-adjust, but there’s enough opposition to it that most economists have given up on it.

I’m unclear what “facilitate” is doing here. “Negative bank accounts” is one way to describe a solution, but deflation meant that pretty much everyone preferred a positive bank account to “borrow and invest”.

Central banks know how to manufacture money. The main problems are figuring out the right amounts, and ensuring that central banks create those amounts.

I think the main point here is that debt has a different quality than ‘normal money’. Debt doesn’t exist in M0 and only exists in forms of money other than M0. Going from M0 to M1 and M2 is the hack that allows for negative money.

Can you say more about this? Stuart’s arguments weren’t that convincing to me, absent other assumptions. In particular, it seems like the existence of a contract that exactly cancels out your own contract could increase the value of your own contract; and that there’s no guarantee that such a contract exists (or can be made without exposing anyone else to risk that they don’t want). Stuart seems to acknowledge this in other parts of the comments, instead referring to the possibility of aggregation.

From this, I’m guessing that you need to assume that the risk is almost independent of the total market value (e.g. because it’s small in comparison with the total market value, and independent of all other sources of risk), and there exists an arbitrarily large number of traders whose utility is linear in small amounts of money (that you can spread out the risk between). Are these the necessary assumptions to establish linearity of utility in money?
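A toy calculation consistent with that guess (all numbers hypothetical): for a log-utility agent with wealth W, the risk premium demanded for a small fair risk of size σ is roughly σ²/(2W), so splitting a fixed risk among n such agents shrinks the total premium like 1/n, recovering approximate linearity:

```python
import math

def certainty_equivalent(wealth, stake, p=0.5):
    """Certainty equivalent of a fair ±stake gamble for a log-utility
    agent: the sure wealth with the same expected utility."""
    eu = p * math.log(wealth + stake) + (1 - p) * math.log(wealth - stake)
    return math.exp(eu)

W, S = 100.0, 10.0   # hypothetical wealth per agent and total risk
for n in (1, 10, 100, 1000):
    share = S / n                     # each of n agents takes 1/n of the risk
    premium_each = W - certainty_equivalent(W, share)
    total_premium = n * premium_each  # total compensation the agents demand
    print(f"n={n:5d}  total risk premium ≈ {total_premium:.4f}"
          f"  (approx σ²/(2Wn) = {S * S / (2 * W * n):.4f})")
```

So with enough such agents to spread the risk across, the market prices the risk at nearly its expected value, which is the linearity being assumed.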

I basically think you’re right: those arguments are weak. But this post was about me reasoning out some of the details for myself.

You make a good point about independent risk. I had only half-noticed that point when thinking about this.

I think this (especially the second part) is missing a fundamental aspect of … well, not just money, but decision-making. It’s about expectations and projections into the future, not about the current definition or valuation.

Debt is no more a hack than is money itself. Neither actually exist, they are simply contingent future values. Zero is not special in this world.

Here is Ole Peters: [Puzzle] “Voluntary insurance contracts constitute a puzzle because they increase the expectation value of one party’s wealth, whereas both parties must sign for such contracts to exist.” [Answer] “Time averages and expectation values differ because wealth changes are non-ergodic.”

Peters again: “Conceptually, its power derives from a new notion of rationality. Many reasonable models of wealth are non-stationary processes. Observables representing wealth then do not have the ergodic property of Section I, and therefore rationality must not be defined as maximizing expectation values of wealth. Rather, we propose as a null model to define rationality as maximizing the time-average growth of wealth.”
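A minimal sketch of the distinction Peters draws, using the even-money bet from earlier in the thread (f is the fraction of wealth staked; parameters are illustrative). The expectation value of wealth is maximized at f = 1, while the time-average growth rate is maximized at the Kelly fraction f = 2p − 1 = 0.5 and diverges to −∞ at f = 1:

```python
import math

p = 0.75  # win probability of an even-money bet

def expected_multiplier(f):
    """One-step expected wealth multiplier: increasing in f."""
    return p * (1 + f) + (1 - p) * (1 - f)

def time_avg_growth(f):
    """Time-average (log) growth rate; -inf once a loss can wipe you out."""
    if f >= 1:
        return -math.inf
    return p * math.log(1 + f) + (1 - p) * math.log(1 - f)

for f in (0.25, 0.5, 0.75, 1.0):
    print(f"f={f:.2f}  E-multiplier={expected_multiplier(f):.3f}"
          f"  time-avg growth={time_avg_growth(f):+.4f}")
```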

You write: “Kelly betting, on the other hand, assumes a finite bankroll—and indeed, might have to be abandoned or adjusted to handle negative money.” [Negative interest rate?] Can you explain more? I would love to fit this conceptually into Peters’s non-ergodic growth-rate theory.

I really like this. I read part 1 as being about the way the economy or society implicitly imposes additional pressures on individuals’ utility functions. Can you provide a reference for the theorem that Kelly bettors predominate?

ETA: an observation: the arguments for expected value also assume infinite value is possible, which (modulo infinite-ethics-style concerns, a significant caveat...) also isn’t realistic.