Sorry, can you explain why it also applies to investing? For reference, here’s an expanded version of the argument.
Say you have decided to donate $500 to charity A and $500 to charity B. Then you learn that someone else has decided to reallocate their $500 from charity A to charity B. If you’re a consequentialist and have preferences over the total donations to each charity, rather than the warm fuzzies you get from splitting your own donations 50/50, you will reallocate $500 to charity A. Note that the conclusion doesn’t depend on your risk aversion, only on the fact that you considered your original decision optimal before you learned the new info. That means your original decision for the 50/50 split relied on an implausible coincidence and was very sensitive to other people’s reallocations in both directions, so in most cases you should allocate all your money to one charity, as long as your donations aren’t too large compared to the donations of other people.
Sorry, I must be misunderstanding the argument. Why would you shift your donations from B to A if someone else donates to B?

Let’s say for simplicity that there’s only one other guy and he splits his donations $500/$500. If you prefer to donate $500/$500 rather than say $0/$1000, that means you like world #1, where charity A and charity B each get $1000, more than you like world #2, where charity A gets $500 and charity B gets $1500. Now let’s say the other guy reallocates to $0/$1000. If you stay at $500/$500, the end result is world #2. If you reallocate to $1000/$0, the end result is world #1. Since you prefer world #1 to world #2, you should prefer reallocating to staying. Or am I missing something?
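For concreteness, the best-response logic above can be sketched in a few lines of Python. The square-root utility over total donations is a hypothetical stand-in for any preference that favours balanced totals; the dollar amounts and the $100 grid are likewise illustrative assumptions, not part of the original argument.

```python
import math

def best_response(other_a, other_b, budget=1000, step=100):
    """Optimal split of my `budget`, given preferences over TOTAL donations.

    sqrt(total_A) + sqrt(total_B) is a hypothetical concave utility,
    standing in for any preference that favours balanced totals.
    """
    return max(
        range(0, budget + 1, step),
        key=lambda my_a: math.sqrt(other_a + my_a)
        + math.sqrt(other_b + budget - my_a),
    )

# The other guy splits 500/500: my best response is also a 500/500 split.
print(best_response(500, 500))   # -> 500 (to charity A)

# He reallocates to 0/1000: my best response flips to all-in on A,
# restoring the totals of world #1.
print(best_response(0, 1000))    # -> 1000 (all to charity A)
```

Any strictly concave utility over totals gives the same flip; the point is only that the preference is over totals, not over my own split.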
OK, so “preferences over the total amounts of money donated to each charity” mean that you ignore any information you can glean from knowing that “the other guy reallocates to $0/$1000”, right? Like betting against the market by periodically re-balancing your portfolio mix? Or donating to a less-successful political party when the balance of power shifts away from your liking? If so, how does it imply that “your participation in politics should be 100% extremist”?
you ignore any information you can glean from knowing that “the other guy reallocates to $0/$1000”
Good point, but if your utility function over possible worlds is allowed to depend on the total sums donated to each charity and additionally on some aggregate information about other people’s decisions (“the market” or “balance of power”), I think the argument still goes through, as long as the number of people is large enough that your aggregate information can’t be perceptibly influenced by a single person’s decision.
I think the argument still goes through, as long as the number of people is large enough
This sounds suspiciously like trying to defend your existing position in the face of a new argument, rather than an honest attempt at evaluating the new evidence from scratch. And we haven’t gotten to your conclusions about politics yet.

The original argument also relied on the number of people being large enough, I think.
The key difference is risk aversion. People are (quite rightly in my opinion) very risk-averse with their own money, almost nobody would be happy to trade all their possessions for a 51% shot at twice as much, mostly because doubling your possessions doesn’t improve your life as much as losing them worsens it.
On the other hand, with altruistic causes, helping two people really does do exactly twice as much good as helping one, so there is no reason to be risk averse, and you should put all your resources on the bet with the highest expected pay-off, regardless of the possibility that it might all amount to nothing if you don’t get lucky.
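A quick numeric check of the 51% bet from the previous paragraph. The square-root curve is just one hypothetical way to model diminishing returns, not something from the original comment:

```python
import math

wealth = 100_000   # illustrative possessions
p_win = 0.51       # 51% shot at twice as much, else nothing

# Linear utility (the altruist's case): expected value says take the bet.
ev_bet = p_win * 2 * wealth               # 102,000 > 100,000

# Diminishing returns (hypothetical sqrt utility): decline the bet,
# because losing everything hurts more than doubling helps.
eu_bet = p_win * math.sqrt(2 * wealth)    # ~228.1
eu_keep = math.sqrt(wealth)               # ~316.2

print(ev_bet > wealth)    # True  -> risk-neutral agent accepts
print(eu_bet < eu_keep)   # True  -> risk-averse agent declines
```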
Right, and there is risk in everything. A charity might fold, or end up consisting of crooks, or its cause might actually be harmful, or the estimate of buying malaria nets being more useful than supporting SI might turn out to be wrong. Hence the diversification.
Politics is even worse: you can never be sure which policy is better, or whether, when carried to its extreme, it turns out to be harmful.
This is where cousin_it’s naive argument for radicalization falls flat.
This doesn’t matter. Whether you should be risk-averse doesn’t depend on how much risk there is; it depends on whether your pay-offs suffer diminishing returns. The two are mathematically equivalent (if your pay-offs have accelerating returns, you should be risk-seeking).
I think you don’t understand risk aversion. Consider a simple toy problem: investment A has a 90% chance of doubling your money and a 10% chance of losing all of it; investment B has a 90% chance of multiplying your money by one and a half and a 10% chance of losing all of it. Suppose you have $100,000, enough for a comfortable lifestyle. If you invest it all in A, you have a 90% chance of a much more comfortable lifestyle, but a 10% chance of being out on the street, which is pretty bad. Investing equal amounts in both reduces your average wealth from $180,000 to $157,500, but increases your chance of having enough money to live on from 90% to 99%, which is more important.
If they are instead charities, and we substitute one life saved for every $1,000 of return, then diversifying just reduces the number of people you save. It also increases your chance of saving at least someone, but that doesn’t really matter compared to saving more people in the average case.
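The figures in the last two paragraphs check out, assuming the two investments fail independently:

```python
# Each option: 90% success, 10% total loss; failures are independent.
# A doubles the stake on success; B multiplies it by one and a half.

# All-in on A with $100,000:
exp_all = 0.9 * 2 * 100_000                        # expected wealth: $180,000
p_ruined_all = 0.10                                # 10% chance of nothing

# $50,000 in each:
exp_split = 0.9 * 2 * 50_000 + 0.9 * 1.5 * 50_000  # expected wealth: $157,500
p_ruined_split = 0.1 * 0.1                         # both fail: 1% chance

print(exp_all, exp_split)                    # 180000.0 157500.0
print(1 - p_ruined_all, 1 - p_ruined_split)  # 0.9 0.99

# Charity version at $1,000 per life: 180 vs 157.5 expected lives saved,
# so the risk-neutral altruist goes all-in on A.
print(exp_all / 1_000, exp_split / 1_000)    # 180.0 157.5
```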
Look at it this way: in personal wealth, the difference between some money and no money is huge, while the difference between some money and twice as much money is vastly less significant. In charity, the difference between some lives saved and twice as many lives saved is exactly as significant as the difference between some lives saved and no lives saved.
I’m not explaining this very well, because I’m a crap explainer, so here’s the relevant Wikipedia page.
In charity, the difference between some lives saved and twice as many lives saved is exactly as significant as the difference between some lives saved and no lives saved.
Note that even those wealthy guys who are not in danger of living on the street diversify, lest they lose a large chunk of their investment. Similarly, if you assign a large disutility to non-optimal charity, your utility losses from a failed one will not be in any way compensated by your other charities performing well. Again, in politics, which is the real question (charity is just an unfortunate analogy), the stakes are even higher, so picking an extreme position is even less justified.
Again, in politics, which is the real question (charity is just an unfortunate analogy), the stakes are even higher, so picking an extreme position is even less justified.
I’m not talking about the politics case; there are other problems with cousin_it’s argument. I’m arguing with your ‘refutation’ of the non-diversifying principle.
Note that even those wealthy guys who are not in danger of living on the street diversify, lest they lose a large chunk of their investment.
They may not be in danger of homelessness, but there are still diminishing returns. The difference between $1m and $2m is more important than the difference between $2m and $3m. Notice the operative word ‘large’ in your sentence. If those guys were just betting amounts on the scale of $10, sufficiently small that the curve becomes basically linear, then they wouldn’t diversify (if they were smart).
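To illustrate (with a hypothetical square-root utility and made-up figures): the certainty-equivalent cost of a fair 50/50 bet is negligible when the stake is tiny relative to wealth, and large when it isn’t.

```python
import math

wealth = 1_000_000   # a hypothetical rich investor

def risk_penalty(stake):
    """Dollars you'd pay to avoid a fair 50/50 win/lose-`stake` bet,
    under sqrt utility (one illustrative diminishing-returns curve)."""
    eu = 0.5 * math.sqrt(wealth + stake) + 0.5 * math.sqrt(wealth - stake)
    return wealth - eu ** 2   # wealth minus the bet's certainty equivalent

print(risk_penalty(10))        # ~$0.000025: the curve is locally linear
print(risk_penalty(500_000))   # ~$67,000: curvature bites at this scale
```

Same curve in both cases; only the stake changes, which is why curvature that dominates an investor’s whole portfolio is irrelevant to a $10 bet, or to one donor’s share of a large charity’s budget.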
The situation with charity is somewhat similar: your donation is as small on the scale of the whole problem being fixed, and of the whole amount being donated, as $10 is for a rich investment banker. The diminishing returns that do exist have no perceptible effect on the scale of an individual’s donation.
Politics, if you insist on talking about it, is the same: your personal influence is far too small to have any effect on the marginal utilities.
Similarly, if you assign a large disutility to non-optimal charity, your utility losses from a failed one will not be in any way compensated by your other charities performing well.
Yes, if you donate to make yourself feel good (as opposed to helping people) and having all your money go to waste makes you feel exceptionally bad, then you should diversify. If you donate to help people, then you shouldn’t assign an exceptionally large disutility to a non-optimal charity; you should assign utility precisely proportional to the number of lives you save.
If this argument were universal, it would be rational to invest in a single stock, and the saying about putting all your eggs in one basket would not exist.