I read recently an article on charitable giving which mentioned how people split up their money among many different charities to, as they put it, “maximize the effect”, even though someone with this goal should donate everything to the single highest-utility charity. And this seems a bit like the example you cited where, if blue cards came up randomly 75% of the time and red cards came up 25% of the time, people would bet on blue 75% of the time even though the optimal strategy is blue 100%. All this seems to come from concepts like “Don’t put all your eggs in one basket”, which is a good general rule for things like investing but can easily break down.
I find myself having to fight this rule for a lot of things, and one of them is beliefs. If all of my opinions are Eliezer-ish, I feel like I’m “putting all my eggs in one basket”, and I need to “diversify”. You use book recommendations as a reductio, but I remember reading about half the books on your recommended reading list, thinking “Does reading everything off of one guy’s reading list make me a follower?” and then thinking “Eh, as soon as he stops recommending such good books, I’ll stop reading them.”
The other thing is the Outside View summed up by the proverb “If two people think alike, one of them isn’t thinking.” In the majority of cases I observe where a person conforms to all of the beliefs held by a charismatic leader of a cohesive in-group, and keeps praising that leader’s incredible insight, that person is a sheeple and that leader has a cult (see: religion, Objectivism, various political movements). I respect the Outside View enough that I have trouble replacing it with the Inside View that although I agree with Eliezer about nearly everything and am willing to say arbitrarily good things about him, I’m certainly not a cultist because I’m coming to my opinions based on Independent Logic and Reason. I don’t know any way of solving this problem except the hard way.
“note: Hofstadter does not have a cult”
I tried to start a Hofstadter cult once. The first commandment was “Thou shalt follow the first commandment.” The second commandment was “Thou shalt follow only those even-numbered commandments that do not exhort thee to follow themselves.” I forget the other eight. Needless to say it didn’t catch on.
I find myself having to fight this rule for a lot of things, and one of them is beliefs. If all of my opinions are Eliezer-ish, I feel like I’m “putting all my eggs in one basket”, and I need to “diversify”
You use book recommendations as a reductio, but I remember reading about half the books on your recommended reading list, thinking “Does reading everything off of one guy’s reading list make me a follower?”
I think that of all the people who have ever recommended books to me, Eliezer has the most recommendations which I’ve actually followed. In most of my social circles, I’m the “smart one”, but I’m nowhere near as smart as Eliezer (or most other people on LessWrong, it seems). So I do admire EY a lot. I want to be as smart as he is, and so I try reading all the books he has read.
And it kills me, because I also remember his post about novice editors copying the surface behavior of master editors, without integrating the deep insight, and I know that by reading the same science fiction novels EY has read, I’m committing exactly the same sin. But I don’t know what else I can do to try to improve myself.
how people split up their money among many different charities to, as they put it, “maximize the effect”, even though someone with this goal should donate everything to the single highest-utility charity.
If I have complete or near-complete trust in the information available to me about the charity’s utility, as well as its short-term sustainability, that seems like the right decision to make.
But if I don’t—if I’m inclined to treat data on overhead and estimates of utility as very noisy sources of data, out of skepticism or experience—is it irrational to prefer several baskets?
Similarly with knowledge and following reading lists, ideologies and the like.
The expected number of eggs lost is least if you choose the best basket and put all your eggs in it, but because of diminishing returns, you’re better off sacrificing a few eggs to reduce the variance. However, your charitable donations are such a drop in the ocean that the utility curve is locally pretty much flat, so you just optimise for maximum expected gain.
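The tradeoff in the first sentence can be sketched numerically. This is a toy simulation with made-up drop probabilities, where utility is taken as the square root of surviving eggs purely as a stand-in for diminishing returns:

```python
import random

random.seed(0)
# Hypothetical numbers for illustration: each basket is independently
# "dropped" (all eggs in it lost) with some probability.
P_DROP_BEST, P_DROP_OTHER = 0.1, 0.2   # the best basket is the safer one
EGGS = 100

def simulate(best_share, trials=100_000):
    """Mean surviving eggs and mean utility when best_share eggs go in the
    best basket and the rest in the other. Utility is sqrt(eggs): a
    stand-in for diminishing returns."""
    other_share = EGGS - best_share
    total_eggs = total_util = 0.0
    for _ in range(trials):
        surviving = 0
        if random.random() > P_DROP_BEST:
            surviving += best_share
        if random.random() > P_DROP_OTHER:
            surviving += other_share
        total_eggs += surviving
        total_util += surviving ** 0.5
    return total_eggs / trials, total_util / trials

all_in = simulate(100)   # every egg in the best basket
split = simulate(70)     # 70/30 split
# The all-in strategy saves more eggs on average (~90 vs ~87), but the
# split gives higher mean *utility*, because it trims the chance of
# losing everything at once.
print(all_in, split)
```

Replace the `** 0.5` with a linear utility and the all-in strategy wins outright, which is the “locally flat utility curve” case described above.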
It follows from the assumption that you’re not Bill Gates, don’t have enough money to actually shift the marginal expected utilities of the charitable investment, and that charities themselves do not operate in an efficient market for expected utilons, so that the two top charities do not already have marginal expected utilities in perfect balance.
I don’t see how either of these affect this result—unless you’re saying it’s easier to visualise one person with clean water and another with a malaria net than it is two people with clean water?
Consider scope insensitivity. The amount of “warm fuzzies” one gets from helping X individuals with a given problem does not scale even remotely linearly with X. Different actions to help with distinct problems, however, sum in a fashion much closer to linear (at least up to some point).
Ergo, “one person with clean water and another with a malaria net” feels intuitively like you’re doing more than “two people with clean water”.
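A toy model makes the intuition concrete. The logarithm here is an arbitrary stand-in for any strongly sublinear “warm fuzzies” curve, not a claim about the actual psychology:

```python
import math

# Toy "warm fuzzies" model (not a real psychological formula): fuzzies
# from helping X people with ONE problem grow only logarithmically in X,
# while fuzzies from helping with distinct problems simply add.
def fuzzies_one_problem(people_helped):
    return math.log1p(people_helped)

same_problem = fuzzies_one_problem(2)        # two people, clean water: log 3
two_problems = fuzzies_one_problem(1) * 2    # one water + one net: 2 * log 2

# The split across problems *feels* like more, though the real-world
# benefit (two people helped) is identical.
assert two_problems > same_problem
```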
I think it means: the sum of the feel-good points of giving one person clean water and another a malaria net will, for most people, be higher than the feel-good points of giving two people clean water.
I’d like to get right whatever it is I’m doing wrong here, so if anyone would like to comment on any problems they see with this or the parent comment (which are both scored 0) I’d be grateful for your input.
EDIT: since this was voted down, but I didn’t receive an explanation, I’m assuming it’s just an attack, and so I don’t need to modify what I do—thanks!
I suspect that the ability to visualize someone benefited by your action is often a proxy for being certain that your action actually helped someone, and that people often place additional value on that certainty. They might not be acting as perfectly rational economic agents in such cases, but I’m not sure I’d call such behavior irrational.
Not when baskets are sapient and trying to exploit you. Utilitarians seriously need more social strategic thinking under uncertainty and input subversion.
Robin is right, you are wrong. Robin is an economist explaining a trivial application of his field.
Robin is wrong (or actually, correct about inanimate baskets but not about agent baskets) and you are simply wrong.
When there is a possibility that your decision method is flawed in such a way that it can be exploited (at some expense), you have to diversify or introduce randomness to minimize the payoff for developing an exploit against your decision method, thus reducing the exploitation. Basic game theory. Commonly applied in e.g. software security.
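Here is a minimal sketch of that payoff argument, with entirely hypothetical numbers for the donor pool and for the cost of gaming the rankings:

```python
# Toy model with hypothetical numbers: a pool of donors gives DONATION in
# total, and an exploiter can pay COST to craft a fake charity that
# reliably lands in the donors' perceived top ranks.
DONATION, COST = 1000.0, 150.0

def exploiter_profit(top_k):
    """Expected profit if donors split evenly over their top_k picks and
    the fake charity occupies exactly one of those k slots."""
    captured = DONATION / top_k
    return captured - COST

# Everyone donates to their single 'top' pick: the exploit captures it all.
print(exploiter_profit(1))    # 850.0 -> very profitable to develop
# Everyone diversifies over 5: the same exploit captures only a fifth.
print(exploiter_profit(5))    # 50.0  -> barely worth developing
# Over 10, the exploit runs at a loss and (in this model) never gets built.
print(exploiter_profit(10))   # -50.0
```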
No, you are still failing to comprehend this point (which applies here too).
I comprehend that point. I also comprehend other issues:
Evaluation of the top charity is incredibly inaccurate (low probability of correctness), and taking that into account, the difference in expected payoff between the good charities should be quite small.
Meanwhile, if there exists a population sharing a flaw in the charity evaluation method (the flaw that you have), the payoff for finding a method of exploiting this particular flaw is inversely proportional to how much they diversify.
Robin is applying said game theory correctly. You are not. More precisely, Robin applied the game theory correctly 3 years ago.
Geez, a shouting match. Once again: you’re wrong, and from what I know, you may well be on something that you think boosts your sanity, but it really doesn’t.
Oh, that explains a lot. While the two accounts had displayed similar behavioral red-flags and been relegated to the same reference class I hadn’t made the connection.
Well, I thought that giving this feedback could help. I’m about as liberal as it gets when it comes to drug use, but it must be recognized that there are considerable side effects to what he may be taking. You are studying the effects, right? You should take into account that I called you and him (out of all the people) pathological before ever knowing that any of you did this experimentation; this ought to serve as some form of evidence of side effects that are visible from outside.
And none of them so far bear on game theoretic minimaxing vs expected value maximizing.
You should take into account that I called you and him (out of all the people) pathological before ever knowing that any of you did this experimentation
You insult everyone here. Don’t go claiming this represents special insight on your part, even if one were to grant the other claims!
If you’re so confident you’re right, prove it rigorously (with, like, math). Otherwise, I’ll side with the domain expert over the guy claiming his interlocutor is on drugs any day of the week.
The exploit-payoff calculation is trivially simple: if everyone with a flaw diversifies between 5 charities, then the payoff for determining and utilizing an exploit is 1/5 of the payoff when everyone pays to the ‘top’ one. Of course there are some things that can go wrong with this; for instance it may be easier to exploit to the extent sufficient to get into the top 5, which is why it is hard to do applied mathematics on this kind of topic: not a lot of data.
What I believe would happen if people adopted the ‘choose the top charity, donate everything to it’ strategy: since people are pretty bad at determining top charities, do so using various proxies of performance, and have systematic errors in their evaluations, most people would just end up donating to some sort of super-stimulus of caring, with which no one with truly the best intentions can compete (or to compete with which a lot of effort has to be expended on imitating the superstimulus).
I once made a turret in a game that would shoot precisely where it expected you to be. Unfortunately, you could easily outsmart the turret’s model of where you would be. Adding random noise to the bullet velocity dramatically increased the lethality of the turret, even though under the turret’s model of your behaviour it was no longer shooting at the point with the highest expected damage. It is very common to add noise or a fuzzy spread to eliminate the undesirable effects of predictable systematic error. I believe that one should diversify among several of the subjectively ‘best’ charities, within a range from the best comparable to the size of the systematic error in the process of determining the best charity.
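The turret point can be reproduced in a few lines. This is a stripped-down, discrete version (five cells, a target that simply steps out of the predicted cell), not the original game’s code:

```python
import random

random.seed(1)
POSITIONS = list(range(5))   # the target can sit in one of five cells

def hit_rate(noisy, trials=100_000):
    """Fraction of shots landing on a target that evades the predicted cell."""
    hits = 0
    for _ in range(trials):
        predicted = random.choice(POSITIONS)  # turret's model of the target
        # A smart target learns the turret's deterministic policy and
        # simply steps out of the predicted cell.
        target = random.choice([p for p in POSITIONS if p != predicted])
        aim = random.choice(POSITIONS) if noisy else predicted
        hits += (aim == target)
    return hits / trials

deterministic, randomised = hit_rate(noisy=False), hit_rate(noisy=True)
# Shooting exactly at the prediction never hits a target that can
# out-think the model; spreading the fire still connects about 1 time in 5.
print(deterministic, randomised)
```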
It follows from the assumption that you’re not Bill Gates, don’t have enough money to actually shift the marginal expected utilities of the charitable investment, and that charities themselves do not operate in an efficient market for expected utilons, so that the two top charities do not already have marginal expected utilities in perfect balance.
The assumption whose violation your argument relies on is that you don’t have enough money to shift the marginal expected utilities, when “you” are considered to be controlling the choices of all the donors who choose in a sufficiently similar way. Given the right assumptions about the initial marginal expected utilities and about how more money would change the marginal utilities and marginal expected utilities, I would agree that the possibility that this assumption is sometimes violated doesn’t look like an entirely frivolous objection to a naively construed strategy of “give everything to your top charity”.
(BTW, it’s not clear to me why mistrust in your ability to evaluate the utility of donations to different charities should end up balancing out to produce very close expected utilities. It would seem to have to involve something like Holden’s normal distribution for charity effectiveness, or something else that would make it so that whenever large utilities are involved, the corresponding probabilities will necessarily be requisitely small.)
It’s not about the marginal expected utilities of the charities so much as it is about the expected utilities of exploiting/manipulating whatever proxies you, and those like you, have used for producing the number you insist on calling ‘expected utility’.
Let’s first get the gun turret example sorted out, shall we? The gun is trying to hit some manoeuvrable spacecraft at considerable distance; it is shooting predictively. If you take the expected-damage function over the angles of the turret and shoot at its maximum, what will happen is that your expected-damage function suddenly acquires a dip at that point, because the target learns to evade being hit. Do you fully understand the logic behind randomizing the shots there? Behind not shooting at the maximum of whatever function you approximate the expected utility with? The optimal targeting strategy looks like shooting into the region of possible target positions with some sort of pattern. The best pattern may be some random distribution, or it may be some criss-cross pattern, or the like.
Note also that it has nothing to do with saturation; it works the same if there’s no ‘ship destroyed’ limit and you are trying to get the target maximally wet with a water hose.
The same situation arises in general when you cannot calculate expected utility properly. I have no objection to paying the charity with the highest expected utility. But you do not know the highest expected utility; you are practically unable to estimate it. The charity that looks best to you is not the one with the highest expected utility. What you think is expected utility relates to actual expected utility about as much as how strong a beam you think a bridge requires relates to the actual requirements set by the building code. Go read up on equilibrium strategies and such.
for instance it may be easier to exploit to the extent sufficient to get into the top 5
This seems sort of important.
Sure, if I have two algorithms A1 and A2, and A1 spits out a single charity, and A2 spits out an unsorted list of 5 charities, and A1 is easy for people to exploit but A2 is much more difficult for people to exploit, it’s entirely plausible that I’ll do better using A2, even if that means spreading my resources among five charities.
OTOH, if A2 is just as easy for people to exploit as A1, it’s not clear that this gets me any benefit at all. And if A2 is easier to exploit, it leaves me actively worse off.
Granted, if, as in your turret example, A2 is simply (A1 plus some random noise), A2 cannot be easier to game than A1. And, sure, if (as in your turret example) all I care about is that I’ve hit the best charity with some of my money, random diversification of the sort you recommend works well.
I suspect that some people donating to charities have different goals.
As expected, you ignored the assumption that “charities themselves do not operate in an efficient market for expected utilons, so that the two top charities do not already have marginal expected utilities in perfect balance.”
No, I am not. I am expecting that the mechanism you may use to determine expected utilities has a low probability of validity (a low external probability of the argument, if you wish), and thus you should end up assigning very close expected utilities to the top charities, simply due to the discounting for your method’s imprecision. It has nothing to do with some true frequentist expected utilities that charities have.
You’re essentially assuming that the variance of whatever prior you place on the utilities is very large in comparison to the differences between the expected utilities, which directly contradicts the assumption. Solve a different problem, get a different answer—how is that a surprise?
It has nothing to do with some true frequentist expected utilities that charities have.
Well at least you didn’t accuse me of rationalizing, being high on drugs, having a love affair with Hanson, etc...
You’re essentially assuming that the variance of whatever prior you place on the utilities is very large in comparison to the differences between the expected utilities, which directly contradicts the assumption. Solve a different problem, get a different answer—how is that a surprise?
What assumption? I am considering the real-world donation case: people are pretty bad at choosing top charities, meaning there is very poor correlation between people’s idea of the top charity and actual charity quality.
Well at least you didn’t accuse me of rationalizing, being high on drugs, having a love affair with Hanson, etc...
Well, I am not aware of a post by you where you say that you take drugs to improve sanity and describe the side effects of the drugs in detail reminiscent of the very behaviour you display. And if you were to make such a post, and if I were to read it, and if I were to see you exhibiting something matching the side effects you described, I would probably mention it.
To clarify a few points that may have been lost behind abstractions:
Suppose there is a sub-population of donors: people who do not understand physics very well, and do not understand how one could just claim that a device won’t work without a thorough analysis of its blueprint. Those people may be inclined to donate to a research charity working on magnetic free-energy devices, if such a charity exists: a high-payoff, low-probability scenario.
Suppose you have N such people willing to donate, on average, $M to cause or causes.
Two strategies are considered: donating to 1 subjectively best charity, or 5 subjectively top charities.
Under the strategy of donating to the 1 ‘best’ charity, the payoff for a magnetic perpetual-motion-device charity, if one is created, is 5 times larger than under the strategy of dividing between the top 5. There is five times the reward for exploiting this particular insecurity in the choice process; for sufficiently large M and N, single-charity donating will cross the threshold at which such a charity becomes economically viable, and some semi-cranks, semi-frauds will jump on it.
But what about the people donating to normal charities, like water and mosquito nets and the like? The differences between the top normal charities boil down to fairly inaccurate value judgements, about which most people do not feel particularly certain.
Ultimately, the issue is that the correlation of your selection of charity with the charity’s actual efficacy is affected by your choice. It is similar to the gun turret example.
There are two types of uncertainty here: the probabilistic uncertainty, from which expected utility can be straightforwardly evaluated, and the systematic bias, which is unknown to the agent but may be known to other agents (e.g. inferred from observations).
Evaluation of the top charity is incredibly inaccurate (low probability of correctness), and taking that into account, the difference in expected payoff between the good charities should be quite small. Meanwhile, if there exists a population sharing a flaw in the charity evaluation method (the flaw that you have), the payoff for finding a method of exploiting this particular flaw is inversely proportional to how much they diversify.
Doesn’t follow. If you have a bunch of charities with the same expected payoff, donating to any one of them has the same expected value as splitting your donation among all of them. If you have a charity with an even slightly higher expected payoff, you should donate all of your money to that one, since the expected value will be higher.
E.g.: Say that Charity A, Charity B...Charity J can create 10 utilons per dollar. Ergo, if you have $100, donating $100 to any of the ten charities will have an expected value of 1000 utilons. Donating $10 to each charity will also have an expected value of 1000 utilons. Now suppose Charity K comes on to the scene, with an expected payoff of 12 utilons per dollar. Donating your $100 to Charity K is the optimal choice, as the expected value is 1200 utilons.
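The same arithmetic in code, with the utilon-per-dollar rates above (which are of course hypothetical):

```python
# The worked example above, in code. Utilon-per-dollar rates are
# hypothetical.
BUDGET = 100
charities = {name: 10 for name in "ABCDEFGHIJ"}   # A..J: 10 utilons per dollar

single = BUDGET * charities["A"]                       # all $100 to one charity
split = sum(10 * rate for rate in charities.values())  # $10 to each of the ten
assert single == split == 1000   # identical expected value either way

# Charity K arrives at 12 utilons per dollar: going all-in on K dominates.
charities["K"] = 12
assert BUDGET * charities["K"] == 1200
```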
But if I don’t—if I’m inclined to treat data on overhead and estimates of utility as very noisy sources of data, out of skepticism or experience—is it irrational to prefer several baskets?
Very much so. Rational behavior is to maximize expected utility. When rational agents are risk-averse, they are risk-averse with respect to something that suffers from diminishing returns in utility, so that the possibility of negative surprises outweighs the possibility of positive surprises. “Time spent reading material from good sources” is a plausible example of something that has diminishing returns in utility so you want to spread it among baskets. Utility itself does not suffer from diminishing returns in utility. (Support to a charity might, but only if it’s large relative to the charity. Or large relative to the things the charity might be doing to solve the problem it’s trying to solve, I guess.)
In the case of reading, I can see the benefit of not putting all of your eggs in one basket. All of us have biases, however hard we try not to, and by reading the same books you are perhaps allowing your biases to be shaped along the same lines as Eliezer’s. With more of your formative reading being outside of this, you increase your chance of being able to challenge those biases.
This is especially true if you want to write in the same area as Eliezer as it increases your ability to contribute in a different way.
The other thing is the Outside View summed up by the proverb “If two people think alike, one of them isn’t thinking.” In the majority of cases I observe … that person is a sheeple and that leader has a cult.
Do you have a mechanistic unpacking (even a guess would be helpful) of what it is to be a “sheeple” or a “cult”, and of what harms come from being a “sheeple”? Given Aumann, I’m more inclined to say that if two people have different beliefs, at least one of them isn’t thinking.
That said, your point about respecting outside views is reasonable. Are you trying to avoid replacing the outside-presumed “badness” of cults/sheeple with understood mechanisms, so as to retain any usefulness that might be in the received heuristics and that you might not understand the mechanisms behind?
It would be great to add a link to the article on charitable giving you refer to, to see whether it already reaches or dismisses my idea on the issue. From observations of those around me, I tend to see the reason behind charitable giving as something other than maximizing the utility of the charitable gift. I postulate that people give to many different charities as a social signal. The contributor is signaling to those receiving the gift that they sympathize with the cause, and signaling to those around them that they are a caring and compassionate person. The quantity of the gift has an almost negligible effect on this signaling. So giving more times, to more charities, lets someone signal positive social mores more often and to a larger audience, raising their social status higher than if they gave all their expendable money to one charity a limited number of times.
I read recently an article on charitable giving which mentioned how people split up their money among many different charities to, as they put it, “maximize the effect”, even though someone with this goal should donate everything to the single highest-utility charity. And this seems a bit like the example you cited where, if blue cards came up randomly 75% of the time and red cards came up 25% of the time, people would bet on blue 75% of the time even though the optimal strategy is blue 100%. All this seems to come from concepts like “Don’t put all your eggs in one basket”, which is a good general rule for things like investing but can easily break down.
I find myself having to fight this rule for a lot of things, and one of them is beliefs. If all of my opinions are Eliezer-ish, I feel like I’m “putting all my eggs in one basket”, and I need to “diversify”. You use book recommendations as a reductio, but I remember reading about half the books on your recommended reading list, thinking “Does reading everything off of one guy’s reading list make me a follower?” and then thinking “Eh, as soon as he stops recommending such good books, I’ll stop reading them.”
The other thing is the Outside View summed up by the proverb “If two people think alike, one of them isn’t thinking.” In the majority of cases I observe where a person conforms to all of the beliefs held by a charismatic leader of a cohesive in-group, and keeps praising that leader’s incredible insight, that person is a sheeple and that leader has a cult (see: religion, Objectivism, various political movements). I respect the Outside View enough that I have trouble replacing it with the Inside View that although I agree with Eliezer about nearly everything and am willing to say arbitrarily good things about him, I’m certainly not a cultist because I’m coming to my opinions based on Independent Logic and Reason. I don’t know any way of solving this problem except the hard way.
I tried to start a Hofstadter cult once. The first commandment was “Thou shalt follow the first commandment.” The second commandment was “Thou shalt follow only those even-numbered commandments that do not exhort thee to follow themselves.” I forget the other eight. Needless to say it didn’t catch on.
You just didn’t give it enough time. Remember, it always takes longer than you expect!
See also Robin Hanson’s post on Echo Chamber Confidence.
I think that of all the people who have ever recommended books to me, Eliezer has the most recommendations which I’ve actually followed. In most of my social circles, I’m the “smart one”, but I’m nowhere near as smart as Eliezer (or most other people on LessWrong, it seems). So I do admire EY a lot. I want to be as smart as he is, and so I try reading all the books he has read.
And it kills me, because I also remember his post about novice editors copying the surface behavior of master editors, without integrating the deep insight, and I know that by reading the same science fiction novels EY has read, I’m committing exactly the same sin. But I don’t know what else I can do to try to improve myself.
If I have complete or near-complete trust in the information available to me about the charity’s utility, as well as its short-term sustainability, that seems like the right decision to make.
But if I don’t—if I’m inclined to treat data on overhead and estimates of utility as very noisy sources of data, out of skepticism or experience—is it irrational to prefer several baskets?
Similarly with knowledge and following reading lists, ideologies and the like.
Yes, even with great uncertainty, you should still put all your eggs into your best basket.
Did you mean this as a general rule, or specifically about this topic?
The literal example of eggs seems to indeed work well with multiple baskets, especially if they’re all equally good.
Specifically on this topic.
The expected number of eggs lost is least if you choose the best basket and put all your eggs in it, but because of diminishing returns, you’re better off sacrificing a few eggs to reduce the variance. However, your charitable donations are such a drop in the ocean that the utility curve is locally pretty much flat, so you just optimise for maximum expected gain.
This follows from the expected utility of the sum being the sum of the expected utility?
It follows from the assumption that you’re not Bill Gates, don’t have enough money to actually shift the marginal expected utilities of the charitable investment, and that charities themselves do not operate in an efficient market for expected utilons, so that the two top charities do not already have marginal expected utilities in perfect balance.
And that you care only about the benefits you confer, not the log of the benefits, or your ability to visualize someone benefited by your action, etc.
I don’t see how either of these affect this result—unless you’re saying it’s easier to visualise one person with clean water and another with a malaria net than it is two people with clean water?
The sum of the affect raised is greater.
I don’t understand I’m afraid, can you unpack that a bit please? Thanks.
Consider scope insensitivity. The amount of “warm fuzzies” one gets from helping X individuals with a given problem does not scale even remotely linearly with X. Different actions to help with distinct problems, however, sum in a fashion much closer to linear (at least up to some point).
Ergo, “one person with clean water and another with a malaria net” feels intuitively like you’re doing more than “two people with clean water”.
Well, not when you compare them against each other, but only when each is considered on its own: it’s like this phenomenon.
I think it means: the sum of the feel-good points of giving one person clean water and another a malaria net will, for most people, be higher than the feel-good points of giving two people clean water.
I’d like to get right whatever it is I’m doing wrong here, so if anyone would like to comment on any problems they see with this or the parent comment (which are both scored 0) I’d be grateful for your input.
EDIT: since this was voted down, but I didn’t receive an explanation, I’m assuming it’s just an attack, and so I don’t need to modify what I do—thanks!
I suspect that the ability to visualize someone benefited by your action is often a proxy for being certain that your action actually helped someone, and that people often place additional value on that certainty. They might not be acting as perfectly rational economic agents in such cases, but I’m not sure I’d call such behavior irrational.
It doesn’t matter what we call a behavior. If it can be improved, it should be.
Not when baskets are sapient and trying to exploit you. Utilitarians seriously need more social strategic thinking under uncertainty and input subversion.
Robin is right, you are wrong. Robin is an economist explaining a trivial application of his field.
Robin is wrong (or actually, correct about inanimate baskets but not about agent baskets) and you are simply wrong.
When there is a possibility that your decision method is flawed in such a way that it can be exploited (at some expense), you have to diversify or introduce randomness to minimize the payoff for developing an exploit against your decision method, thus reducing the exploitation. Basic game theory. Commonly applied in e.g. software security.
No, you are still failing to comprehend this point (which applies here too).
Robin is applying said game theory correctly. You are not. More precisely, Robin applied the game theory correctly 3 years ago.
I comprehend that point. I also comprehend other issues:
Evaluation of the top charity is incredibly inaccurate (low probability of correctness), and taking that into account, the difference in expected payoff between the good charities should be quite small.
Meanwhile, if there exists a population sharing a flaw in the charity evaluation method (the flaw that you have), the payoff for finding a method of exploiting this particular flaw is inversely proportional to how much they diversify.
Geez, a shouting match. Once again: you’re wrong, and from what I know, you may well be on something that you think boosts your sanity, but it really doesn’t.
Stay classy, Dmytry!
Oh, that explains a lot. While the two accounts had displayed similar behavioral red-flags and been relegated to the same reference class I hadn’t made the connection.
Thanks Gwern.
Well, I thought that giving this feedback could help. I’m about as liberal as it gets when it comes to drug use, but it must be recognized that there are considerable side effects to what he may be taking. You are studying the effects, right? You should take into account that I called you and him (out of all the people) pathological before ever knowing that any of you did this experimentation; this ought to serve as some form of evidence of side effects that are visible from outside.
And none of them so far bear on game theoretic minimaxing vs expected value maximizing.
You insult everyone here. Don’t go claiming this represents special insight on your part, even if one were to grant the other claims!
If you’re so confident you’re right, prove it rigorously (with, like, math). Otherwise, I’ll side with the domain expert over the guy claiming his interlocutor is on drugs any day of the week.
Posted on this before:
http://lesswrong.com/lw/aid/heuristics_and_biases_in_charity/5y64
The exploit-payoff calculation is incredibly trivial: if everyone with a given flaw diversifies among 5 charities, then the payoff for finding and utilizing an exploit is 1/5 of the payoff when everyone gives to their ‘top’ one. Of course there are some things that can go wrong with this; for instance, it may be easier to exploit one’s way merely into the top 5 than to the very top, which is why it is hard to do applied mathematics on this kind of topic, not a lot of data.
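To make that payoff arithmetic concrete, here is a sketch with invented numbers (N, M, and the even 5-way split are assumptions for illustration, not data):

```python
# N donors sharing the same evaluation flaw, each donating M dollars (assumed).
N, M = 10_000, 100
total = N * M  # total money steered by the flawed evaluation method

# Strategy 1: each donor gives everything to their single "top" charity.
# A charity engineered to exploit the shared flaw captures the whole pool.
payoff_concentrated = total

# Strategy 2: each donor splits evenly among their top 5 charities.
# The exploiting charity occupies one slot and captures a fifth.
payoff_diversified = total / 5

print(payoff_concentrated)  # 1000000
print(payoff_diversified)   # 200000.0
```

Under these toy assumptions, concentrating all donations quintuples the reward for building an exploit.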
What I believe would happen if people adopted the ‘choose the top charity, donate everything to it’ strategy: since people are pretty bad at determining top charities, rely on various proxies of performance, and have systematic errors in their evaluations, most people would end up donating to some sort of superstimulus of caring, with which no one with truly the best intentions could compete (or could compete only by expending a lot of effort on imitating the superstimulus).
I once made a turret in a game that would shoot precisely where it expected you to be. Unfortunately, you can easily outsmart this turret’s model of where you could be. Adding random noise to the bullet velocity dramatically increases the lethality of the turret, even though under the turret’s model of your behaviour it is no longer shooting at the point with the highest expected damage. It is very common to add noise or a fuzzy spread to eliminate the undesirable effects of a predictable systematic error. I believe that one should diversify among several of the subjectively ‘best’ charities, within a range from the best comparable to the size of the systematic error in the process of determining the best charity.
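The turret intuition can be reproduced in a few lines. This is a toy model, not the actual game: the ten positions, the ±2 noise spread, and the perfectly dodging target are all assumptions chosen for illustration.

```python
import random

random.seed(0)

POSITIONS = list(range(10))  # toy 1-D world of possible target positions

def simulate(noisy, trials=20_000):
    """Hit rate of a predictive turret against a target that always
    dodges the turret's (publicly known) predicted aim point."""
    hits = 0
    for _ in range(trials):
        predicted = random.choice(POSITIONS)  # the model's best guess
        # The target knows the deterministic model, so it never stands there.
        target = random.choice([p for p in POSITIONS if p != predicted])
        if noisy:
            shot = predicted + random.randint(-2, 2)  # fuzzy spread
        else:
            shot = predicted  # shoot exactly at the maximum of the model
        hits += shot == target
    return hits / trials

print(simulate(noisy=False))  # 0.0: the dodging target is never hit
print(simulate(noisy=True))   # positive (around 0.07 in this toy model)
```

Against an adversary that exploits your deterministic decision rule, the noise-free “optimal” shot is the worst possible one, while the randomized shot regains a positive hit rate.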
From this list, the assumption whose violation your argument relies on is that you do not have enough money to shift the marginal expected utilities, when “you” are taken to mean all the donors who choose in a sufficiently similar way. I would agree that, given the right assumptions about the initial marginal expected utilities and about how more money would change the marginal utilities and marginal expected utilities, the objection that this assumption might sometimes be violated doesn’t look entirely frivolous against a naively construed strategy of “give everything to your top charity”.
(BTW, it’s not clear to me why mistrust in your ability to evaluate the utility of donations to different charities should end up balancing out to produce very close expected utilities. It would seem to have to involve something like Holden’s normal distribution for charity effectiveness, or something else that would make it so that whenever large utilities are involved, the corresponding probabilities will necessarily be requisitely small.)
(edit: quickly fixed some errors)
It’s not about the marginal expected utilities of the charities as much as it is about the expected utilities for exploitation/manipulation of whatever proxies you, and those like you, have used to produce the number you insist on calling ‘expected utility’.
Let’s first get the gun turret example sorted out, shall we? The gun is trying to hit some manoeuvrable spacecraft at considerable distance; it is shooting predictively. If you compute an expected-damage function over the angles of the turret and shoot at the maximum of that function, your expected-damage function will suddenly acquire a dip at that point, because the target will learn to evade being hit there. Do you fully understand the logic behind randomizing the shots, behind not shooting at the maximum of whatever function you approximate the expected utility with? The optimum targeting strategy is to shoot into the space region of the possible target positions with some sort of pattern. The best pattern may be some random distribution, or it may be some criss-cross pattern, or the like.
Note also that it has nothing to do with saturation; it works the same if there’s no ‘ship destroyed’ limit and you are trying to get the target maximally wet with a water hose.
The same situation arises in general when you cannot calculate expected utility properly. I have no objection to paying the charity with the highest expected utility. But you do not know the highest expected utility; you are practically unable to estimate it. The charity that looks best to you is not the one with the highest expected utility. What you think is expected utility relates to expected utility about as much as your guess at how strong a beam a bridge requires relates to the actual requirements set by the building code. Go read up on equilibrium strategies and such.
This seems sort of important.
Sure, if I have two algorithms A1 and A2, and A1 spits out a single charity, and A2 spits out an unsorted list of 5 charities, and A1 is easy for people to exploit but A2 is much more difficult for people to exploit, it’s entirely plausible that I’ll do better using A2, even if that means spreading my resources among five charities.
OTOH, if A2 is just as easy for people to exploit as A1, it’s not clear that this gets me any benefit at all.
And if A2 is easier to exploit, it leaves me actively worse off.
Granted, if, as in your turret example, A2 is simply (A1 plus some random noise), A2 cannot be easier to game than A1. And, sure, if (as in your turret example) all I care about is that I’ve hit the best charity with some of my money, random diversification of the sort you recommend works well.
I suspect that some people donating to charities have different goals.
As expected, you ignored the assumption that “charities themselves do not operate in an efficient market for expected utilons, so that the two top charities do not already have marginal expected utilities in perfect balance.”
No, I am not. I am expecting that the mechanism you may use to determine expected utilities has a low probability of validity (a low external probability of the argument, if you wish), and thus you should end up assigning very close expected utilities to the top charities, simply due to discounting for your method’s imprecision. It has nothing to do with some true frequentist expected utilities that the charities have.
You’re essentially assuming that the variance of whatever prior you place on the utilities is very large in comparison to the differences between the expected utilities, which directly contradicts the assumption. Solve a different problem, get a different answer—how is that a surprise?
Well at least you didn’t accuse me of rationalizing, being high on drugs, having a love affair with Hanson, etc...
What assumption? I am considering the real-world donation case: people are pretty bad at choosing top charities, meaning there is a very poor correlation between a person’s idea of the top charity and actual charity quality.
Well, I am not aware of a post by you where you say that you take drugs to improve sanity and describe the side effects of the drugs in some detail reminiscent of the very behaviour you display. If you were to make such a post, and I were to read it, and I saw you exhibiting something matching the side effects you described, I would probably mention it.
To clarify a few points that may have been lost behind abstractions:
Suppose there is a sub-population of donors: people who do not understand physics very well, and do not understand how one could claim that a device won’t work without a thorough analysis of its blueprint. Those people may be inclined to donate to a research charity working on magnetic free-energy devices, if such a charity exists; a high-payoff, low-probability scenario.
Suppose you have N such people willing to donate, on average, $M to cause or causes.
Two strategies are considered: donating to 1 subjectively best charity, or 5 subjectively top charities.
Under the strategy of donating to the 1 ‘best’ charity, the payoff for creating a magnetic perpetual-motion-device charity is 5 times larger than under the strategy of dividing among the top 5. There is five times the reward for exploiting this particular insecurity in the choice process; for sufficiently large M and N, single-charity donating will cross the threshold at which such a charity becomes economically viable, and some semi-cranks/semi-frauds will jump on it.
But what about the people donating to normal charities, like water and mosquito nets and the like? The differences between the top normal charities boil down to fairly inaccurate value judgements about which most people do not feel particularly certain.
Ultimately, the issue is that the correlation of your selection of charity with the charity’s actual efficacy is affected by your choice. It is similar to the gun turret example.
There are two types of uncertainty here: probabilistic uncertainty, from which expected utility can be straightforwardly evaluated, and systematic bias, which is unknown to the agent but may be known to other agents (e.g. inferred from observations).
Doesn’t follow. If you have a bunch of charities with the same expected payoff, donating to any one of them has the same expected value as splitting your donation among all of them. If one charity has an even slightly higher expected payoff, you should donate all of your money to that one, since the expected value will be higher.
E.g.: say that Charity A, Charity B, …, Charity J can each create 10 utilons per dollar. Ergo, if you have $100, donating $100 to any one of the ten charities will have an expected value of 1000 utilons. Donating $10 to each charity will also have an expected value of 1000 utilons. Now suppose Charity K comes onto the scene, with an expected payoff of 12 utilons per dollar. Donating your $100 to Charity K is the optimal choice, as the expected value is 1200 utilons.
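The arithmetic can be checked directly (the charity names and utilon payoffs are the hypothetical ones from the example above):

```python
def expected_utilons(allocation, payoff_per_dollar):
    """Expected utilons from an allocation {charity: dollars donated}."""
    return sum(dollars * payoff_per_dollar[c] for c, dollars in allocation.items())

payoffs = {c: 10 for c in "ABCDEFGHIJ"}  # charities A..J: 10 utilons/dollar
payoffs["K"] = 12                        # charity K: 12 utilons/dollar

concentrated = {"A": 100}                # all $100 to one equal-payoff charity
split = {c: 10 for c in "ABCDEFGHIJ"}    # $10 to each of the ten
best = {"K": 100}                        # all $100 to the higher-payoff charity

print(expected_utilons(concentrated, payoffs))  # 1000
print(expected_utilons(split, payoffs))         # 1000
print(expected_utilons(best, payoffs))          # 1200
```

When payoffs are equal, any split gives the same expected value; a strictly higher payoff makes concentrating everything in that charity dominant.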
Very much so. Rational behavior is to maximize expected utility. When rational agents are risk-averse, they are risk-averse with respect to something that suffers from diminishing returns in utility, so that the possibility of negative surprises outweighs the possibility of positive surprises. “Time spent reading material from good sources” is a plausible example of something that has diminishing returns in utility, so you want to spread it among baskets. Utility itself does not suffer from diminishing returns in utility. (Support for a charity might, but only if it’s large relative to the charity. Or large relative to the things the charity might be doing to solve the problem it’s trying to solve, I guess.)
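A minimal sketch of that distinction, using a square-root utility function as an assumed example of diminishing returns (the dollar amounts are arbitrary):

```python
import math

def eu(lottery, u):
    """Expected utility of a lottery [(probability, outcome), ...] under u."""
    return sum(p * u(x) for p, x in lottery)

gamble = [(0.5, 0.0), (0.5, 100.0)]  # risky: 50/50 chance of $0 or $100
sure   = [(1.0, 50.0)]               # safe: $50 for certain

# Concave utility (diminishing returns): the safe option wins,
# i.e. the agent is rationally risk-averse in dollars.
print(eu(gamble, math.sqrt), eu(sure, math.sqrt))      # 5.0 vs ~7.07

# Utility itself is linear in utility: the agent is exactly indifferent.
print(eu(gamble, lambda x: x), eu(sure, lambda x: x))  # 50.0 vs 50.0
```

Risk aversion falls out of the curvature of the utility function over the resource, not out of expected-utility maximization itself, which is the point being made about not diversifying utilons.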
In the case of reading, I can see the benefit of not putting all of your eggs in one basket. All of us have biases, however hard we try not to, and by reading the same books you may be allowing your biases to be shaped along the same lines as Eliezer’s. By having more of your formative reading come from outside this list, you increase your chance of being able to challenge those biases.
This is especially true if you want to write in the same area as Eliezer as it increases your ability to contribute in a different way.
Do you have a mechanistic unpacking (even a guess would be helpful) of what it is to be a “sheeple” or a “cult”, and of what harms come from being a “sheeple”? Given Aumann, I’m more inclined to say that if two people have different beliefs, at least one of them isn’t thinking.
That said, your point about respecting outside views is reasonable. Are you trying to avoid replacing the outside-presumed “badness” of cults/sheeple with understood mechanisms, so as to retain any usefulness that might be in the received heuristics and that you might not understand the mechanisms behind?
By sheeple and cult, I mean people whose good judgment is clouded by the mechanisms described in the Affective Death Spiral sequence.
It would be great to add a link to the article on charitable giving you refer to, to see whether it already reaches or dismisses my idea on the issue. From observations of those around me, I tend to see the reason behind charitable giving as something other than maximizing the utility of the charitable gift. I postulate that people give to many different charities as a social signal. The contributor is signaling to those receiving the gift that they sympathize with the cause, and signaling to those around them that they are a caring and compassionate person. The size of the gift has an almost negligible effect on this signaling. So the more times someone gives, and the more charities they give to, the more often and to a larger audience they can signal positive social mores, raising their social status more than if they gave all their expendable money to one charity a limited number of times.