It follows from the assumptions that you're not Bill Gates, that you don't have enough money to actually shift the marginal expected utilities of the charitable investments, and that charities themselves do not operate in an efficient market for expected utilons, so that the two top charities do not already have their marginal expected utilities in perfect balance.
The assumption whose violation your argument relies on is that you do not have enough money to shift the marginal expected utilities, where "you" are taken to control the choices of all the donors who choose in a sufficiently similar way. I would agree that, given the right assumptions about the initial marginal expected utilities and about how additional money would change the marginal utilities and marginal expected utilities, the possibility that this assumption is sometimes violated is not an entirely frivolous objection to a naively construed strategy of "give everything to your top charity".
(BTW, it's not clear to me why mistrust in your ability to evaluate the utility of donations to different charities should end up balancing out to produce very close expected utilities. It would seem to require something like Holden's normal distribution for charity effectiveness, or something else that ensures that whenever large utilities are involved, the corresponding probabilities are requisitely small.)
It's not about the marginal expected utilities of the charities so much as the expected utility of exploiting or manipulating whatever proxies you, and those like you, have used to produce the number you insist on calling 'expected utility'.
Let's first get the gun turret example sorted out, shall we? The gun is trying to hit a manoeuvrable spacecraft at considerable distance; it is shooting predictively. If you compute an expected-damage function over the angles of the turret and always shoot at its maximum, your expected-damage function will promptly acquire a dip at that point, because the target will learn to evade being hit. Do you fully understand the logic behind randomizing the shots there? Behind not shooting at the maximum of whatever function you use to approximate the expected utility? The optimal targeting strategy is to shoot into the region of possible target positions with some sort of pattern. The best pattern may be a random distribution, or a criss-cross pattern, or the like.
Note also that this has nothing to do with saturation; it works the same if there is no 'ship destroyed' limit and you are simply trying to get the target maximally wet with a water hose.
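The turret point can be checked with a toy simulation. This is a minimal sketch under illustrative assumptions (five target cells, an evader that simply avoids wherever you aimed last round, made-up round counts); it is not meant to model the original example precisely, only to show why always firing at the argmax of your estimated function loses to a randomized pattern once the target adapts:

```python
import random

N = 5          # possible target cells
ROUNDS = 10_000
random.seed(0)

def evader(last_shot):
    # The adaptive target: picks a cell uniformly, avoiding wherever
    # we aimed on the previous round (a crude stand-in for "learns to
    # evade being hit").
    choices = [c for c in range(N) if c != last_shot]
    return random.choice(choices)

def play(strategy):
    # Run many rounds and return the fraction of hits.
    hits, last_shot = 0, None
    for _ in range(ROUNDS):
        pos = evader(last_shot)
        shot = strategy()
        hits += (shot == pos)
        last_shot = shot
    return hits / ROUNDS

# "Shoot at the maximum of the expected-damage function":
# always aim at the single best-looking cell.
deterministic = play(lambda: 0)

# Mixed strategy: randomize over the whole region of possible positions.
mixed = play(lambda: random.randrange(N))

print(deterministic, mixed)
```

Against this evader the deterministic strategy's hit rate collapses toward zero (the target is simply never where you always aim), while the randomized pattern keeps hitting at roughly the 1/N base rate, which is the usual mixed-strategy-equilibrium flavor of result.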
The same situation arises in general when you cannot calculate expected utility properly. I have no objection to the claim that you should give to the charity with the highest expected utility. But you do not know the highest expected utility; you are practically unable to estimate it. The charity that looks best to you is not the one with the highest expected utility. What you think is expected utility relates to actual expected utility about as well as your guess at how strong a beam a bridge requires relates to the actual requirements set by the building code. Go read up on equilibrium strategies and the like.
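One concrete way to see the gap between "what you think is expected utility" and the real thing is the selection effect that noisy estimates create: if you always pick the option whose estimate is highest, the winning estimate systematically overstates its true value (sometimes called the optimizer's curse; the term and all the numbers below are my illustrative assumptions, not from the discussion above):

```python
import random
import statistics

random.seed(1)
TRIALS = 20_000
N_CHARITIES = 10
NOISE = 1.0  # std dev of your estimation error

bias = []
for _ in range(TRIALS):
    # True values of the charities (unknown to you), plus your noisy
    # estimates of them.
    true_values = [random.gauss(0, 1) for _ in range(N_CHARITIES)]
    estimates = [v + random.gauss(0, NOISE) for v in true_values]

    # Pick the charity whose *estimate* is highest, then record how far
    # the estimate exceeds its true value.
    best = max(range(N_CHARITIES), key=lambda i: estimates[i])
    bias.append(estimates[best] - true_values[best])

# The average bias is strongly positive: the option you selected for
# looking best is, on average, worse than it looked.
print(round(statistics.mean(bias), 2))
```

The estimation errors are zero-mean for any single charity, yet conditioning on "this one had the highest estimate" makes the error positive on average, and the effect grows with the noise and with the number of options compared.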