I guess it’s necessary to get a bit technical to explain what I mean by that. I do not mean that the number of maximally-good things is small; that is true, but it would be true in most environments.
What I mean is that the distribution has a crazy variance (possibly no finite variance); take two “opportunities to do good” and compare them to each other, and an orders-of-magnitude difference is not rare.
The water-in-the-desert analogy really falls apart at that point. It’s more like an investor looking for a good startup to invest in: successful startups aren’t that rare, but their quality varies immensely; you’d much, much prefer to invest in “the next Google/Uber/etc.” rather than in [insert some company from 2010 which made a good profit but which you and I have never heard of].
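The “crazy variance” claim is easy to see in a quick simulation. The sketch below is my own illustration, not anything from the thread: it draws “opportunity values” from a Pareto distribution with shape 1.1 (any shape at or below 2 has infinite variance, the “possibly no finite variance” case) and checks how often two random draws differ by an order of magnitude.

```python
import random

# Sketch (my own illustration, not from the thread): model "opportunities
# to do good" as draws from a Pareto distribution with shape alpha = 1.1.
# Any alpha <= 2 gives infinite variance, the "possibly no finite
# variance" case described above.
random.seed(0)
alpha = 1.1
n = 100_000
values = [random.paretovariate(alpha) for _ in range(n)]

# How often do two randomly chosen opportunities differ by at least an
# order of magnitude (a factor of 10)? For two i.i.d. Pareto draws this
# probability works out to 10**(-alpha), roughly 8% here -- "not rare".
pairs = 50_000
big_gaps = 0
for _ in range(pairs):
    a, b = random.choice(values), random.choice(values)
    if max(a, b) / min(a, b) >= 10:
        big_gaps += 1
print(f"pairs differing by >=10x: {big_gaps / pairs:.1%}")

# Heavy tails also mean the best few draws carry an outsized share of
# the total -- the "next Google" effect from the startup analogy.
values.sort(reverse=True)
top_share = sum(values[:100]) / sum(values)
print(f"share of total value in top 100 of {n:,} draws: {top_share:.1%}")
```

Any heavy-tailed choice (a lognormal with large sigma, say) shows the same qualitative picture; the Pareto just makes the infinite-variance case explicit.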
If I’m reading this correctly, then we’re generally seeing a rather flat payoff curve over most “do good” opportunities, and the rare maximum should stick out like a sore thumb to anyone taking a good look. Those really should be things do-gooders jump on quickly. (Note: that doesn’t mean they get done quickly, or that additional assistance isn’t important.)
While not as obvious, it probably also means that a lot of more mundane opportunities are being ignored. That comes from an insight offered in one of my classes years back: why does so much clumping (think fads) exist when the marginal utility of the consumed good is pretty much equal to that of all the other goods that could have been consumed? In other words, when the opportunity cost is zero, why is everyone doing the same thing?
I suspect we could see something like that in the “do good” space. Therefore, taking the path not followed could be a very good thing.
What I mean is that the distribution has a crazy variance (possibly no finite variance); take two “opportunities to do good” and compare them to each other, and an orders-of-magnitude difference is not rare.
Do you mean the differences between the expected utility upfront? Or do you mean the differences between the actual utility in the end (which the actor might have no way to accurately predict in advance)?
Opportunities to do the most good are.
Do you mean the differences between the expected utility upfront? Or do you mean the differences between the actual utility in the end (which the actor might have no way to accurately predict in advance)?
I reject the principled distinction. To me, it’s more of a spectrum.