Why I Just Took The Giving What We Can Pledge
Crosspost of my blog article.
I just took the Giving What We Can pledge, which is a pledge to give away at least 10% of my income over the course of my life to highly effective charities. I think you should take it too!
We live in a time of unprecedented prosperity and abundance. Americans today live like the kings of the ancient past. We have access to technology that would have been considered magic even just a few hundred years ago. Even relatively poor Americans are richer than nearly everyone who has ever lived.
At the same time, the world is filled with huge and pressing problems. Children whose parents survive on two dollars a day die in their beds, because their parents can’t afford medical care. Animals by the billions are tortured in factory farms. Existential threats imperil the entire future.
For this reason, I pledged to take a small haircut (at least 10% of my income) to address the world’s most pressing problems. For a while I’ve been giving more than 10% of my earnings to effective charities (somewhere around 20%). But now, for the first time, I took the official pledge.
It’s staggering just how much good one person can do. You can help many animals per dollar by donating to effective animal charities. For a few thousand dollars you can save a person’s life (and if you buy my weird insect arguments, each dollar given to GiveWell can save thousands of insects from a lifetime of suffering). And money donated to Longtermist charities can potentially, in expectation, bring about many lifetimes of bliss per dollar.
There’s a lot about ethics that isn’t obvious. It’s hard to know whether utilitarianism is the right ethical theory, whether desert is real, and which theory of well-being is right. But other ethical issues aren’t difficult. Whether people should give some of their money to prevent immense needless death and suffering is among the least tricky ethical issues. If you give away, say, 10% of your earnings, and earn 50,000 dollars a year, you can save a whole, entire human life every year.
(You can prevent someone like the adorable child displayed in the photo above from dying every year by giving away 10% of your income!)
That's insane! It's incredible that we have such profound ability to do good. We've all lost loved ones, and we all know how tragic it is for a life to be lost. You can prevent that every single year if you earn the median U.S. income and donate 10%. And for the same amount of money, every year you can spare thousands of animals from a lifetime in a cage or save 75 million shrimp from a painful death! By giving away 10% of the median American income every year, you can help a number of animals greater than the human population of the U.K.
One of life’s greatest tragedies is the loss of a child. It’s a tragedy so profound, it leaves a lasting impact on all the people who ever knew the child. The world no longer has the child in it, no longer with their laughs or cries, no longer with the ability to grow older, make friends, and fall in love. There is a void where the child used to be.
I signed the Giving What We Can pledge because I recognize that routinely preventing that scale of tragedy is worth spending at least 10% of my income on.
I think you should do the same.
Part of the reason is altruistic. By donating even modest amounts of money, you can prevent horrifying things from happening to lots of others. But even if you’re self-interested, you should still take the pledge. Those who give more tend to be happier and more fulfilled.
The world is gripped by a profound crisis of meaning. Lots of people feel hollow and empty, despite advanced technology and unprecedented wealth. Though their momentary existence is pleasant, it’s not connected to a deeper narrative. There’s nothing deeper they work towards. Making helping others a significant part of your life is an excellent way to make your life meaningful. A self-centered life is nihilistic and meaningless.
You'll live a happier life if you feel like your life is working towards some important goal. If, when you turn 80, you know you've helped hundreds of people and millions of animals, you'll be able to look back on your life with deep pride. You won't, like so many, feel you've wasted your life on social media, at parties, and in pursuit of cheap pleasure. You'll know you spent your life on something greater, something that mattered, something truly important.
You’ll know that there are adults in the world who only survived childhood because of your donation. You’ll know that there are thousands of animals who didn’t have to rot in a cage because of your donation. You’ll know that you worked towards making the world a better place. When your time finally comes, you’ll be able to look back on all the ways you made the world better and smile.
I am not suggesting you give away all your money, bringing yourself down to the poverty line for this. But at the very least, I suggest you give 10%. Helping others should be a non-trivial part of a life well lived. Even if you earn the median U.S. income, after giving away 10% of your income, you’ll still be one of the richest few percent of people who ever lived.
Ultimately, I took the pledge because saving others from death and suffering is worth more than a bit of extra expenditure. Others matter, whether they're far-away people or animals of a different species. They matter enough that I plan, for the rest of my life, to give away some portion of my money to helping them, and I hope you do the same!
Anyone who takes the Giving What We Can pledge and sets up routine donations in response to this article gets an automatic subscription to the blog!
A third comment, because it’s a third unrelated topic:
The quantity of downvotes suggests that you have developed an anti-fan club here. You'd expect to encounter resistance if you are saying true and important things that make people uncomfortable. But having a devoted club of detractors might be a sign that you should rethink your messaging strategy. What if you could say true and important things in a way that doesn't make people so uncomfortable that they refuse to think about them?
LessWrongers love epistemic humility and writing meant to inform rather than persuade.
As I said in my other comment, I think the downvotes are uncalled for and emotionally motivated. But perhaps this is a hint about what tone would reach past those barriers and actually change more minds?
A couple specific thoughts, even though I haven’t thought about this much:
In this piece, the claim that even a selfishly motivated person might benefit by taking the pledge is very interesting. More evidence and argument on this point might be useful.
Your presentation is already oriented around the opportunity (positive-valenced) rather than the obligation (negative-valenced and therefore unpleasant to think about) of doing good. The descriptions of tragedy and suffering, though, are definitely negative-valenced.
Statements like "I think you should too" may offer the reader an excuse to take offense. "Should" is a very vague term and usually connected to an unstated value judgment. Here you may mean "if you did this, it would improve the situation by your own current value judgment", which is the least suspicious use of "should".
In my experience, changing minds requires gently presenting arguments in a way that does not provide too much impetus or excuse for the listener to avoid thinking about them, walking away, and hoping that the person convinces themselves. This post on how to actually change minds definitely rang true to me: https://www.lesswrong.com/posts/D2GrrrrfipHWPJSHh/book-review-how-minds-change
Anyway those are just some random thoughts; do with them what you will.
To the substance: If you're that serious about utilitarianism / doing good, why aren't you focused on AI alignment? I personally find compelling the argument that we're sitting at the hinge of history and can influence the entire future of the lightcone, creating an amount of joy that truly boggles the mind if we succeed. This logic would appear to make work toward creating aligned and beneficial AGI dwarf all other charitable works, even if the probabilities and timelines are off by several orders of magnitude. The real possibility that this is happening soon and currently hangs on a knife's edge isn't even necessary to consider.
Perhaps you feel unequipped to help with that project? It would still make sense to devote your efforts to getting equipped.
Yeah, I'm not really equipped to do AI alignment, and I have a lower P(doom) than others, but I agree it's important, and it's one of the places I donate.
Who goes around downvoting this stuff without saying what they object to? It’s a free world and you can downvote what you want, but to me it looks suspiciously like shouting SHUT UP SHUT UP SHUT UP when someone is making arguments that are reasonable, but you find uncomfortable to hear.
Where’s the counterargument or community norm violation? This looks well-written, rational, and important-if-true, prime stuff for this community. It’s not a new argument, but that doesn’t usually lead to such an enthusiastic downvoting.
My general experience is that anything touted as important-if-true is neither. I find this to be as reliable a rule as Betteridge’s Law of Headlines (if the headline is a question the answer is “no”). Indeed, the two rules are closely related: the headlines “Scientists may have found…” and “Have scientists found…?” are interchangeable.
I agree. I pass up pieces with those titles. But I don't think it applies to this piece. The author didn't pitch it that way, I did. And in fact I think it is true and important, and that there is a lot of emotional hang-up and resistance to accepting that we are all falling far, far short of the level of ethics we'd like to imagine we have.

I have not taken the Giving What We Can pledge, so I don't exactly agree with the author's conclusions, and I don't think the logic is nearly tight enough. But I think the question of whether we should all take that pledge or similar steps is very much an open one; the claim that we'd be happier if we did seems quite plausible to me and very much worthy of debate on LessWrong. I'm disturbed to see it get downvotes instead of debate; I think if a less sensitive but equally important topic were written about this poorly, it would be treated much more kindly. The claim that this has been debated to death already, so we need not engage with repetitive and bad versions, seems simply false to me. I have lived on LessWrong for the last three years and seen no serious debate of this topic in that time, nor a hint of an accepted consensus.
I happen to think that solving alignment trumps all other ethical concerns, so I’m not going to be the one to do a better treatment here. But I am disappointed in the community for being so hostile to this person’s attempts.
See this discussion for why downvotes-without-explanations are common these days.
I have previously critiqued Bentham's Bulldog's writing on here. This also seems relatively low-quality, along similar lines as previous critiques. You should not expect an explanation for each individual instance (and more broadly, demanding explanations, especially given the reasonable expectation of follow-up and associated stress, is quite costly).
I care about many EA principles, and my reaction is definitely not "SHUT UP SHUT UP SHUT UP" about arguments I am uncomfortable hearing; it's just that this author makes a lot of really low-quality arguments that mostly consist of applause lights. There are many great authors writing about similar ideas whom I do respect a lot.
Can you give an example? I addressed your previous (in my view, quite unpersuasive) objections at some length https://benthams.substack.com/p/you-cant-tell-how-conscious-animals
Sure
(Edit: To clear up confusion, I responded before Bentham’s Bulldog edited their comment, which originally just said “Can you give an example?”)
My post is a response to the arguments you make in that comment! I find it somewhat absurd that my thinking is deemed disreputably poor when I go by the results of the most detailed report on animal consciousness done to date, merely because those results don't accord with your unreflective intuitions (which are largely influenced by a host of obviously irrelevant factors, like size!). It wouldn't seem so unintuitive that giant prehistoric arthropods rolling around in pain were conscious!
I will not repeat the critique I have already made over there here again. You asked for an example, which I gave you.
I also don't appreciate you calling my intuitions "unreflective", or asserting invariants about those intuitions that I have not expressed (like that "size" inherently matters, which, I mean, it clearly does due to at least Landauer limits, but which I agree is not particularly strong evidence, and which I have not taken any previous stance on).
(I am not intending to respond further)
Your example of me being obviously wrong is that you have an intuition that the numbers I rely on, from the most detailed report to date, are wrong.
Size likely correlates slightly with mental complexity, but not to the extent it affects our intuitions. The mental gulf between bees and fish is pretty small, while the gulf between bees and fish in terms of our intuitions about consciousness is very large. I was making a general claim about people's sentience intuitions, not yours.
Probably "unreflective" was the wrong word; "direct" would have been better. What I meant was that you weren't relying on any single model, or average of models, or anything of the sort. Instead, you were just directly relying on how conscious animals seemed to you, which, for the reasons I gave, strikes me as an absolutely terrible method.
(I also find it a bit rich that you are acting like my comment is somehow beyond the pale, when I’m responding to a comment of yours that basically amounts to saying my arguments are consistently so idiotic my posts should be downvoted even when they don’t say anything crazy).
To think that insects' expected sentience is very low, you have to be very confident that their sentience is low. Such great confidence would require some very compelling argument for why even dramatic behavior isn't indicative of much sentience. Suffice it to say, I don't see an argument like that, and I think there are plenty of reasons to think it's reasonably likely insects feel intense pain.
I will again reiterate that a report being “detailed” really doesn’t mean approximately anything. You are using it as some kind of weird argument from authority. The most “detailed” reports on the moral value of animals are probably some random religious texts in the Vatican about whether animals have souls. Do I care? No.
No, I am not doing that. I am using different arguments, which seem more robust to me, and which are grounded in my own best guesses about how morality works. I absolutely do not think that I can intuitively and directly assess, without the need for sophisticated study, the internal experience of any kind of cognitive system. Indeed, I suggested neuron count myself as one such method! Neuron count is vaguely associated with size, and IMO is just a strictly better proxy than size (indeed, my guess is that after accounting for neuron count, the remaining size correlation becomes negative, because a bigger body needs more brain just to control it, which reduces the compute available for more morally relevant computation).
No, that is not how epistemology works. The prior on every hypothesis this specific is extremely low. Strong evidence is common. You do not need some “very compelling argument”, as there is no obvious prior to use for this statement space. I have no trouble coming up with 10^50 statements that I assign as low probability to as 7 bees being more morally relevant than humans.
There are just a lot of ways to cut up morality, and just because you raised one specific hypothesis to consideration that might have really big implications doesn’t mean someone now needs to provide overwhelming evidence. Bits add up really quickly. It’s just really not that hard to build confidence that can overcome a factor of a billion (and indeed it is very rare for a hypothesis to be anywhere else but at approximately 0% or approximately 100%).
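As a toy illustration of how quickly bits add up (the numbers here are entirely made up): a prior of one in a billion is about 30 bits, since 2^30 ≈ 10^9. If each of several independent observations favors a hypothesis at 8:1, that's 3 bits per observation, so roughly ten such observations already bring you to about even odds: 10^-9 × 8^10 ≈ 1.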
I thought you weren’t planning on responding!
If you're going to rely on neuron counts, you should engage with the arguments RP gives against neuron counts, which are, to my mind, very decisive. It's particularly unreasonable to rely on neuron counts in a domain like this, where there's lots of uncertainty. If a model tells you A matters less than B by a factor of 100,000 or something, most of the expected value of A relative to B is in possible worlds where the model is wrong. https://www.lesswrong.com/posts/GrtbTAPfkJa4D6jjH/confidence-levels-inside-and-outside-an-argument
To use neuron counts as a proxy for the sentience of even simple creatures, you have to be extremely confident (north of 99% confident) that the right proxy assigns only very minimal consciousness to such animals. But it's not clear what justifies this overwhelming confidence.
Analogy: if you have a model that predicts aliens are only one millimeter in size, even if you're pretty sure it's right, you shouldn't use it as a proxy for expected alien size, because the overwhelming majority of expected size is in worlds where the model is wrong.
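To put rough numbers on the expected-value point (these figures are purely illustrative): suppose a neuron-count model says bee experience is 10^-5 as intense as human experience, and you're 99% confident in that model, with the remaining 1% on views where the ratio is more like 10^-1. Then the expected ratio is 0.99 × 10^-5 + 0.01 × 10^-1 ≈ 10^-3, which is dominated by the worlds where the model is wrong.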
Why is the hypothesis that bees are more than insignificantly conscious supposed to be a highly specific one with insignificant prior odds? We know that humans are capable of intense pain. There is some neural architecture that produces intense pain. What gives us immense confidence that this isn't present in animals? Being confident a priori, at a billion to one odds, that insects don't feel intense pain is silly; it's like being confident a priori that insects don't have a visual cortex. It's not like there's some natural parameterization of possible physical states that give rise to consciousness on which only a tiny portion of them entail insect consciousness.
As an aside, I think people take the wrong lesson away from the Mark Xu essay. Specific evidence gives a very high Bayes factor. The reason someone saying their name is Mark Xu gets such a high Bayes factor is that Mark Xu is a very specific name, as all specific names are. But a person merely asserting some proposition isn't good for any comparable Bayes factor. For more, see http://www.wall.org/~aron/blog/just-how-certain-can-we-be/
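To illustrate the contrast with made-up numbers: if about one person in a million is named Mark Xu, then hearing someone say "my name is Mark Xu" carries a likelihood ratio of roughly 10^6 (about 20 bits), since they would almost never produce that exact string if it were false. But someone asserting a contested proposition like "bees feel intense pain" might only be a few times more likely to assert it if it's true than if it's false: a Bayes factor in the single digits, a couple of bits at most.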
Also, as I have said several times, it's not about aggregate considerations of moral worth but about the intensity of valenced experience: how intensely they feel pleasure and pain. I think a human's life matters more than seven bees'. Now, once again, it seems insane to me to start with a prior on the order of one in a billion that bees feel pain at least 1/7 as intensely as people. What licenses such great confidence?
My question, if we are going to continue this, is as follows:
Are you astronomically certain that insects aren’t conscious at all, or just not intensely conscious?
What licenses very high confidence in this? If it’s the alleged specificity of the hypothesis, what is the parameterization on which this takes up a tiny slice of probability space?
Also happy to have you on the podcast!
I refuse to demand astronomical certainty here, because that amounts to Pascal’s Mugging.
I don’t even have astronomical certainty that electrons aren’t conscious. Yet I wouldn’t say “there’s a 10^-20 chance that electrons are conscious, so based on the huge number of electrons that still adds up to a lot of suffering and batteries are murder.”
Right, so you can discount extremely low probabilities. But presumably the odds of insects being conscious (a view held by a large number of experts) aren't low enough to fully discount.
Astronomical certainty is just another way of saying "even if the odds are really, really low". If you don't actually think the odds are really, really low, then the astronomical certainty is irrelevant.