Donation offsets for ChatGPT Plus subscriptions

I’ve decided to donate $240 each to GovAI and MIRI to offset the $480 I plan to spend on ChatGPT Plus over the next two years ($20/month).

I don’t have a super strong view on ethical offsets, like donating to anti-factory farming groups to try to offset harm from eating meat. That being said, I currently think offsets are somewhat good for a few reasons:

They seem much better than simply contributing to some harm or commons problem and doing nothing, which is often what people would do otherwise.

It seems useful to notice when you’re contributing to some harm or commons problem. I think a lot of harm comes from people failing to notice or keep track of the ways their actions negatively impact others, and the ways common incentives push them toward worse behavior.

A common Effective Altruism argument against offsets is that they don’t make sense from a consequentialist perspective. If you have a budget for doing good, then spend your whole budget on doing as much good as possible. If you want to mitigate harms you are contributing to, you can increase your “doing good” budget, but it doesn’t make sense to specialize your mitigations to the particular area where you are contributing to harm rather than the area you think will be the most cost-effective in general.

I think this is a decently good point, but it doesn’t move me enough to abandon the idea of offsets entirely. A possible counter-argument is that offsets can be a powerful form of coordination to help solve commons problems. By publicly committing to offset a particular harm, you establish a basis for coordination: other people can see you really care about the issue because you’ve sent a costly signal. The rationale is similar to that for being vegan or vegetarian: probably not the most effective choice from a naive consequentialist perspective, but potentially effective as a point of coordination via costly signaling.

After having used ChatGPT (3.5) and Claude for a few months, I’ve come to believe that these tools are super useful for research and many other tasks, as well as useful for understanding AI systems themselves. I’ve also started to use Bing Chat and ChatGPT (4), and found them to be even more impressive as research and learning tools. I think it would be quite bad for the world if conscientious people concerned about AI harms refrained from using these tools, because I think it would disadvantage them in significant ways, including in crucial areas like AI alignment and policy.

Unfortunately both can be true:

1) Language models are really useful and can help people learn, write, and research more effectively
2) The rapid development of huge models is extremely dangerous and a huge contributor to AI existential risk

I think OpenAI, and to varying extents other scaling labs, are engaged in reckless behavior: scaling up and deploying these systems before we understand how they work well enough to be confident in our safety and alignment approaches. And yet I do not recommend that people in the “concerned about AI x-risk” reference class refrain from paying for these tools, even if they decide not to offset these harms. The $20/month to OpenAI for GPT-4 access right now is not a lot of money for a company spending hundreds of millions training new models. But it is something, and I want to recognize that I’m contributing to this rapid scaling and deployment in some way.

Weighing all this together, I’ve decided offsets are the right call for me, and I suspect they might be right for many others, which is why I wanted to share my reasoning here. To be clear, I think concrete actions aimed at quality alignment research or AI policy aimed at buying more time are much more important than offsets. I won’t dock anyone points for not donating to offset harm from paying for AI services at a small scale. But I will notice if other people make similar commitments and take it as a signal that people care about risks from commercial incentives.

I didn’t spend a lot of time deciding which orgs to donate to, but my reasoning is as follows: MIRI has a solid track record of highlighting existential risks from AI, encouraging AI labs to act less recklessly, and raising the bar for alignment work. GovAI (the Centre for the Governance of AI) is working on regulatory approaches that might give us more time to solve key alignment problems. According to staff I’ve talked to, MIRI is not heavily funding-constrained, but they believe they could use more money. I suspect GovAI is in a similar position, but I have not inquired.

Note: I originally wrote this on the EA Forum and wasn’t sure whether to crosspost it. But I realized this audience was the one I most wanted to see it, even though I’d categorized it as kind of an “EA” topic, so I decided to post it here too.