Changing the world for the worse

When I was 21, I was sucked into a world of ambition.

Starting my adult life in the Bay Area, I was surrounded by the sense that I was supposed to start a startup, change the world.

I never wanted to start a startup. Reading stories of famous founders, and living and working with startup founders myself, it seemed to me that the amount of belief you’d have to have in yourself and your idea bordered on insanity. Raised to value humility, and unable even to speak up for myself, I couldn’t imagine ever reaching that level of self-belief.

The version of changing the world that appealed to me was effective altruism. I didn’t have a grand vision; I just wanted to help people. The arguments for it seemed so simple, so obviously correct, when laid out in books and blog posts.

Right out of college, I joined an EA organization that worked with governments around the world on projects that cost tens of millions of dollars. One day at work, I was getting some beans and rice from the kitchen when I ran into a billionaire. All the money and power in the world were suddenly right there — and we were using them to save lives.

I coasted along for a time in that dream of changing the world for the better. I was young; many of us were. More than one person I knew had influence over millions of dollars before they were 25. The movement was young, too — too new to power to have yet stumbled into many of the pitfalls that come with it. As the movement grew faster and faster, accruing more followers, more money, and more political influence, it began to seem like we could do absolutely anything. It was a heady feeling.

Then, when I was 26, FTX collapsed. Suddenly, we all had to reckon with the effects of global-scale ambition. When it goes right, you can fund every charity and swing the election for Biden. When it goes wrong, you’ve been complicit in a criminal enterprise that shook the economy and fucked over a million people.

(I read Careless People last week, a memoir about how Facebook’s success put world-changing power in the hands of a few individuals, who were able to wield it almost entirely unchecked. When it goes right, you get democratic uprisings. When it goes wrong, you get genocide in Myanmar, and Trump as president.)

Around the same time as the FTX collapse, an AI arms race was beginning between OpenAI and Anthropic — two labs formed by people who’d been inspired by Bostrom’s Superintelligence, as we all were. By the logic of Superintelligence, it was just about the worst thing that could have happened.

People close to me were thrown into turmoil and depression. We’d done so much in the AI space, supporting and growing AI safety in all sorts of important ways — things that probably wouldn’t have happened without us. Now it seemed that all the investment that had gone into AI safety had had the primary effect of massively accelerating AI capabilities.

You try big things, you get big results.

I quit my EA job the month FTX collapsed, and I haven’t done anything in the space since. It wasn’t a big, dramatic, or even really deliberate decision. I was just burned out and disillusioned.

I still care about the world, and I’ve spent years feeling vaguely guilty that I’m no longer even pretending to work on its biggest problems. I thought I quit EA because I wanted to be happy (as an EA, I was constantly coercing myself to work on things that felt off to me, and was therefore constantly miserable). This felt like selfishness, or laziness. I struggled to justify myself in any other terms.

I don’t feel guilty anymore. I was talking about all this to a friend recently, and he said, “It seems plausible that the best thing to do if you really take AI x-risk seriously is to just stop working on AI at all.”

And that’s what I’ve been trying to say this whole time, whenever anyone asks me about my career. That I don’t want to try to have a big impact, if I can’t be certain that that impact will be positive rather than negative for the world — and I can’t be certain. To be certain of that would be hubris. Both in the memoirs I’ve read and in my real life, I’ve seen people who have genuinely wanted to change things for the better, gotten into the rooms where the sausage gets made, and ended up sickened by the consequences of what they were involved in.

EA funnels millions of dollars around. It funds career development for AI researchers who end up advancing capabilities at frontier labs. It funds insecticide-treated bed nets to protect people from malaria, and then those nets are used for fishing and pollute the waterways. The effect of the latter has been determined to be insignificant. The former, well, I guess it remains to be seen.