Saving the world sucks

I don’t want to save the world. I don’t want to tile the universe with hedonium. I don’t want to be cuckolded by someone else’s pretty network-TV values. I don’t want to do anything I don’t want to do, and I think that’s what (bad) EAs, Mother Teresa, and proselytizing Christians all get wrong. Doing things because they sound nice and pretty and someone else says they’re morally good suuucks. Who decided that warm fuzzies, QALYs, or shrimp lives saved are even good axes to optimize? Because surely not everyone arrives at that conclusion independently. Optimizing such universally acceptable, bland metrics makes me feel like one of those blobby, soulless corporate automata in bad tech advertisements.

I don’t see why people obsess over the idea of universal ethics and doing the prosocial thing. There’s no such thing as the Universal Best Thing, and professing the high virtue of maximizing happiness smacks of an over-RLHFed chatbot. Altruism might be a “virtue”, in the sense that most people’s evolutionary and social environments cause them to value it, but it doesn’t have to be. The cosmos doesn’t care what values you have. Which totally frees you from the weight of “moral imperatives” and social pressures to do the right thing.

There comes a time in most conscientious, top-of-distribution kids’ lives when they decide to Save the World. This is very bad. Unless they really do get a deep, intrinsic satisfaction from maximizing expected global happiness, they’ll be in for a world of pain later on. After years of spinning their wheels, not getting anywhere, they’ll realize that they hate the whole principle they’ve built their life around. That, deep down, their truest passion doesn’t (and doesn’t have to) involve the number of people suffering from malaria, the quantity of sentient shrimp being factory farmed, or how many trillion people 1000 years from now could be happy in a way they aren’t. I claim that scope insensitivity isn’t a bug. That there are no bugs when it comes to values. That you should care about exactly what you want to care about. That if you want to team up and save the world from AI or poverty or mortality, you can, but you don’t have to. You have the freedom to care about whatever you want, and you shouldn’t feel social guilt for not liking the same values everyone else does. Their values are just as meaningful (or meaningless) as yours. Peer pressure is an evolved strategy to elicit collaboration in goofy mesa-optimizers like humans, not an indication of some true higher virtue.

Life is complex, and I really doubt that what you should care about can be boiled down to something as simple as quality-adjusted life-years. I doubt it can be boiled down at all. You should care about whatever you care about, and that probably won’t fit any neat moral template an online forum hands you. It’ll probably be complex, confused, and logically inconsistent, and I don’t think that’s a bad thing.

Why do I care about this so much? Because I got stuck in exactly this trap at the ripe old age of 12, and it fucked me up good. I decided I’d save the world, because a lot of very smart people on a very cool site said that I should. That it would make me feel good and be good. That it mattered. The result? Years of guilt, unproductivity, and apathy. Ending up a moral zombie that didn’t know how to care and couldn’t feel emotion. Wondering why enlightenment felt like hell. If some guy promised to send you to secular heaven if you just let him fuck your wife, you’d tell him to hit the road. But people jump straight into the arms of this moral cuckoldry. Choosing and caring about your values is a very deep part of human nature and identity, and you shouldn’t let someone else do it for you.

This advice probably sounds really obvious. But it wasn’t for me, so I hope it’ll help other people too. Don’t let someone else choose what you care about. Your values probably won’t look exactly like everyone else’s, and they certainly shouldn’t feel like a moral imperative. Choose values that sound exciting, because life’s short, time’s short, and none of it matters in the end anyway. As an optimizing agent in an incredibly nebulous and dark world, the best you can do is what you think is personally good. There are lots of equally valid goals to choose from. Infinitely many, in fact. For me, it’s curiosity and understanding of the universe. It directs my life not because I think it sounds pretty or prosocial, but because it’s tasty. It feels good to learn more and uncover the truth, and I’m a hell of a lot happier and more effective doing that than puttering around pretending to care about the exact count of humans experiencing bliss. There are lots of other values too. You can optimize anything that speaks to you—relationships, cool trains and fast cars, pure hedonistic pleasure, the number of happy people in the world—and you shouldn’t feel bad that it’s not what your culty clique wants from you. This kind of “antisocial” freedom is pretty unfashionable, especially in parts of the alignment/EA community, but I think a lot more people think it than say it explicitly. There’s value in giving explicit permission to confused newcomers to not get trapped in moral chains, because it’s really easy to hurt yourself that way.

Save the world if you want to, but please don’t if you don’t want to.