Any Utilitarianism Makes Sense As Policy


There’s been a lot of discussion around utilitarianism as of late in the blogoblob I hang out in. So fine, I’ll bite:

Most issues with any system of ethics arise when you try to make it absolute: the repugnant conclusion, theodicy, solipsism, a preference towards non-existence, or any other result of driving axioms to infinity until you conclude that seagulls should pluck out your eyeballs.


The problem is that all of these issues arise under at least 3 of the following assumptions:

  1. The system of ethics is to be implemented by an omnipotent being

  2. The system of ethics is to be agreed upon by all people and complied with at all times

  3. The actors implementing the system of ethics have perfect knowledge of the future

  4. The system of ethics is not to be revisited, modified, or replaced in the future

i—Negative Utilitarianism

Take as an example negative utilitarianism, which says something like:

Suffering is the only thing we ought to concern ourselves with; our goal is to minimize suffering for all conscious and potentially conscious beings.

If one applies assumptions (3) and (4) together with (1) or (2), this leads to the main counter-argument to negative utilitarianism, which is:

Then, our ultimate goal ought to be eliminating all conscious life from the universe in such a way as to cause no suffering in the process.

This sounds horrible, but that’s fine because we are NOT talking about how to implement morality for an omnipotent, omniscient, and unchanging god. We are talking about how to implement morality for… a governing institution of some sort.

So what would a negative utilitarian government’s constitution sound like, starting from the same definition of the ideology?

We, the legislators, hereby agree that our main goal will be reducing suffering for every conscious being we know of, under all currently agreed-upon definitions: minimizing the infliction of unwanted pain, fulfilling all fundamental needs (hunger, shelter, friendship), and fulfilling whatever needs are later identified as fundamental.

In order to ensure this charter gets enacted, we shall prioritize our own constituents, as they are the ones giving us the power to act, but shall direct a portion of our effort towards those beings we think most underprivileged even if they are not under our governance.

We don’t care about preference fulfillment, so all forms of business and recreation will be unregulated by us insofar as they ensure no runoff suffering is caused by their practice. We will collect as much money from these business endeavors as we can, in return providing the stability and infrastructure of our state, and maintaining a fluctuating equilibrium where these businesses are happy to keep operating and paying taxes, since those funds are our main tools for accomplishing our first-order negative-utilitarian goals.


In other words, a negative utilitarian government is one that leaves citizens alone and collects taxes, intervening only to support those most in need and to sustain the shared goods required for communities to form and thrive. Insofar as it ventures to extend its reach, its prime goal will be high-utility actions such as finding ways to reduce or end factory farming, helping non-citizens who are most in need (e.g. those going through starvation or war), and creating infrastructure that will encourage more businesses to come under it and give it money.

Wait… is this, like, just a very high-functioning implementation of most liberal governments that already exist? Yes.

Like, the kind of implementation most people would want no matter where they align politically? Pretty much. The difference between, say, a US progressive and a libertarian might be that they have slightly different definitions of “suffering”, different levels of trust in government efficiency or corporate ethics, and different conceptions of the tax regime businesses could operate under. But I think either would agree that a charter like the above is, in principle, one they’d endorse.

ii—Applied To Other Systems

Christian morality is really bad: if God were to exist and enforce it, it would be nonsensical and cruel.

Happily enough, to the best of our knowledge, God doesn’t exist and doesn’t enforce it. Instead, it gets implemented by institutions like churches, which, historically speaking, were actually pretty nifty… funding art, preserving science and knowledge, helping the poor, mediating conflicts, and fostering a sense of kinship and community.

Kantian morality is really bad if one tries to follow it directly, but it’s really good if codified in a system of law… say, the common law. Established hundreds of years before Kant and still considered a fairly solid system of law, it has as its core tenet the pursuit of an approximation of Kantian ethics without assuming what a “moral” action actually is: it leaves that up to a random sample of the population (the jury) and strives simply to apply the result as consistently as possible in all cases (i.e. respecting the categorical imperative).

Preference utilitarianism is horrible if you assume you’ve got God-like powers and a perfect understanding of the brain, then decide to tile the universe with neuronal tissue bathed in endorphins. But if applied at, say, the policy level of a large corporation like Amazon… the results are, eh, on the whole pretty OK? (I know it’s popular to hate on Amazon, so feel free to pick any other e-commerce or delivery business that you think gets this better.)

iii—Reconsideration

Also, systems of ethics apply to individuals and institutions, not to… an omnipotent, omniscient god. This means we can revisit them.

Assume that 50 years from now we’ve basically eradicated all infectious disease, war, hunger, rampant pollution, and factory farming.

We realize our negative-utilitarian government is now focusing on dumb and silly questions like “do parasitic wasps cause more suffering to ants than killing all parasitic wasps with bioengineered fungi would cause”, since all the low-hanging fruit has been picked.

At that point the government can just say:

Guys, this is getting kind of silly, and nobody’s intuition is aligned with what we’re doing anymore. Should we switch to a preference utilitarian charter that will guide us towards colonizing space, unlocking the mysteries of reality, and finding new and amazing peak experiences that people can partake in?

Or its citizens can slowly shift their support towards institutions that do that.

In practice this is less efficient and looks more like:

In the last 200 years the Catholic Church has switched to the business of colonization, taking away women’s rights, hiding pedophilia, and funding extremists… so we should probably start donating to the Against Malaria Foundation and voting for secular politicians.

With loads of “bad” generated while the change was happening, and loads of “bad” still being generated as the dying remnants of the institution struggle to maintain power, becoming ever more destructive and zero-sum.

This is unideal, but there are no easy solutions, and it’s no reason to stop trying. Things are imperfect; we need to make them as perfect as they can be while keeping the wheels turning. That’s engineering 101.

iv—Practicality And God

Going back to the unstated assumptions under which people judge systems of ethics:

  1. The system of ethics is to be implemented by an omnipotent being

  2. The system of ethics is to be agreed upon by all people and complied with at all times

  3. The actors implementing the system of ethics have perfect knowledge of the future

  4. The system of ethics is not to be revisited, modified, or replaced in the future

These are all… provably wrong? I mean, it depends on your standard, but roughly so, at least.

(1) Can’t be “proven” wrong, but there’s no proof the other way either, and the existence of omnipotence doesn’t fit with anything else we’ve observed thus far. Even if such an agent could or does exist, there’s no reason to think we could program a system of ethics into it.

(2) Again, we can’t “prove” humans can’t all act in accordance with a system of ethics. But I know of no individual who, of their own accord, is always consistent with their own ethics, even if I narrow my sample to individuals with well-thought-out ethics who are relatively intelligent and powerful. Given that we can’t solve this problem for n=1, it seems very hard to solve for n=8,000,000,000.

(3) The theoretical version of this is something like “to compute the next state of the universe you need a machine that’s part of the universe which predicts its own future state, which would include the prediction itself; this is a paradox”. The applied version sounds something like “all the compute in the world can’t get even close to perfectly modeling a single bacterium”, and the universe has trillions of those, they represent 1/<basically infinity> of all the things to be modeled, and these things interact. (A toy sketch of the paradox follows this list.)

(4) Is just… dumb? How often do you revisit and revise your theories and behavior? How often do we as a society do that? Maybe every couple of hours. So why are we thinking about a system that holds to infinity with no changes?
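To make the theoretical half of (3) concrete, here’s a minimal Python sketch of the self-prediction paradox, framed as the standard halting-problem-style diagonalization (my framing, not a formalism from anywhere in particular; the names `predict` and `contrarian` are made up for illustration): the only fully general way to predict an arbitrary agent is to simulate it, and an agent that consults the predictor and does the opposite sends that simulation into infinite regress.

```python
import sys

def predict(agent):
    # The only fully general way to predict an arbitrary agent
    # is to simulate it, i.e. to run it.
    return agent()

def contrarian():
    # Ask the predictor what we'll do, then do the opposite.
    return "right" if predict(contrarian) == "left" else "left"

if __name__ == "__main__":
    sys.setrecursionlimit(100)  # keep the inevitable regress short
    try:
        predict(contrarian)
    except RecursionError:
        # The simulation recurses forever: the predictor can never return
        # an answer that the contrarian won't immediately falsify.
        print("no perfect in-universe predictor: the prediction never halts")
```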


I still can’t fathom why people like reasoning under these constraints.

My first reaction is that it’s the “most challenging environment” to model something in: if it works under these assumptions, it always works.

But that’s just not true.

These assumptions make things easy.

Thinking about ethics (or any other philosophical issue) in the real world, with all its unknowns, fuzziness, and imperfections, is actually much harder.

So, using those four assumptions, we are reasoning under constraints that are both false and impractical for the problem we are trying to model.

Indeed, the constraints make us ignore the “real” problems in implementing a system of ethics, all of which are fairly practical.

You don’t need to come up with a model that’s that good; it just needs to be… better than the Catholic Church and most NGOs. That’s a pretty low bar: something like “don’t kill or physically injure people, and use at least 20% of the money for something other than aggrandizing yourself or perpetuating your own growth” is probably enough to get you past it.

On the other hand, getting more people to donate to you, establishing a new country that follows your more-ethical constitution, or solving the 1001 other coordination problems at hand: those are the hard problems we should think and write about.


Nor do I think this is a problem specific to effective altruism or “nerds”. Most philosophers from Socrates onwards seem to fall prey to a version of these assumptions, and it poisons their whole thinking, a poison which spills over onto the philosophical tradition as a whole.

Nor is it specific to Abrahamic religions; this sort of nonsense seems even more present in thinking stemming from Buddhist traditions, for example. For all I might want to shit on them, thinkers from the Christian tradition were able to pull their heads out of their asses long enough to come up with the scientific method.

Maybe it’s just too embarrassing to think about the real world, so we simply gravitate towards a wish-fulfillment universe to do our thinking in, and millennia of accumulated memes have made the above universe a “respectable” candidate to construct theories in. Or maybe it’s just an evolved quirk of how the minute and irrelevant part of our brain that handles symbolic thinking operates, a quirk which, for some reason, was really useful at hunting mammoths or whatever, so it stuck with us.

I say it’s embarrassing to think about the real world because whenever I do, I get ashamed of how little I can accomplish: things like “get this piece of code to run a few times faster”, “reduce the tax burden by 5%”, “convince Mary she should really pursue that research project and not become an analyst for a political committee because of slightly higher pay”, “find a slightly better alternative to our significance test for these particular datasets”, or “increase the AUC of this model by 0.04 or more”.

If you design and popularize a mosquito net that’s 10% cheaper to produce or 5% more efficient for the same price, you’ve already saved millions of lives. Most of us are not smart enough to even begin thinking about something like that, and those who are… well, it sounds so inglorious, so barbaric and insignificant. “IS THIS WHAT I’VE BEEN TRAINING FOR ALL OF THESE YEARS!?” — screams conscious symbolic cognition — “IT CAN’T BE, I NEED TO DESIGN THE AXIOMS ON WHICH GODS ARE TO SPIN THE WEB OF LIFE ITSELF!”