Purchase Optics and Results Separately

Sometimes there’s a solution that’s otherwise superior, but people do not like it for “irrational” reasons. This second-order effect makes the solution worse in practice. This concept is mostly a parallel to Yudkowsky’s Purchase Fuzzies and Utilons Separately.

For instance, a “no man left behind” policy in war makes no sense on its own terms, since it costs more lives than it saves in the long run; its real value is motivational. Sometimes this dynamic makes us waste both the money and the underlying utility it bought, for instance when forcibly extending the lifespan of terminally ill patients. Many EA cause areas exhibit the same dynamics.

When considering such problems, it’s often useful to disambiguate between optics and results. There’s a recursive dependence here; the results require good-enough optics to work out. Sometimes optics can be bought cheaper than any other marginal improvement in results. Propaganda, for instance, is remarkably effective. Be wary of Chesterton’s fence; sometimes the thing is not liked for a good reason.

If you’re getting good results with methods that have bad optics, it often makes sense to do so discreetly. “All publicity is bad publicity”; it creates unwanted optimization pressure. The abolition of the death penalty against public opinion is an interesting example of this. In general, democracy forces representatives into this dilemma.

Often there’s also a softer alternative that will lead to similar results. For instance, vice taxes have been rather effective at reducing smoking without invoking the image of limiting personal choice. Public opinion can also be changed, but that’s a lot of work. Decades of traffic-safety campaigning have clearly shaped attitudes about seatbelts and the like.

The concept itself is somewhat prone to the dynamics it describes. If you get caught doing this you’ll be (correctly) accused of deception. If you’re open about it, it doesn’t work and you’ll still be (incorrectly) accused of deception. This makes legibility expensive. If you’re forced to buy both results and optics at the same place, it limits viable methods.

This also applies on a personal level, but I’m reluctant to provide any examples for the above-mentioned reasons. I have written on mitigation methods in Perhaps you should suspect me as well and The Aura of A Dark Lord. These approaches might not be appropriate for you, in which case I’d suggest The Elephant in the Brain, which has the downside of making you more aware of how you deceive others and thus worse at it.