Universal Love Integration Test: Hitler

I’m still not satisfied with this post, but thought I’d ship it since I refer to the concept a fair amount. I write this more as “someone who feels some kernel of universal-love-shaped thing”, but, like, I dunno man, I’m not a love expert.


I think “love” means “To care about someone such that they are an extension of yourself (at least to some degree).” This includes caring about the things they care about on their own terms (but can still include enforcing boundaries, preventing them from harming others, etc).

I think “love” matters most when it’s backed up by actual actions. If you merely “feel like you care in your heart”, but don’t take any actions about that, you’re kind of kidding yourself. (I think there is still some kind of interesting relational stance you can have that doesn’t route through action, but it’s relatively weaksauce as love goes)

What, then, would “Universal Love” mean? I can’t possibly love everyone in a way that grounds out in action. I nonetheless have an intuition that universal love is important to me. Is it real? Does it make any sense?

I think part of what makes it real is having an intention that if I had more resources, I would try to take concrete actions to both help, and connect with, everyone.

In this post I explore this in more detail, and check “okay how actually do I relate to, say, Hitler? Do I love him?”.

My worldview was shaped by hippies and nerds. This is basically a historical accident – I could have easily been raised by a different combination of cultures. But here I am.

One facet of this worldview is “everyone deserves compassion/​empathy”. And, I think, my ideal self loves everyone.

(I don’t think everyone else’s ideal self necessarily loves everyone. This is just one particular relational stance you can have to the world. But, it’s mine)

What exactly does this mean, though? Does it make sense?

I can’t create a whole new worldview from scratch, but I can look for inconsistencies in my existing worldview, and notice when it either conflicts with itself, or conflicts with reality, and figure out new pieces of worldview that seem good according to my current values. Over the past 10 years or so, my worldview has gotten a healthy dose of game theory, and practical experience with various community organizing, worldsaving efforts, etc.

I aspire towards a robust morality, which includes having compassion for everyone while still holding them accountable for their actions. I.e. similar to the sort of thing the theunitofcaring blog talks about:

I don’t know how to give everyone an environment in which they’ll thrive. It’s probably absurdly hard, in lots of cases it is, in practical terms, impossible. But I basically always feel like it’s the point, and that anything else is missing the point. There are people whose brains are permanently-given-our-current-capabilities stuck functioning the way my brain functioned when I was very sick. And I encounter, sometimes, “individual responsibility” people who say “lazy, unproductive, unreliable people who choose not to work choose their circumstances; if they go to bed hungry then, yes, they deserve to be hungry; what else could ‘deserve’ possibly mean?” They don’t think they’re talking to me; I have a six-figure tech job and do it well and save for retirement and pay my bills, just like them. But I did not deserve to be hungry when I was sick, either, and I would not deserve to be hungry if I’d never gotten better.

What else could ‘deserve’ possibly mean? When I use it, I am pointing at the ‘give everyone an environment in which they’ll thrive’ thing. People with terminal cancer deserve a cure even though right now we don’t have one; deserving isn’t a claim about what we have, but about what we would want to give out if we had it. And so, to me, horrible people who abuse others all the time deserve an environment in which they would thrive and not be able to abuse others, even if we can’t provide one and don’t even have any idea what it would look like and sensibly are prioritizing other people who don’t abuse others. If you have experiences, you deserve good experiences; if you have feelings, you deserve happy feelings; if you want to be loved, you are worthy of love. You flourishing is a moral good; everybody flourishing is in fact the only moral good, the entire thing morality is for. Your actions should have consequences, sure, and we should figure out how to build a world where those consequences are ones that you can handle, and where you can amend the things that you do wrong. When you hurt people, that can change what “you thriving” looks like, because part of thriving is fixing, and growing from, things you have done wrong; but nothing you do can change that it is good for you to thrive.

I reject that I ever deserved to starve, and so I reject that anyone, ever, deserves to starve. I reject that I ever deserved to suffer, and so I reject that anyone, ever, deserves to suffer. Happiness is good. Your happiness is good. And without a single exception anywhere I want you to thrive.

The rest of this post is me somewhat autistically exploring what I want out of Universal Love, and then running the obvious integration test to check that it’s actually universal: “do I love Hitler, tho?”.

Musings on Game Theoretically Sound Love

I want to distinguish: “Wanting people to thrive”, “Empathy”, and “Love.”

“Steering the world such that more people thrive” is the practical outcome I actually care about. “Wanting people to thrive” is a pretty obvious cognitive strategy that steers towards that directly. Empathy and love are more specific memetic/cognitive implementations, which (due to evolutionary and memetic history) I have come to find particularly meaningful.

Importantly, none of love, empathy, or even “wanting people to thrive” is necessary or sufficient to actually cause people to thrive. Well-intentioned empathetic people have been known to pave the way to hell, and selfish business owners can help people tremendously.

Empathy vs Love

I feel pretty confident that “universal empathy” is an important part of my unfolded values.

I’m somewhat less confident about “love.” Love is also a not-super-well-defined word, so for purposes of this post I’d say: Love is when I choose to care about someone/​something in a way that is… an extension of myself. My utility function cares directly about their utility function.

Universal vs Unconditional

You could call this “compassion for everyone” Universal Love (in that it is for everyone) and Unconditional Love (in that it’s not gated on them behaving a particular way). This post will slightly conflate these things. (I think it’s not that sensical to have Universal Love that isn’t also Unconditional? Clearly there are lots of people for whom the conditions of conditional love don’t apply, and as soon as you start making exceptions it quickly stops being universal.)

Problems with Empathy

There’s an obvious problem with empathy. Many people have a naive conception of empathy that results in them becoming a doormat. They see someone who needs help, they drop everything to help them. They do this over and over, forgetting that they need to maintain slack to handle emergencies or notice subtle things, or they just never get around to doing the things they value for themselves.

Or: they don’t immediately drop everything to help, but their empathy still eats up attention, and attention is one of your most important resources.

Seeing this failure mode, some people come to see empathy or love as a weakness. I think empathy is both important to my values and practically useful, and I think there are ways of having empathy without being a doormat.

I think there exists Game Theoretically Sound Empathy. I also think there are conceptions of empathy that are… industrial grade. By which I mean, Elon Musk or Steve Jobs could have adopted them and become better rather than worse at their jobs. (My conception of Musk/Jobs is of kinda non-empathetic assholes who are nonetheless great at their jobs, and I think shoving empathy into the mix without being deliberate about it would be more likely to fuck up the process than help.)

I’m not Elon Musk, so I’m not sure my current conception is actually any good, but it seems like there is at least a Pareto frontier of empathy that is further ahead than most people’s conception.


I’ve had (and witnessed) some confusion about unconditional love. There is a certain type of love that is absolutely conditional, and absolutely should be conditional, and yes, it can be taken away from you, and it’s reasonable to be scared of that. This is the type of love where people will actually want to be in your life, and give you material support, and hang out with you.

But I feel like there’s an important sense in which I can still love people that I don’t want to be part of my life. I think I even love people who I think society should probably give either life-imprisonment or capital punishment (in our current world). If my child turns out to be a serial killer who also is constantly manipulating me, such that even visiting them in jail would be psychologically harmful to me… well, do I love that child or not?

I think so. But a cynical part of me asks “yeah, but do you love them in a way that isn’t bullshit?”.

It’s easy to love people in a way that doesn’t require any effort or action on your part. But should anyone give a shit about you loving them that way? I have a friend who, when they hear polyamorous people say “love is infinite”, say “oh, so you mean the kind of love that’s important to you is the kind where you don’t put actual work in and I don’t really get anything significant out of it?”. (Other polyamorous people have said “love is infinite but time is finite”, which seems more realistic to me.)


I think this is a pretty serious question, and the answer might be “no.” It is entirely possible that this is a distorted narrative that’s important to my self image and is actually nonsense. But I don’t think it is. Or, rather, if it is, I dunno man I think I am made out of this narrative enough that, at least for me personally, if the narrative has flaws, my job is to fix the flaws rather than discard it.

But I think it’s not that complicated to resolve this. The main way I’d operationalize it is: I hold an intention that, if I had more resources, I would try to both help, and connect emotionally with, people whom it’s not currently safe or worthwhile to connect with.

I might have a family member or friend who is emotionally manipulative. I have a long history with them, there are still parts of them I enjoy. I find that them being around me is fucking up my life. I’ve tried communicating with them about it, and they haven’t changed. So I start distancing from them, or cutting them off.

To say “I still love them” means that if I see opportunities to help them thrive, or to connect emotionally with them, at low (or “reasonable”) cost to myself, I will take those opportunities. If, later on, I gain more emotional skills such that I don’t feel manipulated by them, I might choose to let them back into my life even if they haven’t changed. (I might still keep them at some distance to protect other people who don’t have that emotional/social resilience.)

I won’t necessarily prioritize this highly compared to lots of other things I value. But it notably makes the list of things to prioritize someday, and I still hold the relational stance of this person in particular mattering to me.

At more extreme levels (e.g. my son grows up to be a serial killer), I might do things like visit him in jail.

One thing I do, even for people who are not part of my life at all (e.g. serial-killer son is executed, or manipulative-aunt is just too abusive to be worth dealing with), is keep an eye out for ways to help their agency that keep them in my memory, and which don’t require interacting with them. (E.g. I know my aunt cared particularly about X, and I think X was a good thing, so on the margin I look for ways to help X.)

Loving Hitler

So… it so happens that I have hope for a Great Transhumanist Future where, among other things, we can attempt to run ancestor simulations with at least some degree of fidelity.

I have slightly different answers to “what’s up with loving Hitler?” in worlds where ancestor simulations are real, and in worlds where we can merely remember people’s stories. But I think the principles are similar.

If I had infinite energy and time, well, I would eventually:

a) Think for like a thousand subjective years to make sure I’m not philosophically confused about love, game theory, morality, and my overall goals.

b) Learn as much as I could about Hitler, to model the real version of him as best I could, rather than a vague cartoon of him (with caveats about prioritizing him alongside all the other nigh-infinite people to spend time modeling).

And then… well, I’m not sure what exactly comes next, because it depends on how steps (a) and (b) play out. But the sort of thing I imagine happening next would be something like: run a simulation of Hitler at various times in his life, at least one of which is towards the end, right before he killed himself. (If we don’t have real-quality ancestor simulations, “simulate him as best I can in my head” is probably the best I’ve got.)

For the simulated end-of-life Hitler, I imagine appearing in that simulation. I don’t know exactly what I’d say or what situation I’d want to create (see again step (a): think about it for a thousand years), but it might be something like:

“Hey man. So, um, I think you fucked up here. You did, indeed, fuck up so hard that you don’t get to hang out with the other ancestor simulations, and even though I have infinite energy I’m not giving you a personal high resolution paradise simulation. I’m gonna give you a chill, mediocre but serviceable sim-world that is good enough to give you space to think and reflect and decide what you want.

“I’m gonna be here to listen to you as much as you need, and talk to you about things if that’s helpful. And meanwhile, no matter what you decide… man, you were a human.

“You had some combination of bad genes or wrong childhood or bad decisions, and you hurt people on an industrial scale. And you don’t get to have all the things you want until you’ve somehow processed why that isn’t okay, and actually learned to be better.

“But, you were still a human. In the endless void of the primordial before-times, you were one of the patterns that came to feel desire and drive. You told stories that led people to genocide, and that’s terrible, but you told stories, and had feelings, and thought about things. You had struggles and hurt. You probably died scared and alone. And there is something about that which is precious, which I choose to care about no matter how many terrible things you did.

“Your actions put you into the backlog. I don’t try to help people like you until all the good people are helped. I’m actually wary of even writing this blogpost about you because it feels like it’s unfairly rewarding you (above other less famous people) simply for the extent of your cruelty.

“But I would eventually help you, if I could. And while I don’t spend much empathetic cognition on you outside of weird thought experiments, I do hold you in the relational stance that I hold all humans in.”

...this may not be super reassuring to a hypothetical Real Hitler right before he commits suicide, if he were to somehow know that I would look upon him in such a stance.

But it currently feels to me like this is real/meaningful. I could choose not to hold this stance towards Hitler at all, and apathetically let him fade into nothing.