Should EA’s be Superrational cooperators?

Back in 2012, while visiting Leverage Research, I was amazed by the level of cooperation I got from Mark in everyday situations. Mark wasn't just nice, or kind, or generous. Mark seemed to be playing a different game than everyone else.

If someone needed X, and Mark had X, he would provide X to them. This was true for lending, but also for giving things away.

If someone needed attention directed to a particular topic, Mark would direct it there.

You get the picture. Faced with prisoner's dilemmas, Mark would cooperate. Faced with tragedies of the commons, Mark would cooperate. Faced with non-egalitarian distributions of resources, time, or luck (which are convoluted forms of the dictator game), Mark would rearrange resources without any indexical evaluation: the action would be the same, the consequentialist one, regardless of which side of the dispute Mark happened to be on.

I never got over that impression: the impression that I could try to be as cooperative as my idealized fiction of Mark.

In game-theoretic terms, Mark was a Cooperational agent, one of six kinds of agents distinguished in research on game-theoretic scenarios:

  1. Altruistic (MaxOther): maximize the other's payoff

  2. Cooperational (MaxSum): maximize the sum of everyone's payoffs

  3. Individualist (MaxOwn): maximize one's own payoff

  4. Equalitarian (MinDiff): minimize the difference between payoffs

  5. Competitive (MaxDiff): maximize the difference in one's own favor

  6. Aggressive (MinOther): minimize the other's payoff

Under these definitions, what we call Effective Altruism would more precisely be called Effective Cooperation. We call it "altruism" because even the most parochial EAs care about a set containing a minimum of 7 billion minds, and at that scale MaxSum ≈ MaxOther to a first approximation: one agent's own payoff is a negligible term in a sum over billions.
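
As a minimal sketch (the payoff representation and numbers are my own illustration, not taken from any particular study), the six orientations can be written as utility functions over an (own, other) payoff pair, which also makes the MaxSum ≈ MaxOther approximation concrete:

```python
# Illustrative sketch: the six orientations as utility functions over
# an (own, other) payoff pair. Names follow the list above.

orientations = {
    "Altruistic (MaxOther)":  lambda own, other: other,
    "Cooperational (MaxSum)": lambda own, other: own + other,
    "Individualist (MaxOwn)": lambda own, other: own,
    "Equalitarian (MinDiff)": lambda own, other: -abs(own - other),
    "Competitive (MaxDiff)":  lambda own, other: own - other,
    "Aggressive (MinOther)":  lambda own, other: -other,
}

# MaxSum ~ MaxOther when "other" aggregates ~7 billion minds:
# one agent's own payoff is a negligible term in the sum.
own, other_total = 1.0, 7e9
maxsum = orientations["Cooperational (MaxSum)"](own, other_total)
maxother = orientations["Altruistic (MaxOther)"](own, other_total)
print(maxsum, maxother)  # 7000000001.0 vs 7000000000.0: nearly identical
```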

Locally, however, the distinction makes sense. In biology, "altruism" usually refers to a third concept, different from both the "A" in EA and the Altruistic (MaxOther) orientation above: acting in such a way that Other > Own, without reference to maximizing or minimizing anything, since evolution designs adaptation executors, not maximizers.

A globally Cooperational agent acts as a consequentialist globally. So does a globally Altruistic one.

The question, then, is:

How should a consequentialist act locally?

The mathematical answer is obviously: as a Cooperational (Coo) agent, since for a consequentialist the total outcome is what counts. What real people actually do is a mix of Coo and Individualist (Ind).

My suggestion is that we harness our undesirable yet unavoidable tribal instinct, the one that separates Us from Them: always act as Coos with Effective Altruists, and reserve the Coo/Ind mix for non-EAs. That is what Mark did.
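
As a toy formalization of that policy (the function, the is-EA flag, and the 0.5 mixing weight are all illustrative assumptions, not anything specified above):

```python
import random

def choose_strategy(counterpart_is_ea: bool, coo_weight: float = 0.5) -> str:
    """Toy version of the suggested policy: play the Cooperational (Coo)
    strategy unconditionally with fellow EAs; with everyone else, mix
    Coo and Individualist (Ind) play, as real people tend to do.
    The coo_weight mixing probability is an arbitrary placeholder."""
    if counterpart_is_ea:
        return "Coo"
    return "Coo" if random.random() < coo_weight else "Ind"

# Example: always Coo inside the EA tribe, a Coo/Ind mix outside it.
print(choose_strategy(counterpart_is_ea=True))   # -> "Coo"
print(choose_strategy(counterpart_is_ea=False))  # -> "Coo" or "Ind"
```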