Should Effective Altruism be at war with North Korea?


Summary: Political constraints cause supposedly objective technocratic deliberations to adopt frames that any reasonable third party would interpret as picking a side. I explore the case of North Korea in the context of nuclear disarmament rhetoric as an illustrative example of the general trend, and claim that people and institutions can make better choices and generate better options by modeling this dynamic explicitly. In particular, Effective Altruism and academic Utilitarianism can plausibly claim to be the British Empire’s central decisionmaking mechanism, and as such have more options than their current story can consider.

Context

I wrote to my friend Georgia in response to this Tumblr post.

Asymmetric disarmament rhetoric

Ben: It feels increasingly sketchy to me to call tiny countries surrounded by hostile regimes “threatening” for developing nuclear capacity, when US official policy for decades has been to threaten the world with nuclear genocide.

Strong recommendation to read Daniel Ellsberg’s The Doomsday Machine.

Georgia: Book review: The Doomsday Machine

So I get that the US’ nuclear policy was and probably is a nightmare that’s repeatedly skirted apocalypse. That doesn’t make North Korea’s program better.

Ben [feeling pretty sheepish, having just strongly recommended a book my friend just reviewed on her blog]: “Threatening” just seems like a really weird word for it. This isn’t about whether things cause local harm in expectation—it’s about the frame in which agents trying to organize to defend themselves are the aggressors, rather than the agent insisting on global domination.

Georgia: I agree that it’s not the best word to describe it. I do mean “threatening the global peace” or something rather than “threatening to the US as an entity.” But, I do in fact think that North Korea building nukes is pretty aggressive. (The US is too, for sure!)

Maybe North Korea would feel less need to defend itself from other large countries if it weren’t a literal dictatorship—being an oppressive dictatorship with nukes is strictly worse.

Ben: What’s the underlying thing you’re modeling, such that you need a term like “aggression” or “threatening,” and what role does it play in that model?

Georgia: Something like: destabilizing to the global order and to not-having-nuclear-wars; it increases risk to people and makes the world more dangerous. With “aggressive” I was responding to your “aggressors,” but I may have misunderstood what you meant by that.

Ben: This feels like a frame that fundamentally doesn’t care about distinguishing what I’d call aggression from what I’d call defense—if they do a thing that escalates a conflict, you use the same word for it regardless. There’s some sense in which this is the same thing as being “disagreeable” in action.

Georgia: You’re right. The regime is building nukes at least in large part because they feel threatened and as an active-defense kind of thing. This is also terrible for global stability, peace, etc.

Ben: If I try to ground out my objection to that language a bit more clearly, it’s that a focus on which agent is proximately escalating a conflict, without making distinctions between the kinds of escalation that seem like they’re about controlling others’ internal behavior and the kinds that are about preventing others from controlling your internal behavior, is an implicit demand that everyone immediately submit completely to the dominant player.

Georgia: It’s pretty hard to make those kinds of distinctions with a single word choice, but I agree that’s an important distinction.

Ben: I think this is exactly WHY agents like North Korea see the need to develop a nuclear deterrent. (Plus the dominant player does not have a great track record for safety.) Do you see how from my perspective that amounts to “North Korea should submit to US domination because there will be less fighting that way,” and why I’d find that sketchy?

Maybe not sketchy coming from a disinterested Martian, but very sketchy coming from someone in one of the social classes that benefit the most from US global dominance?

Georgia: Kind of, but I believe this in the nuclear arena in particular, not in general conflict or sociopolitical tensions or whatever. Nuclear war has some very specific dynamics and risks.

Influence and diplomacy

Ben: The obvious thing from an Effective Altruist perspective would be to try to establish diplomatic contact between Oxford EAs and the North Koreans, to see if there’s a compromise version of Utilitarianism that satisfies both parties such that North Korea is happy being folded into the Anglosphere, and then push that version of Utilitarianism in academia.

Georgia: That’s not obvious. Wait, are you proposing that?

Ben: It might not work, but “stronger AI offers weaker AI part of its utility function in exchange for conceding instead of fighting” is the obvious way for AGIs to resolve conflicts, insofar as trust can be established. (This method of resolving disputes is also probably part of why animals have sex.)
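A minimal toy sketch of that kind of trade, assuming each side’s preferences can be summarized as a utility function over outcomes (all names, weights, and probabilities below are invented for illustration):

```python
# Toy model of "merge utility functions instead of fighting".
# Two agents with different preferences agree to jointly maximize a
# weighted blend of their utility functions, with the weight set by
# relative power. Every name and number here is illustrative.

outcomes = ["strong_wins_war", "weak_wins_war", "status_quo", "merged_policy"]

u_strong = {"strong_wins_war": 0.9, "weak_wins_war": 0.0,
            "status_quo": 0.6, "merged_policy": 0.8}
u_weak = {"strong_wins_war": 0.0, "weak_wins_war": 0.9,
          "status_quo": 0.4, "merged_policy": 0.6}

p_strong_wins = 0.8  # the stronger agent usually wins a war
war_cost = 0.3       # fighting destroys value for both sides

def expected_war_utility(u):
    """Utility of fighting: a lottery over war outcomes, minus the cost."""
    return (p_strong_wins * u["strong_wins_war"]
            + (1 - p_strong_wins) * u["weak_wins_war"]) - war_cost

# The merged agent maximizes w * U_strong + (1 - w) * U_weak,
# where w reflects the stronger party's bargaining position.
w = 0.8

def merged_utility(outcome):
    return w * u_strong[outcome] + (1 - w) * u_weak[outcome]

best = max(outcomes, key=merged_utility)

# With these numbers, both sides prefer the merged policy to war:
assert u_strong["merged_policy"] > expected_war_utility(u_strong)  # 0.8 > 0.42
assert u_weak["merged_policy"] > expected_war_utility(u_weak)      # 0.6 > -0.12
print(best)  # merged_policy
```

The point of the toy model is just that when fighting destroys value, there is usually some weight at which both sides prefer the blended objective to the lottery over war outcomes; establishing enough trust to make the merge credible is the hard part.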

Georgia: I don’t think academic philosophy has any direct influence on like political actions. (Oh, no, you like Plato and stuff, I probably just kicked a hornet’s nest.) Slightly better odds on the Oxford EAs being able to influence political powers in some major way.

Ben: Academia has hella indirect influence; I think Keynes was right when he said that “practical men, who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back.” Though usually on longer timescales.

FHI is successfully positioning itself as an advisor to the UK government on AI safety.

Georgia: Yeah, they are doing some cool stuff like that, do have political ties, etc., which is why I give them better odds.

Ben: Utilitarianism is nominally moving substantial amounts of money per year, and quite a lot if you count Good Ventures being aligned with GiveWell due to Peter Singer’s recommendation.

Georgia: That’s true.

Ben: The whole QALY paradigm is based on Utilitarianism. And it seems to me like you either have to believe

(a) that this means academic Utilitarianism has been extremely influential, or

(b) the whole EA enterprise is profiting from the impression that it’s Utilitarian but then doing quite different stuff, in a way that, if not literally fraud, is definitely a bait-and-switch.

Georgia: I’m persuaded that EA has been pretty damn influential and influenced by academic utilitarianism. Wouldn’t trying to convince EAs directly or whatever instead of routing through academia be better?

Ben: Good point, doesn’t have to be exclusively academic—you’d want a mixture of channels since some are longer-lived than others, and you don’t know which ones the North Koreans are most interested in. Money now vs power within the Anglo coordination mechanism later.

Georgia: The other half of my incredulity is that fusing your value functions does not seem like a good silver bullet for conflicts.

Ben: It worked for America, sort of. I think it’s more like, rarely tried because people aren’t thinking systematically about this stuff. Nearly no one has the kind of perspective that can do proper diplomacy, as opposed to clarity-opposing power games.

Georgia: But saying that an academic push to make a fused value function is obviously the most effective solution for a major conflict seems ridiculous on its face.

Is it coherent to model an institution as an agent?

Ben: I think the perspective in which this doesn’t work is one that thinks modeling NK as an agent that can make decisions is fundamentally incoherent, and also that taking claims to be doing utilitarian reasoning at face value is incoherent. Either there are agents with utility functions that can and do represent their preferences, or there aren’t.

Georgia: Surely they can be both—like, conglomerations of human brains aren’t really going to perfectly follow any kind of strategy, but it can still make sense to identify entities that basically do the decisionmaking and act more or less in accordance with some values, and treat that as a unit.

It is both true that “the North Korean regime is composed of multiple humans with their own goals and meat brains” and that “the North Korean regime makes decisions for the country and usually follows self-preservationist decisionmaking.”

Ben: I’m not sure which mode of analysis is correct, but I am sure that doing the reconciliation to clarify what the different coherent perspectives are is a strong step in the right direction.

Georgia: Your goal seems good!

Philosophy as perspective

Ben: Maybe EA/Utilitarianism should side with the Anglo empire against NK, but if so, it should probably account for that choice internally, if it wants to be, and be construed as, a rational agent rather than a fundamentally political actor cognitively constrained by institutional loyalties.

Thanks for engaging with this—I hadn’t really thought through the concrete implications of the fact that any system of coordinated action is a “side” or agent in a decision-theoretic landscape with the potential for conflict.

That’s the conceptual connection between my sense that calling North Korea’s nukes “threatening” is mainly just shoring up America’s rhetorical position as the legitimate world empire, and my sense that reasoning about ends that doesn’t concern itself with the reproduction of the group doing the reasoning is implicitly totalitarian in a way that nearly no one actually wants.

Georgia: “With the reproduction of the group doing the reasoning”—like spreading their values/reasoning-generators or something?

Ben: Something like that.

If you want philosopher kings to rule, you need a system adequate to keep them in power, when plenty of non-philosophers have an incentive to try to get in on the action, and then that ends up constraining most of your choices, so you don’t end up benefiting much from the philosophers’ competence!

So you build a totalitarian regime to try to hold onto this extremely fragile arrangement, and it fails anyway. The amount of narrative control they have to exert to prevent people from subverting the system by which they’re in charge ends up being huge.

(There’s some ambiguity, since part of the reason for control is education into virtue—but if you’re not doing that, there’s not really much of a point of having philosophers in charge anyway.)

I’m definitely giving you a summary run through a filter, but that’s true of all summaries, and I don’t think mine is less true than the others, just differently slanted.

Related: On Geopolitical Domination as a Service