Should Effective Altruism be at war with North Korea?

Link post

Summary: Political constraints cause supposedly objective technocratic deliberations to adopt frames that any reasonable third party would interpret as picking a side. I explore the case of North Korea in the context of nuclear disarmament rhetoric as an illustrative example of the general trend, and claim that people and institutions can make better choices and generate better options by modeling this dynamic explicitly. In particular, Effective Altruism and academic Utilitarianism can plausibly claim to be the British Empire’s central decisionmaking mechanism, and as such have more options than their current story can consider.

Context

I wrote to my friend Georgia in response to this Tumblr post.

Asymmetric disarmament rhetoric

Ben: It feels increasingly sketchy to me to call tiny countries surrounded by hostile regimes “threatening” for developing nuclear capacity, when US official policy for decades has been to threaten the world with nuclear genocide.

Strong recommendation to read Daniel Ellsberg’s The Doomsday Machine.

Georgia: Book review: The Doomsday Machine

So I get that the US’ nuclear policy was and probably is a nightmare that’s repeatedly skirted apocalypse. That doesn’t make North Korea’s program better.

Ben [feeling pretty sheepish, having just strongly recommended a book my friend just reviewed on her blog]: “Threatening” just seems like a really weird word for it. This isn’t about whether things cause local harm in expectation—it’s about the frame in which agents trying to organize to defend themselves are the aggressors, rather than the agent insisting on global domination.

Georgia: I agree that it’s not the best word to describe it. I do mean “threatening the global peace” or something rather than “threatening to the US as an entity.” But I do in fact think that North Korea building nukes is pretty aggressive. (The US is too, for sure!)

Maybe North Korea would feel less need to defend itself from other large countries if it weren’t a literal dictatorship—being an oppressive dictatorship with nukes is strictly worse.

Ben: What’s the underlying thing you’re modeling, such that you need a term like “aggression” or “threatening,” and what role does it play in that model?

Georgia: Something like destabilizing to the global order and not-having-nuclear-wars, increases risk to people, makes the world more dangerous. With “aggressive” I was responding to your “aggressors” but may have misunderstood what you meant by that.

Ben: This feels like a frame that fundamentally doesn’t care about distinguishing what I’d call aggression from what I’d call defense—if they do a thing that escalates a conflict, you use the same word for it regardless. There’s some sense in which this is the same thing as being “disagreeable” in action.

Georgia: You’re right. The regime is building nukes at least in large part because they feel threatened and as an active-defense kind of thing. This is also terrible for global stability, peace, etc.

Ben: If I try to ground out my objection to that language a bit more clearly, it’s that a focus on which agent is proximately escalating a conflict, without making distinctions about the kinds of escalation that seem like they’re about controlling others’ internal behavior vs preventing others from controlling your internal behavior, is an implicit demand that everyone immediately submit completely to the dominant player.

Georgia: It’s pretty hard to make those kinds of distinctions with a single word choice, but I agree that’s an important distinction.

Ben: I think this is exactly WHY agents like North Korea see the need to develop a nuclear deterrent. (Plus the dominant player does not have a great track record for safety.) Do you see how from my perspective that amounts to “North Korea should submit to US domination because there will be less fighting that way,” and why I’d find that sketchy?

Maybe not sketchy coming from a disinterested Martian, but very sketchy coming from someone in one of the social classes that benefit the most from US global dominance?

Georgia: Kind of, but I believe this in the nuclear arena in particular, not in general conflict or sociopolitical tensions or whatever. Nuclear war has some very specific dynamics and risks.

Influence and diplomacy

Ben: The obvious thing from an Effective Altruist perspective would be to try to establish diplomatic contact between Oxford EAs and the North Koreans, to see if there’s a compromise version of Utilitarianism that satisfies both parties such that North Korea is happy being folded into the Anglosphere, and then push that version of Utilitarianism in academia.

Georgia: That’s not obvious. Wait, are you proposing that?

Ben: It might not work, but “stronger AI offers weaker AI part of its utility function in exchange for conceding instead of fighting” is the obvious way for AGIs to resolve conflicts, insofar as trust can be established. (This method of resolving disputes is also probably part of why animals have sex.)
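
To make the bargaining idea concrete, here is a minimal toy sketch of utility-function fusion, under invented assumptions: the outcomes, payoff numbers, and blend weights below are all hypothetical illustrations, not anything from the conversation. The stronger party proposes acting on a weighted blend of the two utility functions, and the deal only goes through if each side expects at least what it would get from fighting.

```python
# Toy sketch of utility-function fusion as conflict resolution.
# All outcomes, payoffs, and weights are invented for illustration.

outcomes = ["cooperate", "trade", "isolate"]

# Hypothetical utilities each side assigns to the possible outcomes.
u_strong = {"cooperate": 8, "trade": 6, "isolate": 2}
u_weak = {"cooperate": 3, "trade": 7, "isolate": 4}

# What each side expects to get from fighting instead of dealing.
conflict_payoff = {"strong": 4, "weak": 2}


def fused_choice(weight_strong):
    """Outcome chosen by an agent acting on the blended utility function."""
    blend = lambda o: weight_strong * u_strong[o] + (1 - weight_strong) * u_weak[o]
    return max(outcomes, key=blend)


def deal_acceptable(weight_strong):
    """The fusion only beats fighting if both sides clear their conflict payoff."""
    o = fused_choice(weight_strong)
    return u_strong[o] >= conflict_payoff["strong"] and u_weak[o] >= conflict_payoff["weak"]


# Scan a few blend weights to see which fused policies both parties would accept.
for w in (0.9, 0.7, 0.5):
    print(w, fused_choice(w), deal_acceptable(w))
```

The point of the sketch is just that there is usually a whole range of blends the weaker party could accept rather than fight, so the option space is wider than “submit or resist.”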

Georgia: I don’t think academic philosophy has any direct influence on, like, political actions. (Oh, no, you like Plato and stuff, I probably just kicked a hornet’s nest.) Slightly better odds on the Oxford EAs being able to influence political powers in some major way.

Ben: Academia has hella indirect influence, I think. I think Keynes was right when he said that “practical men who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back.” Though usually on longer timescales.

FHI is successfully positioning itself as an advisor to the UK government on AI safety.

Georgia: Yeah, they are doing some cool stuff like that, do have political ties, etc., which is why I give them better odds.

Ben: Utilitarianism is nominally moving substantial amounts of money per year, and quite a lot if you count Good Ventures being aligned with GiveWell due to Peter Singer’s recommendation.

Ge­or­gia: That’s true.

Ben: The whole QALY paradigm is based on Utilitarianism. And it seems to me like you either have to believe

(a) that this means academic Utilitarianism has been extremely influential, or

(b) the whole EA enterprise is profiting from the impression that it’s Utilitarian but then doing quite different stuff, in a way that, if not literally fraud, is definitely a bait-and-switch.

Georgia: I’m persuaded that EA has been pretty damn influential and influenced by academic utilitarianism. Wouldn’t trying to convince EAs directly or whatever instead of routing through academia be better?

Ben: Good point, doesn’t have to be exclusively academic—you’d want a mixture of channels since some are longer-lived than others, and you don’t know which ones the North Koreans are most interested in. Money now vs power within the Anglo coordination mechanism later.

Georgia: The other half of my incredulity is that fusing your value functions does not seem like a good silver bullet for conflicts.

Ben: It worked for America, sort of. I think it’s more like, rarely tried because people aren’t thinking systematically about this stuff. Nearly no one has the kind of perspective that can do proper diplomacy, as opposed to clarity-opposing power games.

Georgia: But saying that an academic push to make a fused value function is obviously the most effective solution for a major conflict seems ridiculous on its face.

Is it coherent to model an institution as an agent?

Ben: I think the perspective in which this doesn’t work is one that thinks modeling NK as an agent that can make decisions is fundamentally incoherent, and also that taking claims to be doing utilitarian reasoning at face value is incoherent. Either there are agents with utility functions that can and do represent their preferences, or there aren’t.

Georgia: Surely they can be both—like, conglomerations of human brains aren’t really going to perfectly follow any kind of strategy, but it can still make sense to identify entities that basically do the decisionmaking and act more or less in accordance with some values, and treat that as a unit.

It is both true that “the North Korean regime is composed of multiple humans with their own goals and meat brains” and that “the North Korean regime makes decisions for the country and usually follows self-preservationist decisionmaking.”

Ben: I’m not sure which mode of analysis is correct, but I am sure that doing the reconciliation to clarify what the different coherent perspectives are is a strong step in the right direction.

Ge­or­gia: Your goal seems good!

Philosophy as perspective

Ben: Maybe EA/Utilitarianism should side with the Anglo empire against NK, but if so, it should probably account for that choice internally, if it wants to be and be construed as a rational agent rather than a fundamentally political actor cognitively constrained by institutional loyalties.

Thanks for engaging with this—I hadn’t really thought through the concrete implications of the fact that any system of coordinated action is a “side” or agent in a decision-theoretic landscape with the potential for conflict.

That’s the conceptual connection between my sense that calling North Korea’s nukes “threatening” is mainly just shoring up America’s rhetorical position as the legitimate world empire, and my sense that reasoning about ends that doesn’t concern itself with the reproduction of the group doing the reasoning is implicitly totalitarian in a way that nearly no one actually wants.

Georgia: “With the reproduction of the group doing the reasoning”—like spreading their values/reasoning-generators or something?

Ben: Something like that.

If you want philosopher kings to rule, you need a system adequate to keep them in power when plenty of non-philosophers have an incentive to try to get in on the action, and then that ends up constraining most of your choices, so you don’t end up benefiting much from the philosophers’ competence!

So you build a totalitarian regime to try to hold onto this extremely fragile arrangement, and it fails anyway. The amount of narrative control they have to exert to prevent people from subverting the system by which they’re in charge ends up being huge.

(There’s some ambiguity, since part of the reason for control is education into virtue—but if you’re not doing that, there’s not really much of a point of having philosophers in charge anyway.)

I’m definitely giving you a summary run through a filter, but that’s true of all summaries, and I don’t think mine is less true than the others, just differently slanted.

Related: ON GEOPOLITICAL DOMINATION AS A SERVICE