Defense against discourse

Link post

So, some writer named Cathy O’Neil wrote about futurists’ opinions about AI risk. This piece focused on futurists as social groups with different incentives, and didn’t really engage with the content of their arguments. Instead, she pointed out considerations like this:

First up: the people who believe in the singularity and are not worried about it. […] These futurists are ready and willing to install hardware in their brains because, as they are mostly young or middle-age white men, they have never been oppressed.

She doesn’t engage with the content of their arguments about the future. I used to find this sort of thing inexplicable and annoying. Now I just find it sad but reasonable.

O’Neil is operating under the assumption that the denotative content of the futurists’ arguments is not relevant, except insofar as it affects the enactive content of their speech. In other words, their ideology is part of a process of coalition formation, and taking it seriously is for suckers.

AI and ad hominem

Scott Alexander of Slate Star Codex recently complained about O’Neil’s writing:

It purports to explain what we should think about the future, but never makes a real argument for it. It starts by suggesting there are two important axes on which futurists can differ: optimism vs. pessimism, and belief in a singularity. So you can end up with utopian singularitarians, dystopian singularitarians, utopian incrementalists, and dystopian incrementalists. We know the first three groups are wrong, because many of their members are “young or middle-age white men” who “have never been oppressed”. On the other hand, the last group contains “majority women, gay men, and people of color”. Therefore, the last group is right, there will be no singularity, and the future will be bad.

[…]

The author never even begins to give any argument about why the future will be good or bad, or why a singularity might or might not happen. I’m not sure she even realizes this is an option, or the sort of thing some people might think relevant.

Scott doesn’t have a solution to the problem, but he’s taking the right first step—trying to create common knowledge about the problem, and calling for others to do the same:

I wish ignoring this kind of thing was an option, but this is how our culture relates to things now. It seems important to mention that, to have it out in the open, so that people who turn out their noses at responding to this kind of thing don’t wake up one morning and find themselves boxed in. And if you’ve got to call out crappy non-reasoning sometime, then meh, this article seems as good an example as any.

Scott’s interpretation seems basically accurate, as far as it goes. It’s true that O’Neil doesn’t engage with the content of futurists’ arguments. It’s true that this is a problem.

The thing is, perhaps she’s right not to engage with the content of futurists’ arguments. After all, as Scott pointed out years ago (and I reiterated more recently), when the single most prominent AI risk organization initially announced its mission, it was a mission that basically 100% of credible arguments about AI risk imply is exactly the wrong thing. If you had assumed that the content of futurists’ arguments about AI risk would be a good guide to the actions taken as a result, you would quite often be badly mistaken.

Of course, maybe you disbelieve the mission statement instead of the futurists’ arguments. Or maybe you believe both, but disbelieve the claim that OpenAI is working on things relevant to AI risk. However you slice it, you have to dismiss some of the official communication as falsehood, by someone who is in a position to know better.

So, why is it so hard to talk about this?

World of actors, world of scribes

The immediately prior Slate Star Codex post, Different Worlds, argued that if someone’s basic world view seems obviously wrong to you based on all of your personal experience, maybe their experience is really different. In another Slate Star Codex post, titled Might People on the Internet Sometimes Lie?, Scott described how difficult he finds it to consider the hypothesis that someone is lying, despite strong reason to believe that lying is common.

Let’s combine these insights.

Scott lives in a world in which many people—the most interesting ones—are basically telling the truth. They care about the content of arguments, and are willing to make major life changes based on explicit reasoning. In short, he’s a member of the scribe caste. O’Neil lives in actor-world, in which words are primarily used as commands, or coalition-building narratives.

If Scott thinks that paying attention to the contents of arguments is a good epistemic strategy, and the writer he’s complaining about thinks that it’s a bad strategy, this suggests an opportunity for people like Scott to make inferences about what other people’s very different life experiences are like. (I worked through an example of this myself in my post about locker room talk.)

It now seems to me like the experience of the vast majority of people in our society is that when someone is making abstract arguments, they are more likely to be playing coalitional politics than trying to transmit information about the structure of the world.

Clever arguers

For this reason, I noted with interest the implications of an exchange in the comments to Jessica Taylor’s recent Agent Foundations post on autopoietic systems and AI alignment. Paul Christiano and Wei Dai considered the implications of clever arguers, who might be able to make superhumanly persuasive arguments for arbitrary points of view, such that a secure internet browser might refuse to display arguments from untrusted sources without proper screening.

Wei Dai writes:

I’m envisioning that in the future there will also be systems where you can input any conclusion that you want to argue (including moral conclusions) and the target audience, and the system will give you the most convincing arguments for it. At that point people won’t be able to participate in any online (or offline for that matter) discussions without risking their object-level values being hijacked.

Christiano responds:

It seems quite plausible that we’ll live to see a world where it’s considered dicey for your browser to uncritically display sentences written by an untrusted party.

What if most people already live in that world? A world in which taking arguments at face value is not a capacity-enhancing tool, but a security vulnerability? Without trusted filters, would they not dismiss highfalutin arguments out of hand, and focus on whether the person making the argument seems friendly or unfriendly, using hard-to-fake group-affiliation signals? This bears a substantial resemblance to the behavior Scott was complaining about. As he paraphrases:

We know the first three groups are wrong, because many of their members are “young or middle-age white men” who “have never been oppressed”. On the other hand, the last group contains “majority women, gay men, and people of color”. Therefore, the last group is right, there will be no singularity, and the future will be bad.

Translated properly, this simply means, “There are four possible beliefs to hold on this subject. The first three are held by parties we have reason to distrust, but the fourth is held by members of our coalition. Therefore, we should incorporate the ideology of the fourth group into our narrative.”

This is admirably disjunctive reasoning. It is also really, really sad. It is almost a fully general defense against discourse. It’s also not something I expect we can improve by browbeating people, or sneering at them for not understanding how arguments work. The sad fact is that people wouldn’t have these defenses up if it didn’t make sense to them to do so.

When I read Scott’s complaints, I was persuaded that O’Neil was fundamentally confused. But when I clicked through to her piece, I was shocked at how good it was. (To be fair, Scott did a very good job lowering my expectations.) She explains her focus quite explicitly:

And although it can be fun to mock them for their silly sounding and overtly religious predictions, we should take futurists seriously. Because at the heart of the futurism movement lies money, influence, political power, and access to the algorithms that increasingly rule our private, political, and professional lives.

Google, IBM, Ford, and the Department of Defense all employ futurists. And I am myself a futurist. But I have noticed deep divisions and disagreements within the field, which has led me, below, to chart the four basic “types” of futurists. My hope is that by better understanding the motivations and backgrounds of the people involved—however unscientifically—we can better prepare ourselves for the upcoming political struggle over whose narrative of the future we should fight for: tech oligarchs that want to own flying cars and live forever, or gig economy workers that want to someday have affordable health care.

I agree with Scott that the content of futurists’ arguments matters, and that it has to be okay to engage with that somewhere. But it also has to be okay to engage with the social context of futurists’ arguments, and an article that specifically tells you it’s about that seems like the most prosocial and scribe-friendly possible way to engage in that sort of discussion. If we’re going to whine about that, then in effect we’re just asking people to shut up and pretend that futurist narratives aren’t being used as shibboleths to build coalitions. That’s dishonest.

Most people in traditional scribe roles are not proper scribes, but a fancy sort of standard-bearer. If we respond to people displaying the appropriate amount of distrust by taking offense—if we insist that they spend time listening to our arguments simply because we’re scribes—then we’re collaborating with the deception. If we really are more trustworthy, we should be able to send costly signals to that effect. The right thing to do is to try to figure out whether we can credibly signal that we are actually trustworthy, by means of channels that have not yet been compromised.

And, of course, to actually become trustworthy. I’m still working on that one.

The walls have already been breached. The barbarians are sacking the city. Nobody likes your barbarian Halloween costume.

Related: Clueless World vs Loser World, Anatomy of a Bubble