Imitation is the Sincerest Form of Argument

I recently gave a talk at Chicago Ideas Week on adapting Turing Tests to have better, less mindkill-y arguments, and this is the précis for folks who would prefer not to sit through the video (which is available here).

Conventional Turing Tests check whether a programmer can build a convincing facsimile of a human conversationalist. The test has turned out to reveal less about machine intelligence than human intelligence. (Anger is really easy to fake, since fights can end up a little more Markov chain-y, where you only need to reply to the most recent rejoinder and can ignore what came before.) Since normal Turing Tests made us think more about our model of human conversation, economist Bryan Caplan came up with a way to use them to make us think more usefully about our models of our enemies.
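The Markov chain-y quality of a fight can be made concrete: an "angry" reply only has to condition on the most recent message, with no memory of the rest of the conversation. Here is a minimal, purely illustrative sketch (the function and retorts are invented for this example, not from the talk):

```python
# A toy illustration of why anger is easy to fake: an "angry" reply
# conditions only on the latest message, ignoring all prior context --
# the Markov property.

def angry_reply(last_message: str) -> str:
    """Pick a retort using only the most recent rejoinder."""
    last = last_message.lower()
    if "?" in last:
        return "Oh, so NOW you're asking questions?"
    if "you" in last:
        return "Don't make this about me!"
    return "That's exactly the kind of thing you'd say."

# The entire conversational "state" is just the last line -- no history needed.
transcript = ["You never listen.", "Why would I?", "Typical."]
replies = [angry_reply(m) for m in transcript]
```

A convincing model of a calm, substantive exchange would instead need to track everything said so far, which is part of why it is so much harder to fake.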

After Paul Krugman disparaged Caplan's brand of libertarian economics, Caplan challenged him to an ideological Turing Test, where both players would be human, but would be trying to accurately imitate each other. Caplan and Krugman would each answer questions about their true beliefs honestly, and then would fill out the questionnaire again in persona inimici, trying to guess the answers given by the other side. Caplan was willing to bet that he understood Krugman's position well enough to mimic it, but that Krugman would be easily spotted as a fake Caplan.

Krugman didn't take him up on the offer, but I've run a couple of iterations of the test for my religion/philosophy blog. The first year, some of the most interesting results were the proxy variables people were using, which weren't as strong indicators as the judges thought. (One Catholic coasted through to victory as a faux atheist, since many of the atheist judges thought there was no way a Christian would appreciate the webcomic SMBC.)

The trouble was, the Christians did a lot better, since it turned out I had written boring, easy-to-guess questions for the true and faux atheists. The second year, I wrote weirder questions, and the answers were a lot more diverse and surprising (and a number of the atheist participants called out each other as fakes or just plain wrong, since we'd gotten past the shallow questions from year one, and there's a lot of philosophical diversity within atheism).

The exercise made people get curious about what it was their opponents actually thought and why. It helped people spot incorrect stereotypes of an opposing side and faultlines they'd been ignoring within their own. Personally (and according to other participants), it helped me argue less antagonistically. Instead of just trying to find enough of a weak point to discomfit my opponent, I was trying to build up a model of how they thought, and I needed their help to do it.

Taking a calm, inquisitive look at an opponent's position might teach me that my position is wrong, or has a gap I need to investigate. But even if my opponent is just as wrong as ze seemed, there's still a benefit to me. Having a really detailed, accurate model of zer position may help me show them why it's wrong, since now I can see exactly where it rasps against reality. And even if my conversation isn't helpful to them, it's interesting for me to see what they were missing. I may be correct in this particular argument, but the odds are good that I share the rationalist weak point that is keeping them from noticing the error. I'd like to be able to see it more clearly so I can try and spot it in my own thought. (Think of this as the shift from "How the hell can you be so dumb?!" to "How the hell can you be so dumb?")

When I get angry, I'm satisfied when I beat my interlocutor. When I get curious, I'm only satisfied when I learn something new.