
This is part 20 of 30 of Hammertime. Click here for the intro.

There’s a serious and scary phenomenon which Valentine’s recent posts have been touching on: much of who you are only exists (or is expressed) in the presence of other people. In the words of Bishop Berkeley, esse est percipi: To be is to be perceived. Hammertime will always be an incomplete endeavor unless it is applied to social settings – there are major chunks of the psyche only accessible in such settings.

Up to now, Hammertime has mostly been a set of tools for the individual rationalist in a social vacuum. Today I want to talk about the problem of other human beings, and how to go about designing social interactions that are conducive to the practice of instrumental rationality.

Hammertime Day 20: Friendship

Background: The Intelligent Social Web

There’s good evidence in biology that the power of the human brain largely evolved to solve ever-complexifying social problems. Much of the heavy cognitive machinery in your head is primarily built for and responds best to social interaction. Brains are extremely good at detecting social threats and anomalies, at regulating implicit status ladders, at reading body language, and at simulating other brains.

This post is a start at the design of optimal two-person interactions.

Iterated Games

Rationalists spend a lot of time railing against the failings of causal decision theory, and promoting alternatives that solve them. The uncomfortable truth, however, is that you will not make causal decision theorists cooperate on the prisoner’s dilemma by throwing tomes of philosophy at them, and many, many people are causal decision theorists. Not all hope is lost, though: there’s a known, albeit unglamorous, solution to coordination failures within the framework of causal decision theory: iterated games.

Iteration is the easiest path to building strong friendships: make interactions longer and more regular.
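The logic above can be made concrete with a toy simulation (my sketch, not from the post): in a one-shot prisoner’s dilemma, defection dominates for a causal decision theorist, but once the same two players meet round after round, a reciprocal strategy like tit-for-tat makes mutual cooperation pay more than exploitation. The strategy names and the payoff values (the standard T=5, R=3, P=1, S=0) are illustrative assumptions.

```python
# (my move, their move) -> my payoff; C = cooperate, D = defect
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(history):
    """Cooperate on the first round, then copy the opponent's last move."""
    return history[-1] if history else "C"

def always_defect(history):
    """The one-shot causal-decision-theory answer: defect unconditionally."""
    return "D"

def play(strat_a, strat_b, rounds=100):
    """Run an iterated game and return each player's total payoff."""
    hist_a, hist_b = [], []  # each side's record of the *opponent's* moves
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_a), strat_b(hist_b)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(b)
        hist_b.append(a)
    return score_a, score_b

# Two reciprocators settle into mutual cooperation (3 points per round);
# a defector exploits once, then is punished every round thereafter.
print(play(tit_for_tat, tit_for_tat))    # (300, 300)
print(play(always_defect, tit_for_tat))  # (104, 99)
```

Note that the defector still "wins" each individual encounter it can, yet ends up far behind the mutual cooperators over the long run – which is the whole point of making interactions longer and more regular.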

In the middle of January, I began contacting friends and setting up regular weekly chats. Almost nobody refused. A handful of interactions fizzled out, but the ones that lasted have been unbelievably positive. I kept ramping up the number of conversations until it felt actively fatiguing. Today this habit alone allows me to talk to an average of one extra person per day for an hour and a half.

Human beings are unbelievably reciprocal creatures in stable long-term relationships. The incentives are quite robust. Jordan Peterson once highlighted this with a pithy phrase about marriage (paraphrased): “You can’t really win an argument against your wife, because even if she loses, you still have to live with her.”

Of course, human beings are also stupid and perverse enough to ignore even the strongest incentives. How many millions of lifelong partnerships have ended in decades of mutual abuse? Keep your eyes open.

Conversation 101

Here are three object-level ideas for having useful conversations.

Socratic Ducking

Rubber Ducking
Asking a person to act as a rubber duck: you talk your ideas out to them in order to get a clear handle on those ideas.
Socratic Ducking
Aiding a partner in thinking through an idea or solving a problem, combining Socratic questioning and rubber ducking. Offer few suggestions and thoughts of your own; instead, alternate between stimulating questions and attentive silence. Encourage the other person to think through complex threads and think deeply about the ramifications of ideas and possible solutions.

Oftentimes there is a clear listener and a clear talker in a conversation. As the listener, focus primarily on attentive silence, occasionally asking pointed or clarifying questions when the conversation seems to dry up. The primary goal is to keep your partner generating ideas and on track.

A friend of mine stimulated a major breakthrough in a session of aversion factoring for me by nodding silently the whole time, except for uttering a single well-timed word: “try!” This encouraged me to expend the necessary mental effort to break through that mental barrier and correctly identify an aversion towards planning.

Ideological Turing Tests

The Ideological Turing Test is a concept invented by American economist Bryan Caplan to test whether a political or ideological partisan correctly understands the arguments of his or her intellectual adversaries: the partisan is invited to answer questions or write an essay posing as his or her opposite number. If neutral judges cannot tell the difference between the partisan’s answers and the answers of the opposite number, the partisan is judged to correctly understand the opposing side.

Intellectual (Ideological?) Turing Tests, or ITTs, can be rather laborious. The minified conversational norm is: you are not allowed to move forward with an argument until you have accurately summarized the other person’s point of view to their satisfaction.

Shelving Tangents

Conversations can get derailed rather rapidly, and it’s a well-established fact that all conversations after midnight will devolve into a debate about consciousness.

For online conversations, I make a habit of collecting possible tangents on a sheet of paper as they come to mind, instead of immediately tossing them into the fray and risking derailing the entire current train of thought. There will always be time later for your fascinating point.

Take a Yoda Timer to train the following TAP: whenever a related conversation topic comes to mind, ask yourself whether you want to go down that rabbit-hole.

Daily Challenge

Book a 15 or 30-minute chat with me on Calendly to talk about anything.