Public Positions and Private Guts

In this post I lay out a model of beliefs and communication that identifies two types of things we might think of as ‘beliefs,’ how they are communicated between people, how they are communicated within people, and what this might imply about intellectual progress in some important fields. As background, Terence Tao has a blog post describing three stages of mathematics: pre-rigorous, rigorous, and post-rigorous. It’s only about two pages; the rest of this post will assume you’ve read it. Ben Pace has a blog post describing how to discuss the models generating an output, rather than the output itself, which is also short and is related, but has important distinctions from the model outlined here.

[Note: the concept for this post comes from a talk given by Anna Salamon, and I sometimes instruct for CFAR, but the presentation in this post should be taken to only represent my views.]

If a man will begin with certainties, he shall end in doubts; but if he will be content to begin with doubts he shall end in certainties. -- Francis Bacon

FORMAL COMMUNICATION

Probably the dominant model of conversations among philosophers today is Robert Stalnaker’s. (Here’s an introduction.) A conversation has a defined set of interlocutors, and some shared context, and speech acts add statements to the context, typically by asserting a new fact.

I’m not an expert in contemporary philosophy, and so from here on out this is my extension of that view, which I’ll refer to as ‘formal.’ Perhaps this extension is entirely precedented, or perhaps it’s controversial. My view focuses on situations where logical omniscience is not assumed, and thus simply pointing out the conclusion that arises from combining facts can count as such an assertion. Proper speech considers this and takes inferential distance into account; my speech acts should be derivable from our shared context or an unsurprising jump from it. Both new logical facts and environmental facts count as adding information to the shared context. That I am currently wearing brown socks while writing this part of the post is not something you could derive from our shared context, but is nevertheless ‘unsurprising.’

It’s easy to see how a mathematical proof might fit into this framework. We begin with some axioms and suppositions, and then we compute conclusions that follow from those premises, and eventually we end up at the theorem that was to be proved.

If I make a speech act that’s too far of a stretch—either because it disagrees with something in the context (or your personal experience), or is just not easily derivable from the common context—then the audience should ‘beep’ and I should back up and justify the speech act. A step in the proof that doesn’t obviously follow means I need to expand the proof to make it clear how I got from A to B, or how a pair of statements that appear contradictory is in fact not contradictory. (“Ah, by X I meant the restricted subset X’, such that this counterexample is excluded; my mistake.”)
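The assert-or-beep dynamic above can be sketched as a toy program. Everything here (the class, the string responses, treating propositions as opaque strings) is an illustrative simplification of my own, not anything from Stalnaker’s formalism:

```python
# Toy sketch: a conversation context is a set of accepted propositions,
# and a speech act either extends it or triggers a 'beep' demanding
# justification. Propositions are opaque strings for simplicity.

class Conversation:
    def __init__(self, shared_context):
        self.context = set(shared_context)

    def assert_fact(self, claim, derivation=None):
        """Accept `claim` if it is already common ground, or if it comes
        with a derivation whose every step is already in the context."""
        if claim in self.context:
            return "already common ground"
        if derivation is None:
            return "beep: please back up and justify"
        if all(step in self.context for step in derivation):
            self.context.add(claim)
            return "accepted"
        return "beep: derivation uses steps outside the context"

convo = Conversation({"A", "A implies B"})
print(convo.assert_fact("B", derivation=["A", "A implies B"]))  # accepted
print(convo.assert_fact("C"))  # beep: please back up and justify
```

A real treatment would also model retraction (the “my mistake” case above), but the skeleton is the same: the context only grows by steps the audience can check.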

This style of conversation seems to be minimizing surprise on the low level; from moment to moment, actions are being taken in a way that views justification and validation by independent sources as core constraints. What is this good for? Interestingly, the careful avoidance of surprises on the low level permits surprises on the high level, as a conclusion reached by airtight logic can be as trustworthy as the premises of that logic, regardless of how bizarre the conclusion seems. A plan fleshed out with enough detail that it can be independently reconstructed by many different people is a plan that can scale to a large organization. The body of scientific knowledge is communicated mostly this way; Nullius in verba requires this sort of careful communication because it bans the leaps one might otherwise make.

PUBLIC POSITIONS

One way to model communication is as a function that takes objects of a certain type and tries to recreate them in another place. A telephone takes sound waves and attempts to recreate them elsewhere, whereas an instant messenger takes text strings and attempts to recreate them elsewhere. So conjugate to the communication methodology is ‘the thing that can be communicated by this methodology’; I’m going to define ‘public positions’ as the sort of beliefs that are amenable to communication through ‘formal communication’ (this style where you construct conclusions out of a chain of simple additions to the pre-existing context). The ‘public’ bit emphasizes that they’re optimized for justification or presentation; many things I believe don’t count as public positions because I can’t reach them through this sort of formal communication. For example, I find the smell of oranges highly unpleasant; I can communicate that fact about my preferences through formal communication but can’t communicate the preference itself through formal communication. The ‘positions’ bit emphasizes that they are defensible and legible; you can ‘know where I stand’ on a particular topic.
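As a loose programming analogy for ‘conjugate to the channel is the type it can carry’: Python’s pickle can recreate plain data elsewhere, but not an arbitrary anonymous function, which stands in (imperfectly) for the preference itself rather than a statement about it. The objects below are invented for illustration:

```python
# A statement ABOUT a preference is plain data: it serializes and can be
# reconstructed on the other end of a channel.
import pickle

position = {"claim": "the smell of oranges is unpleasant to me"}
copy = pickle.loads(pickle.dumps(position))
print(copy == position)  # True

# The preference itself, modeled here as a bare lambda, does not survive
# the channel: pickle cannot serialize it.
preference = lambda smell: -10 if smell == "orange" else 0
try:
    pickle.dumps(preference)
except Exception:
    print("cannot transmit the preference itself")
```

The analogy is loose (functions can be communicated as source code), but it makes the type distinction vivid: the channel determines what kind of thing can come out the other side.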

PRIVATE GUTS

I’m going to call a different sort of belief one’s ‘private guts.’ By ‘guts,’ I’m pointing towards the historical causes of a belief (like the particular bit of my biochemistry that causes oranges to smell distasteful to me), or to the sense of a ‘gut feeling.’ By ‘private,’ I’m pointing towards the fact that this is often opaque or not shaped like something that’s communicable, rather than something deliberately hidden. If you’re familiar with Gendlin’s Focusing, ‘felt senses’ are an example of private guts.

What are private guts good for? As far as I can tell, lizards probably don’t have public positions, but they probably do have private guts. That suggests those guts are good for predicting things about the world and achieving desirable world states, as well as being one of the channels by which the desirability of world states is communicated inside a mind. It seems related to many sorts of ‘embodied knowledge’, like how to walk, which is not understood from first principles or in an abstract way, or habits, like adjective order in English. A neural network that ‘knows’ how to classify images of cats, but doesn’t know how it knows (or is ‘uninterpretable’), seems like an example of this. “Why is this image a cat?” → “Well, because when you do lots of multiplication and addition and nonlinear transforms on pixel intensities, it ends up having a higher cat-number than dog-number.” This seems similar to gut senses that are difficult to articulate; “why do you think the election will go this way instead of that way?” → “Well, because when you do lots of multiplication and addition and nonlinear transforms on environmental facts, it ends up having a higher A-number than B-number.” Private guts also seem to capture a category of amorphous visions; a startup can rarely write a formal proof that their project will succeed (generally, if they could, the company would already exist). The post-rigorous mathematician’s hunch falls into this category, which I’ll elaborate on later.
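The “higher cat-number than dog-number” answer can be made concrete with a tiny fixed-weight network, whose only ‘explanation’ for its answer is the arithmetic itself. The weights and inputs below are made up for illustration, not trained on anything:

```python
# A minimal 'knows but can't say how it knows' example: a one-hidden-layer
# network with a ReLU nonlinearity and a linear readout. Its verdict is just
# the result of multiplications, additions, and a nonlinearity.

def classify(pixels, w_hidden, w_out):
    # hidden layer: weighted sums of the inputs, passed through ReLU
    hidden = [max(0.0, sum(w * x for w, x in zip(row, pixels))) for row in w_hidden]
    # readout: one score per class (cat, dog)
    cat_score, dog_score = (sum(w * h for w, h in zip(row, hidden)) for row in w_out)
    return "cat" if cat_score > dog_score else "dog"

pixels = [0.2, 0.9, 0.4]                            # a made-up 'image'
w_hidden = [[0.5, -0.1, 0.3], [-0.4, 0.8, 0.2]]     # made-up hidden weights
w_out = [[1.0, 0.7], [0.3, 0.2]]                    # rows: cat, dog
print(classify(pixels, w_hidden, w_out))  # cat
```

Every number in the computation is inspectable, yet none of them is an articulable reason, which is exactly the situation of the gut sense about the election.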

There are now two sorts of interesting communication to talk about: the process that coheres public positions and private guts within a single individual, and the process that communicates private guts across individuals.

COHERENCE, FOCUSING, AND SCIENCE

Much of CFAR’s focus, and that of the rationality project in general, has involved taking people who are extremely sophisticated at formal communication and developing their public positions, and getting them to notice and listen to their private guts. An example, originally from Julia Galef, is the ‘agenty duck.’ Imagine a duck whose head points in one direction (“I want to get a PhD!”) and whose feet are pointed in another (mysteriously, this duck never wants to work on their dissertation). Many responses to this sort of intrapersonal conflict seem maladaptive; much better for the duck to have head and feet pointed in the same direction, regardless of which direction that is. An individual running a coherence process that integrates the knowledge of the ‘head’ and ‘feet’, or the public positions and the private guts, will end up more knowledgeable and functional than an individual that ignores one to focus on the other.

Discovering the right coherence process is an ongoing project, and even if I knew it as a public position it would be too long for this post. So I will merely leave some pointers and move on. First, the private guts seem highly trainable by experience, especially through carefully graduated exposure. Second, Focusing and related techniques (like Internal Double Crux) seem quite effective at searching through the space of articulable/understandable sentences or concepts in order to find those that resonate with the private guts, drawing forth articulation from the inarticulate.

It’s also worth emphasizing the way in which science depends on such a coherence process. The ‘scientific method’ can be viewed in this fashion: hypotheses can be wildly constructed through any method, because hypotheses are simply proposals rather than truth-statements; only hypotheses that survive the filter of contact with reality through experimentation graduate to full facts, at which point their origin is irrelevant, be it induction, a lucky guess, or the unconscious mind processing something in a dream.

Similarly for mathematicians, according to Tao. The transition from pre-rigorous mathematics to rigorous mathematics corresponds to being able to see formal communication and public positions as types, and learning to trust them over persuasion and opinions. The transition from rigorous mathematics to post-rigorous mathematics corresponds to having trained one’s private guts such that they line up with the underlying mathematical reality well enough that they generate fruitful hypotheses.

Consider automatic theorem provers. One variety begins with a set of axioms, including the negated conclusion, and then gradually expands outwards, seeking to find a contradiction (and thus prove that the desired conclusion follows from the other axioms). Every step of the way proceeds according to the formal communication style, and every proposition in the proof state can be justified through tracing the history of combinations of propositions that led from the initial axioms to that proposition. But the process is unguided, reliant on the swiftness of computer logic to handle the massive explosion of propositions, almost all of which will be irrelevant to the final proof. The human mathematician instead has some amorphous sense of what the proof will look like, sketching a leaky argument that is not correct in the details, but which is correctable. Something interesting is going on in the process that generates correctable arguments, perhaps even more interesting than what’s going on in the processes that trivially generate correct arguments by generating all possible arguments and then filtering.
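The prover variety described above (negate the conclusion, expand outwards, hunt for a contradiction) can be sketched as a bare-bones propositional resolution prover. This is a minimal illustration with none of the indexing, subsumption, or heuristics a real prover uses, which is exactly why its search is so unguided:

```python
# Refutation by resolution: clauses are frozensets of literals ("A" or "~A").
# We add the negated conclusion and blindly generate resolvents until we
# derive the empty clause (a contradiction) or nothing new appears.

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """All resolvents of two clauses (cancel one complementary pair)."""
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return out

def proves(axioms, conclusion):
    clauses = {frozenset(c) for c in axioms} | {frozenset({negate(conclusion)})}
    while True:
        new = set()
        for c1 in clauses:
            for c2 in clauses:
                for r in resolve(c1, c2):
                    if not r:
                        return True   # empty clause: contradiction found
                    new.add(r)
        if new <= clauses:
            return False              # nothing new: no refutation exists
        clauses |= new

# modus ponens: from A and A→B (written as the clause ~A ∨ B), conclude B
print(proves([{"A"}, {"~A", "B"}], "B"))  # True
```

Note that the inner loops resolve every clause against every clause, relevant or not; the mathematician’s hunch is precisely what this sketch lacks.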

STARTUPS, DOUBLE CRUX, AND CIRCLING

Somehow, people are sometimes able to link up their private guts with each other. This is considerably more fraught than linking up public positions; positions are of a type that is optimized for verifiability and reconstruction, whereas internal experiences, in general, are not. Even if we’re eating the same cake, how would we even check that our internal experience of eating the cake is similar? What about something simpler, like seeing the same color?

While the abstraction of formal conversation is fairly simple, it’s still obvious that there are many skills related to correct argumentation. Similarly, there seems to be a whole family of skills related to syncing up private guts, and rather than teaching those skills, this section will again be a pointer to where those skills could be learned or trained. Learning how to reproduce music is related to learning how to participate in jam sessions, but the latter is a much closer fit to this sort of communication.

The experience of startups is that small teams are best, primarily because of the costs of coordinative communication. Startups are often chasing an amorphous, rapidly changing target; a team that’s able to quickly orient in the same direction and move together, or trust in the guts of each other rather than requiring elaborate proofs, will often perform better.

While Double Crux can generate a crisp tree of logical deductions from factual disagreements, it often instead exposes conflicting intuitions or interpretations. While formal communication involves a speaker optimizing over speech acts to jointly minimize surprise and maximize progress towards their goal, double crux instead involves both parties in the optimization, and often causes them to seek surprises. A crux is something that would change my mind, and I expose my cruxes in case you disagree with them, seeking to change my mind as quickly as possible.

Cruxes also respect the historical causes of beliefs; when I say “my crux for X is Y,” I am not saying that Y should cause you to believe X, only that not-Y would cause me to believe not-X. This weaker filter means many more statements are permissible, and my specific epistemic state can be addressed, rather than playing a minimax game by which all possible interlocutors would be pinned down by the truth. In Stalnakerian language, rather than needing to emit statements that are understandable and justifiable by the common context, I only need the weaker restriction that those statements are understandable in the common context and justifiable in my private context.

Circling is also beyond the scope of this post, except as a pointer. It seems relevant as a potential avenue for deliberate practice in understanding and connecting to the subjective experience of others in a way that perhaps facilitates this sort of conversation.

CONCLUSION

As mentioned, this post is seeking to set out a typology, and perhaps crystallize some concepts. But why think these concepts are useful?

Primarily, because these concepts seem related to the way in which rationalists differ from other communities with similar interests, or from their prior selves before becoming rationalists, a difference that resembles the one between post-rigorous and rigorous mathematicians. Secondarily, because many contemporary issues of great practical importance require correctly guessing matters that are not settled. Financial examples are easy (“If I buy bitcoin now, will it be worth more when I sell it by more than a normal rate of economic return?”), but longevity interventions have a similar problem (“knowing whether or not this works for humans will take a human lifetime to figure out, but by that point it might be too late for me. Should I do it now?”), and it seems nearly impossible to reason correctly about existential risks without reliance on private guts (and thus on methods to tune and communicate those guts).