The Pink Sparkly Ball Thing (Use unique, non-obvious terms for nuanced concepts)

Naming things! Naming things is hard. It's been claimed that it's one of the hardest parts of computer science. Now, this might sound surprising, but one of my favorite examples of naming is Kahneman's System 1 and System 2.

I want you to pause for a few seconds and consider what comes to mind when you read just the bolded phrase above.

Got it?

If you're familiar with the concepts of S1 and S2, then you probably have a pretty rich sense of what I'm talking about. Or perhaps you have a partial notion: "I think it was about..." or something. If you've never been exposed to the concept, then you probably have no idea.

Now, Kahneman could have reasonably named these systems lots of other things, like "emotional cognition" and "rational cognition"… or "fast, automatic thinking" and "slow, deliberate thinking". But now imagine that it had been "emotional and rational cognition" that Kahneman had written about, and consider the effect on the earlier paragraph.

It would be about the same for those who had studied it in depth. But those who had heard about it briefly (or maybe at one point knew about the concepts) would be reminded of that one particular contrast between S1 and S2 (emotion/reason) and be primed to think that was the main one, forgetting about all of the other parameters that that distinction seeks to describe. And those who had never heard of Kahneman's research might assume that they basically knew what the terms were about, because they already have a sense of what emotion and reason are.

This is related to a concept known as verbal overshadowing, where a verbal description of a scene can cause eyewitnesses to misremember the details of the scene. Words can disrupt lots of other things too, including our ability to think clearly about concepts.

An example of this in action is the Ask and Guess Culture model (and later Tell, and Reveal). People who are trying to use the models become hugely distracted by the particular names of the entities in the model, which have only a rough bearing on the nuanced elements of these cultures. Even after thinking about this a ton myself, I still found myself accidentally assuming that questions are an Ask Culture thing.

So "System 1" and "System 2" have several advantages:

  • they don't immediately and easily seem like you already understand them if you haven't been exposed to that particular source

  • they don't overshadow the understanding of people who do know them, by suggesting that the names contain the most important features

Another example that I think is decent (though not as clean as S1/S2) is Scott Alexander's use of Red Tribe and Blue Tribe to refer to culture clusters that roughly correspond to right and left political leanings in the USA. (For readers in most other countries: the US has its colors backwards… blue is left wing and red is right wing.) The colors make it reasonably easy to associate and remember, but unless you've read the post (or talked with someone who has) you won't necessarily know the jargon.

Jargon vs in-jokes

All of the examples I've listed above are essentially jargon—terminology that isn't available to the general public. I'm generally in favour of jargon! If you want to precisely and concisely convey a concept that doesn't already have its own word, then you have two options.

"Coining new jargon words (neologisms) is an alternative to formulating unusually precise meanings of commonly-heard words when one needs to convey a specific meaning." — fubarobfusco on a LW thread

Doing the latter is often safe when you're in a technical context. "Energy" is a colloquial term, but it also has a precise technical meaning. Since, in technical contexts, people will tend to assume that all such terms have technical meanings (or will even learn said meanings early on), there is little risk of confusion here. Usually.

I'm going to make a case that it's worth treating nuanced concepts like in-jokes: don't make the meaning feel like it's in the term. Now, I'm not sold that this is a good idea all the time, but it seems to have some merit to it. I'm interested in where it works and where it doesn't; don't take this article to suggest I think it's universally good. Let's jam on where it's good.

Communication is built on shared understanding. Much of this comes from the commons: almost all of the words you're reading in this blog post are not words that you and I had to guarantee we both understood before I could write the post. Sometimes, blog posts (or books, lectures, etc) will contain definitions, or will try to triangulate a concept with examples. The author hopes that the reader will indeed have a similar handle on the word they're using after reading the definition. (The reader may not, of course. Or they might merely think they do. Or be confused.)

When you have the chance to interact with someone in real-time, 1-on-1, you can often gauge their understanding because they'll try to paraphrase the thing, and you can usually tell if the thing that they say is the kind of thing someone who understood would say. This is great, because then you can feel confident that you can use that concept as a building block in explaining further concepts.

One common failure mode of communication is when people assume that they're using the same building blocks as each other, when in fact they're using importantly different concepts. This is the issue that rationalist taboo is designed to combat: forbid the use of a confounding word and force the conversationalists to build the concept up from component parts again.

Another way to reduce the occurrence of this sort of thing is to use jargon and in-jokes, because then the other person will draw a blank if they don't already have the shared understanding. You had to be there; if you weren't, it's obvious that something key is missing.

I once had a long conversation with someone, and we ended up using a lot of the objects we had with us as props when explaining certain concepts. This had the curious effect that if we wanted to reference our shared understanding of the earlier concept, we could refer to the object, and it became really clear that it was our shared understanding we were referencing, not some more general thing. So I could say "the banana thing" to refer to him having explored the notion that evilness is a property of the map, not the territory, by remarking that a banana can't be evil but that we can think it evil.

The important thing here is that it felt easier to point clearly at that topic by saying "the banana thing", because we both knew what that was. Saying "the objects aren't evil thing" instead might accidentally overshadow it: that phrase could eventually get turned into a catchphrase that seems to contain meaning but never actually contained the critical insight.

This prompted me to think that it might be valuable to buy a bunch of toys from a thrift store, and to keep them at hand when hanging out with a particular person or small group. When you have a concept to explore, you'd grab an unused toy that seemed to suit it decently well, and then you'd gesture with it while explaining the concept. Then later you could refer to "the pink sparkly ball thing" or simply "this thing" while gesturing at the ball. Possibly, the other person wouldn't remember, or not immediately. But if they did, you could be much more confident that you were on the same page. It's a kind of shared mnemonic handle.

In some ways, this is already a natural part of human communication: I recall years ago talking to a friend and saying "oh, it's like the thing we talked about on my porch last summer", and she immediately knew what I meant. I'm basically proposing to take it further, by using props or by inventing new words.

Unfortunately, terms often end up losing their nuance, for various reasons. Sometimes this happens because the small concept they were trying to point at happens to be surrounded by a vacuum, so it expands. Other times it's because of shibboleths and people wanting to use in-group words. Or the words are used playfully and poetically, for humor purposes, which then makes it less clear that they once had a precise meaning.

This suggests there might be a kind of terminological inflation going on. And to the extent that signalling by using jargon is anti-inductive, that'll dilute things too.

I think if you're trying to think complex thoughts, it's worth developing specialized language, not just with groups of people, but even in 1-on-1 contexts. Of course, pay attention so you don't use terms with people who totally don't know them.

And this, this developing of shared language beyond what's strictly necessary but still worthwhile… this, perhaps, we might call the pink sparkly ball thing.

(this article crossposted from malcolmo