SotW: Be Specific

(The Exercise Prize series of posts is the Center for Applied Rationality asking for help inventing exercises that can teach cognitive skills. The difficulty is coming up with exercises interesting enough, with a high enough hedonic return, that people actually do them and remember them; this often involves standing up and performing actions, or interacting with other people, not just working alone with an exercise booklet and a pencil. We offer prizes of $50 for any suggestion we decide to test, and $500 for any suggestion we decide to adopt. This prize also extends to LW meetup activities and good ideas for verifying that a skill has been acquired. See here for details.)


Exercise Prize: Be Specific

During YCombinator’s Startup School 2011, Paul Graham and Harj Taggar did “office hours” onstage. One pair of entrepreneurs were doing a matchmaking (dating) startup, and Paul and Harj were trying to figure out what their startup did, exactly—for example, what their startup could do that the existing low-tech solution couldn’t. (Video.)

Harj: Low-tech like, you know, just like word of mouth, telling someone “hey, you should like, meet up with my friend” or “we’re getting drinks, why don’t you come along?” Like, what can the software do that’s specifically better than that?

Entrepreneur: I think that our software specifically is providing the better connections for people, um...

Paul: Providing the better connections for people...?

Entrepreneur: I mean, one way you can think about it, I don’t know if this is the right answer, but… there’s a lot of things that are happening in real life that they’re trying to mimic online, maybe that’s not the correct way to… Look at it like this: to give them an online tool to also do this, like they’re already doing in real life, maybe they could reach, uh expand their reach through the online website.

This had been happening with most of the startups Paul and Harj were interrogating—they just could not seem to provide a customer use-case—and I couldn’t stand it any more, which is why at this point I whispered, audibly enough for a few nearby people to hear, “Be specific! Be specific!”

A moment later, on stage:

Paul: Hm. Not very specific.

I got some strange looks from the people sitting next to me.

I hope this provides some background for my guess that around half of Paul Graham’s advantage is based on years of incubator experience, and the other half is unusual rationality skills of the sort that the Center for Modern Rationality is trying to figure out how to teach. Obviously this is only a very rough conjecture. But you can see the basis for the hope that—after a fair amount more work—we’ll be able to offer a 2-day course for YCombinator entrepreneurs that eliminates 50% of the overhead from their conversations with Paul Graham.

(Also, note how this post starts off with a specific example—an instance of the concrete-abstract writing pattern in which you state the example first and the generalization afterward. This is one of the most common bits of nonfiction writing advice I dispense: “Open with the concrete example, not the abstract explanation!”)

Theoretical background:

S. I. Hayakawa once gave this illustration of the “ladder of abstraction”, and in particular, the difference between going up or down:

“What is meant by the word red?”
“It’s a color.”
“What’s a color?”
“Why, it’s a quality things have.”
“What’s a quality?”

vs.

“What is meant by the word red?”
“Well, the next time you see some cars stopped at an intersection, look at the traffic light facing them. Also, you might go to the fire department and see how their trucks are painted.”

“Red is a color” is moving up the ladder; “color” is a supercategory of red. All things which are red, have colors; but not all things which have colors, are red. And similarly, if you look at a specific firetruck, that firetruck is a red thing, but there are also many other red things which are not that firetruck.

What is true of one apple may not be true of another apple; suppose apple1 weighs 100 grams and is slightly green in some places, and apple2 weighs 200 grams and is entirely dark-red. You can say more truths about apple2, like “apple2 is dark red”, than you can say are true of all apples. (For more on this point see The Virtue of Narrowness.)

Thus, it may be easier to mentally picture “a firetruck” than “something red”—“firetruck” describes a narrower section of Thingspace, so you’re less likely to get lost along the way.

S. I. Hayakawa called this the ladder of abstraction. I’m not sure if understanding the following section will really help with the skill of Being Specific, or help anyone construct exercises for the skill of being specific. But a better theoretical understanding does sometimes prove useful. So I will now digress to explain that abstraction isn’t really a ladder, but a lattice.

Let’s illustrate this using a classic example from the field of machine learning. Suppose that Days have three properties:

  • Weather: {Sunny, Cloudy, Rainy}

  • Temperature: {Cool, Hot}

  • Timing: {Weekday, Weekend}

And suppose that we’ve been given some examples of Days on which it was good, or alternatively bad, to play tennis. For example, the Day {Sunny, Cool, Weekend} was good for playing tennis, but the day {Rainy, Hot, Weekday} was bad for playing tennis. A classic task in machine learning is to induct, from a set of pre-classified examples like these, a rule describing when it is good to play tennis.

Any proposed rule which can classify all days as good or bad is a concept, in the lingo of machine learning. “Sunny Days” is a concept; likewise “Sunny Cool Days”, and “Days which are either Cool or Sunny”. Each of these is a concept which classifies all 12 possible days either positively or negatively—instances or non-instances of the concept.

There are 2^12 = 4,096 possible concepts over the 12 possible Days. Why so many? Because—for example—there’s a concept which only includes the two Days {Sunny+Cool+Weekday} and {Cloudy+Cool+Weekend}, but classifies all other Days as noninstances. This is a way of classifying all Days into instances or noninstances, hence a possible concept. It’s not a compact concept, but it’s a concept. Each Day can be classified either positively or negatively—one binary decision per Day—so 2^12 possible concepts. (That’s why induction is a difficult problem in machine learning.)
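(For readers who want the counting argument concrete, here is a minimal Python sketch of the Days example. The attribute values follow the text; representing a Day as a tuple and a concept as a predicate is my own illustrative choice, not anything from the original.)

```python
# A minimal sketch of the Days example, assuming each Day is a
# (weather, temperature, timing) tuple and each concept is a predicate.
from itertools import product

WEATHER = ["Sunny", "Cloudy", "Rainy"]
TEMPERATURE = ["Cool", "Hot"]
TIMING = ["Weekday", "Weekend"]

# All 3 * 2 * 2 = 12 possible Days.
days = list(product(WEATHER, TEMPERATURE, TIMING))
assert len(days) == 12

# Some compact concepts: each classifies every Day as instance/non-instance.
def sunny(day):
    return day[0] == "Sunny"

def sunny_and_cool(day):
    return day[0] == "Sunny" and day[1] == "Cool"

# The non-compact concept from the text: exactly these two Days are instances.
def odd_pair(day):
    return day in {("Sunny", "Cool", "Weekday"), ("Cloudy", "Cool", "Weekend")}

# One binary decision per Day, so 2**12 possible concepts in total.
print(2 ** len(days))  # 4096
```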

The concept “Sunny” is a superconcept of “Sunny and Cool”; it lies above it in the lattice of abstraction, since all days which are “Sunny and Cool” are “Sunny”. “Sunny or Hot” is a supercategory of “Sunny”. “Weekend” is neither a superconcept nor a subconcept of “Sunny”.

Concepts form a directed lattice from most general to most specific, with “all Days” at the top (every Day classified as an instance) and “no Days” at the bottom (the concept which classifies every Day as a noninstance).
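(Continuing the sketch above, under the same hypothetical representation: the lattice ordering is just inclusion over instances, so “A is a superconcept of B” can be checked by enumeration.)

```python
# Concept A lies above concept B in the lattice (A is a superconcept of B)
# iff every Day that B classifies as an instance, A also classifies as one.
# (Uses `days`, `sunny`, and `sunny_and_cool` from the previous sketch.)
def is_superconcept(a, b, universe):
    return all(a(day) for day in universe if b(day))

def sunny_or_hot(day):
    return day[0] == "Sunny" or day[1] == "Hot"

def weekend(day):
    return day[2] == "Weekend"

print(is_superconcept(sunny, sunny_and_cool, days))  # True
print(is_superconcept(sunny_or_hot, sunny, days))    # True
# "Weekend" and "Sunny" are incomparable: neither lies above the other.
print(is_superconcept(weekend, sunny, days))         # False
print(is_superconcept(sunny, weekend, days))         # False

# Top and bottom of the lattice: "all Days" and "no Days".
all_days = lambda day: True
no_days = lambda day: False
print(is_superconcept(all_days, weekend, days))      # True: top is above everything
print(is_superconcept(weekend, no_days, days))       # True: bottom is below everything
```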

If you now go back to the problem of telling someone what “red” means, when you say “red is a color”, then, even if the listener does happen to know what “color” means, you’re still moving upward in the lattice of abstraction. When you said “color”, you were talking about a concept that included all red things, but also many other things that were not red.

“Our software is providing the better connections for people”—the entrepreneur who said that might have had something specific in mind, or they might have just been bluffing or succumbing to wishful thinking. But they described it using an abstract statement so broad that it included Facebook, or Western Union back when they were sending telegrams. They might—though this is somewhat optimistic—have known themselves what they had in mind; but they weren’t thinking of Facebook, so they didn’t realize how many other possibilities fit their words. This is a classic manifestation of the Illusion of Transparency, and it’s why we have to keep telling people to navigate the lattice downward.

The skill of Being Specific is the skill of understanding how to navigate the lattice of abstraction. You can see why this would be a key element of cognition on a par with Bayes’s Theorem or consequentialism.

And this is true in practice as well as theory. When I’m talking to anyone outside the local LW community, I find that a very large amount of my conversation involves repeatedly asking them to be more specific—and if you think that’s just me being annoying, watch Paul Graham in the video.


A closely related skill is concreteness, which has to do with nearness-to-sensory-experience or actionability.

According to David Allen’s “Getting Things Done”, for your brain to stop thinking about an unfinished task, you must (1) know and trust that an external system will remind you to perform that task when it is time to perform it, and (2) have chosen the next action to take at a sufficiently concrete level that your brain is no longer trying to plan it out in the background. “Contact Luke about dispersing prize awards” is not a sufficiently concrete to-do; it leaves open the question of whether to phone or email, and what exactly to say. “Read through the comments, gather the LessWrong usernames of everyone who made a suggestion we tried or adopted, and email the list to Luke” is an action item I know how to perform straightforwardly, without my brain trying to plan it in the background. When you have a trustworthy external system to remind you of what to do, at the time you need to do it—so that the back of your mind isn’t worrying about remembering to check the to-do list—and all to-do items have been concretized to the point of being executable without further background planning—then you have, in GTD parlance, “gotten to zero”, a state of pure mental blissfulness in which your brain is not worrying about anything except what you’re doing right now.

Similarly, for a statement like “Wulky Wilkinsen is a post-utopian” or “Earth gravity pulls at 9.8 meters per second squared” to be falsifiable, it must be concretized—rendered near-to-experience—to a sufficient degree that you can potentially see something and say “Oh, guess the hypothesis was wrong”; you must be able to have an experience which the concretized statement constrains, and which falsifies the theory if the experience is out-of-bounds.

Theoretically: If you imagine the universe as a huge directed graph of causes and effects—the Great Web of Causality—then “concreteness” is being near enough in the Web to either your sensory inputs or motor outputs that you can directly see the prediction unfold, or directly implement the plan, without much further thought.

“Be Specific” and “Be Concrete” could easily end up being the same unit—they’re closely related—and we’re happy to entertain exercises for Being Concrete, as well as Being Specific. Visualizing what your customer literally sees or does after navigating to your site would’ve been a good first step toward being able to answer many of Paul Graham’s questions.


A possible success criterion:

One question we spent a lot of time discussing at CMR was how to translate our sense of “specific enough” or “concrete enough” into a describable criterion, instead of just a wordless intuition for when something is “too abstract”.

There was an exchange during Paul Graham’s office hours, while he was interviewing a startup that did metrics—analyzing pageviews, roughly—in which the entrepreneur had great trouble describing what they did that MixPanel didn’t. It went on for a while. It was painful to watch.

Paul: I don’t get what the difference is. I still don’t get what the difference is. What’s the difference between you and MixPanel?

Entrepreneur: The difference is—when you have to supplement—they’re a view company and we’re a platform. That’s what it comes down to. They’re like a view, a reporting company. If you need something they don’t have, a feature -

Harj: So what’s an example of somewhere you’d use your thing over MixPanel? Can you give a use-case?

Entrepreneur: Yeah, I mean, we had revenue on day zero. There’s a good reason for um… it’s a start up, it’s a series A company in the daily deals space. One we’ve signed a social game company to -

Harj: And why do they prefer your thing?

Paul: That wasn’t what Harj was asking.

The problem (from the perspective of our present discussion) is that the Entrepreneur did not understand that Paul and Harj were repeatedly asking him to move downward on the ladder of abstraction. When the Entrepreneur said “We had revenue on day zero”, he was trying to offer confirmation of the abstract statement “We can do things MixPanel can’t”, but Paul and Harj still had no idea what his startup actually did.[1]

A quick bit of theoretical background: There’s an important difference, in the field of mathematical logic, between models and axioms. An axiom is something like “All kittens are cute”, i.e. “All x: kitten(x)->cute(x)”. A model is a particular universe of objects that includes {Obj #19834, kitten: T, cute: T, color: grey} and {Obj #19835, kitten: F, cute: F, color: striped}, and so on.
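(A toy rendering of that distinction in code, assuming we flatten the example objects above into Python dicts: the axiom is a universally quantified sentence, and the model is just a list of objects the sentence can be checked against.)

```python
# A particular model: a small universe of objects with their attributes.
model = [
    {"id": 19834, "kitten": True,  "cute": True,  "color": "grey"},
    {"id": 19835, "kitten": False, "cute": False, "color": "striped"},
]

# The axiom "All x: kitten(x) -> cute(x)", checked against this one model.
def axiom_holds(universe):
    return all((not obj["kitten"]) or obj["cute"] for obj in universe)

print(axiom_holds(model))  # True in this model; a different model could falsify it
```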

Correspondingly, in logical inference, there’s a distinction between model-checking and deduction. Suppose you want to know whether it’s true that all positive integers less than 5, when multiplied by 7, are less than 50. If you prove the general truth that all integers less than 5, times 7, are less than 35, by manipulating the axioms of multiplication and inequality, that’s deduction. If you notice that the only positive integers less than 5 are just {1, 2, 3, 4} and enumerate their products {7, 14, 21, 28}, which are all less than 50, that’s model-checking.
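(The same contrast as a few lines of computation: deduction manipulates the general rule, while model-checking, sketched below, just enumerates the cases.)

```python
# Model-checking the claim that every positive integer below 5, times 7,
# is below 50: enumerate the only candidates and inspect their products.
candidates = [1, 2, 3, 4]
products = [7 * x for x in candidates]
print(products)                       # [7, 14, 21, 28]
print(all(p < 50 for p in products))  # True: verified by enumeration
```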

My hypothesis about what it means to be “specific enough” or “concrete enough” is that the picture painted is detailed enough to use in model-checking whatever points are being debated. Paul and Harj don’t want to trust you when you state the abstract generalization, “We’re better than MixPanel”. They aren’t even content with deducing support for this generalization from the further generalization, “We already have customers.” They want a picture of something you do that MixPanel doesn’t, which is detailed enough that they can model-check whether you have a competitive advantage.

Not to mention that Paul Graham is probably thinking about a number of other questions:

  • How much would I pay for this product?

  • Is this startup exciting enough that I would tweet about using it?

  • How many resources will it take to develop these features further?

Paul Graham doesn’t want you to say, “$50, yes, and twenty engineer-months”. He wants a sufficiently specific picture of (a customer using) your product that he can arrive at his own answers by model-checking.

If Paul Graham is reading this, he’s welcome to contradict my interpretation of what was going on in that particular session—but it did seem like a very nice concrete illustration.

That’s my guess for what often constitutes “specific enough”—though I’m not sure that’s the only thing that ever determines specific-enoughness.

[1]: The strange part was, near the end of that session, it started to look like this might be an interesting startup; that the Entrepreneur wasn’t just bluffing. Their actual use-case was to let customers easily roll their own code to measure, e.g., the page-viewing behavior of only customers who’d bought more than $200 worth of stuff, which allegedly MixPanel wouldn’t let you do. Which would’ve been a perfectly good answer if the Entrepreneur had given it at the start of the session, instead of the whole session being about Paul and Harj trying to get at that information.


Five-second-level skill:

The 5SL skill for this problem requires:

  • Trigger: Recognizing when your words or thoughts are too abstract.

  • Action: Moving downward in the abstraction lattice, or moving nearer to sense input or motor output; being able to render your thoughts more specific or more concrete.

Both of these are targetable for exercises.


Pain points & Pluses:

• You want Paul Graham to believe your startup is better than MixPanel. So you say, “My startup is better than MixPanel”—just produce the pure abstract conclusion you want Paul Graham to arrive at. You keep trying to convince Paul Graham of this statement, saying that you have customers or that you have venture capital, but never actually move downward to the level where Paul Graham could arrive at this conclusion by model-checking.

• You want to describe what your software does, so you say it makes connections between people. You have something specific in mind, but the words coming out of your mouth are so general that—although you’re not thinking of those other cases—they could apply equally well to Facebook or telegraph lines. Paul Graham has no idea at all what you’re trying to describe and is giving you blank looks.

• The worse version—and the reason why Paul Graham doesn’t just trust you, even if he thinks you’re honest—is the case where you yourself want to believe your startup is better than Facebook, but you can’t think of any specific thing your startup does better than Facebook, so you think of other abstract generalizations that seem to support the conclusion, like “We have smarter people” or “We got more funding earlier.” Where fuzzy thinking is motivated, overly abstract thinking is motivated.

• Abstract words can also avoid emotion. George Orwell: “Defenceless villages are bombarded from the air, the inhabitants driven out into the countryside, the cattle machine-gunned, the huts set on fire with incendiary bullets: this is called pacification.” Or contrast “Humanity is awful, it’d be better for the planet if we all died” to “Everyone including my little sister is awful, we’d be better off if everyone died including her.” To feel sympathy, we need enough concrete detail that our emotions can model-check the picture and be activated.

• Cognitive-behavioral therapy is the big experimentally supported version of therapy, for anyone not aware of this, bearing very little resemblance to anything Freudian. CBT talks about using requests for specific details to interrupt thoughts looping around vague but affectively laden centers, like “I am a good husband”, “I am a bad husband”, or “my roommate is a slob”. How are you a good husband? How are you a bad husband? Which specific feature of your roommate are you objecting to? Taboo the emotionally valent word at the center, like “slob”, and replace it with something that’s specific enough to be testable, or concrete enough to be acted upon.

•• Contrast also “It bothers me when you leave soda cans on the table” vs. “You’re such a slob, stop being such a slob.” Or contrast: “I’m upset” → “I’m upset because I think the other person is looking down on me” → “I’m upset because the person’s tone of voice sounds like people who looked down on me in high school”. This is related to the incredibly important skill of searching for the historical causes of your thoughts, rather than their justifications.

• Focusing on the specific details of a concrete example, instead of repeating a word or arguing about a category, can interrupt Sneaking in Connotations and Arguing By Definition.

• All the failures of concreteness warned against in the Mysterious Answers sequence, where you go on and on about how Wulky Wilkinsen is a post-utopian without ever once asking or imagining how the world ought to look, and what you yourself should experience, if that were true or alternatively false.

• Visualizing specific examples often improves quality of thought in general—we’re often smarter when we’re using both model-checking and deduction, visualizing a picture of what we’re supposed to be reasoning about, constantly checking our deductive steps against some specific model those deductions are supposed to be true about. Saith Richard Feynman:

I had a scheme, which I still use today when somebody is explaining something that I’m trying to understand: I keep making up examples. For instance, the mathematicians would come in with a terrific theorem, and they’re all excited. As they’re telling me the conditions of the theorem, I construct something which fits all the conditions. You know, you have a set (one ball) - disjoint (two balls). Then the balls turn colors, grow hairs, or whatever, in my head as they put more conditions on. Finally they state the theorem, which is some dumb thing about the ball which isn’t true for my hairy green ball thing, so I say, “False!”

If it’s true, they get all excited, and I let them go on for a while. Then I point out my counterexample.

“Oh. We forgot to tell you that it’s Class 2 Hausdorff homomorphic.”

“Well, then,” I say, “It’s trivial! It’s trivial!”

• Being specific helps notice and call bluffs, should you be mischievously inclined.

“Beware, demon!” he intoned hollowly. “I am not without defenses.”
“Oh yeah? Name three.”
Robert Asprin, Another Fine Myth

Wannabe executive: “I will improve communications between employees and management.”
Me: “Can you give me a specific example of how you would do that?”


Known exercises for this skill:

In our previous Rationality Camps, Anna found that her attempt to teach a unit on “Being Specific” didn’t seem to work. Her central exercise was picking a category and asking people to name examples.

This isn’t to say that the Camps were unsuccessful at teaching the skill. Attendees picked it up, not from the explicit unit, but from all the instructors having to repeatedly ask the attendees to be more specific, and then having to ask them again, while being specific themselves, until the attendees picked up the rhythm by example and feedback.

Given our present teaching technology, this skill seems transmissible from master to apprentice, but not yet replicable by exercises. That’s why we’re turning it over to you.