Hold Off On Proposing Solutions

From Robyn Dawes’s Rational Choice in an Uncertain World.1 Bolding added.

Norman R. F. Maier noted that when a group faces a problem, the natural tendency of its members is to propose possible solutions as they begin to discuss the problem. Consequently, the group interaction focuses on the merits and problems of the proposed solutions, people become emotionally attached to the ones they have suggested, and superior solutions are not suggested. Maier enacted an edict to enhance group problem solving: “Do not propose solutions until the problem has been discussed as thoroughly as possible without suggesting any.” It is easy to show that this edict works in contexts where there are objectively defined good solutions to problems.

Maier devised the following “role playing” experiment to demonstrate his point. Three employees of differing ability work on an assembly line. They rotate among three jobs that require different levels of ability, because the most able—who is also the most dominant—is strongly motivated to avoid boredom. In contrast, the least able worker, aware that he does not perform the more difficult jobs as well as the other two, has agreed to rotation because of the dominance of his able co-worker. An “efficiency expert” notes that if the most able employee were given the most difficult task and the least able the least difficult, productivity could be improved by 20%, and the expert recommends that the employees stop rotating. The three employees and . . . a fourth person designated to play the role of foreman are asked to discuss the expert’s recommendation. Some role-playing groups are given Maier’s edict not to discuss solutions until having discussed the problem thoroughly, while others are not. Those who are not given the edict immediately begin to argue about the importance of productivity versus worker autonomy and the avoidance of boredom. Groups presented with the edict have a much higher probability of arriving at the solution that the two more able workers rotate, while the least able one sticks to the least demanding job—a solution that yields a 19% increase in productivity.

I have often used this edict with groups I have led—particularly when they face a very tough problem, which is when group members are most apt to propose solutions immediately. While I have no objective criterion on which to judge the quality of the problem solving of the groups, Maier’s edict appears to foster better solutions to problems.

This is so true it’s not even funny. And it gets worse and worse the tougher the problem becomes. Take artificial intelligence, for example. A surprising number of people I meet seem to know exactly how to build an artificial general intelligence, without, say, knowing how to build an optical character recognizer or a collaborative filtering system (much easier problems). And as for building an AI with a positive impact on the world—a Friendly AI, loosely speaking—why, that problem is so incredibly difficult that an actual majority resolve the whole issue within fifteen seconds.2 Give me a break.

This problem is by no means unique to AI. Physicists encounter plenty of nonphysicists with their own theories of physics, economists get to hear lots of amazing new theories of economics. If you’re an evolutionary biologist, anyone you meet can instantly solve any open problem in your field, usually by postulating group selection. Et cetera.

Maier’s advice echoes the principle of the bottom line, that the effectiveness of our decisions is determined only by whatever evidence and processing we did in first arriving at our decisions—after you write the bottom line, it is too late to write more reasons above. If you make your decision very early on, it will, in fact, be based on very little thought, no matter how many amazing arguments you come up with afterward.

And consider furthermore that we change our minds less often than we think: 24 people assigned an average 66% probability to the future choice they thought more probable, but only 1 of the 24 actually chose the option they had thought less probable. Once you can guess what your answer will be, you have probably already decided. If you can guess your answer half a second after hearing the question, then you have half a second in which to be intelligent. It’s not a lot of time.
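The gap that statistic describes can be made concrete with a little arithmetic (a rough sketch; only the averages above come from the source, and the calibration baseline is an illustrative assumption):

```python
# Sketch: how far the stated probabilities were from the realized choices.
# Source figures: 24 people, an average 66% probability assigned to the
# option each thought more probable, and 1 of 24 who actually switched.

n_people = 24
avg_stated_prob = 0.66   # average confidence in the favored option
n_switched = 1           # people who chose the option they'd rated less probable

# If those 66% confidences were well calibrated, roughly 34% of the
# group -- about 8 people -- should have ended up switching.
expected_switchers = (1 - avg_stated_prob) * n_people

# In fact the switch rate was about 4%, an eighth of that.
actual_switch_rate = n_switched / n_people

print(f"expected switchers if calibrated: {expected_switchers:.1f}")
print(f"actual switch rate: {actual_switch_rate:.1%}")
```

In other words, by the time these people reported a mere 66% leaning, their decisions were already far more settled than that number suggests.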

Traditional Rationality emphasizes falsification—the ability to relinquish an initial opinion when confronted by clear evidence against it. But once an idea gets into your head, it will probably require way too much evidence to get it out again. Worse, we don’t always have the luxury of overwhelming evidence.

I suspect that a more powerful (and more difficult) method is to hold off on thinking of an answer. To suspend, draw out, that tiny moment when we can’t yet guess what our answer will be; thus giving our intelligence a longer time in which to act.

Even half a minute would be an improvement over half a second.

1Robyn M. Dawes, Rational Choice in an Uncertain World, 1st ed., ed. Jerome Kagan (San Diego, CA: Harcourt Brace Jovanovich, 1988), 55–56.

2See Yudkowsky, “Artificial Intelligence as a Positive and Negative Factor in Global Risk.”