David Marr on two types of information-processing problems

I found an essay written by David Marr called "Artificial Intelligence—a personal view" that I thought was fairly insightful. Marr first discusses how information processing problems are generally solved:

The solution to an information processing problem divides naturally into two parts. In the first, the underlying nature of a particular computation is characterized, and its basis in the physical world is understood. One can think of this part as an abstract formulation of what is being computed and why, and I shall refer to it as the “theory” of a computation. The second part consists of particular algorithms for implementing a computation, and so it specifies how.

This is reminiscent of Marr’s three levels of analysis.
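
To make the what/how split concrete, here is a toy example of my own (not Marr's): the computational theory of sorting says only that the output must be the input rearranged into non-decreasing order, while insertion sort and merge sort are two different algorithmic-level answers that satisfy the same theory. A minimal Python sketch:

```python
# Illustrative sketch only (my example, not Marr's).
# "Theory" level: a specification of what is computed and why it counts as correct.
def is_sorted_permutation(xs, ys):
    return sorted(xs) == list(ys)

# "Algorithm" level, answer one: insertion sort.
def insertion_sort(xs):
    out = []
    for x in xs:
        i = len(out)
        while i > 0 and out[i - 1] > x:
            i -= 1
        out.insert(i, x)
    return out

# "Algorithm" level, answer two: merge sort.
def merge_sort(xs):
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

data = [5, 1, 4, 1, 3]
assert is_sorted_permutation(data, insertion_sort(data))
assert is_sorted_permutation(data, merge_sort(data))
```

Both procedures are correct relative to the same theory; the theory itself does not pick between them.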

Next, Marr draws a distinction between a Type 1 information processing problem and a Type 2 problem. A Type 1 problem has a solution that naturally divides along the lines mentioned above: first one formulates the computational theory behind it, and then one devises an algorithm to implement the computation. Marr proposes, however, that there is a class of problems that doesn't fit this description:

The fly in the ointment is that while many problems of biological information processing have a Type 1 theory, there is no reason why they should all have. This can happen when a problem is solved by the simultaneous action of a considerable number of processes, whose interaction is its own simplest description, and I shall refer to such a situation as a Type 2 theory. One promising candidate for a Type 2 theory is the problem of predicting how a protein will fold. A large number of influences act on a large polypeptide chain as it flaps and flails in a medium. At each moment only a few of the possible interactions will be important, but the importance of those few is decisive. Attempts to construct a simplified theory must ignore some interactions; but if most interactions are crucial at some stage during the folding, a simplified theory will prove inadequate.
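
Marr is not describing an algorithm here, but the flavor of a Type 2 situation can be gestured at with a toy simulation (entirely my own construction, not a protein model; the coupling strengths and the 0.5 cutoff are arbitrary assumptions): many weak pairwise couplings drive a greedy relaxation, and a "simplified theory" that discards the weakest couplings may settle into a different configuration than the full model, because a coupling that is usually negligible can be decisive at some step.

```python
# Toy sketch only: a chain of +/-1 units greedily flips to lower a pairwise
# interaction energy. The "full" model keeps every coupling; the "simplified"
# model zeroes out couplings weaker than an arbitrary threshold. Depending on
# the random draw, the two relaxations can end in different configurations.
import random

def relax(state, couplings, steps=2000, seed=0):
    """Greedily flip single units whenever doing so lowers the total energy."""
    rng = random.Random(seed)
    state = list(state)
    n = len(state)

    def energy(s):
        return -sum(couplings[i][j] * s[i] * s[j]
                    for i in range(n) for j in range(i + 1, n))

    for _ in range(steps):
        i = rng.randrange(n)
        candidate = state[:]
        candidate[i] = -candidate[i]
        if energy(candidate) < energy(state):
            state = candidate
    return state

n = 12
rng = random.Random(42)
full = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        full[i][j] = full[j][i] = rng.uniform(-1.0, 1.0)

# "Simplified theory": ignore every interaction weaker than the cutoff.
simplified = [[c if abs(c) > 0.5 else 0.0 for c in row] for row in full]

start = [rng.choice([-1, 1]) for _ in range(n)]
print("full model:      ", relax(start, full))
print("simplified model:", relax(start, simplified))
```

Whether the two runs actually diverge depends on the particular draw; the point is only that nothing in the simplified model tells you in advance which of the discarded couplings would have mattered.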

More discussion about Type 1 and Type 2 problems follows, but I'm not going to summarize it. It is well worth reading, however. I did think this critique of the GOFAI program was pretty sharp for having been formulated in 1977:

For very advanced problems like story-understanding, current research is often purely exploratory. That is to say, in these areas our knowledge is so poor that we cannot even begin to formulate the appropriate questions, let alone solve them.

...

Most of the history of A.I. (now fully 16 years old) has consisted of exploratory studies. Some of the best-known are Slagle's [24] symbolic integration program, Weizenbaum's [30] Eliza program, Evans' [4] analogy program, Raphael's [19] SIR, Quillian's [18] semantic nets and Winograd's [32] Shrdlu. All of these programs have (in retrospect) the property that they are either too simple to be interesting Type 1 theories, or very complex yet perform too poorly to be taken seriously as a Type 2 theory.

And yet many things have been learnt from these experiences—mostly negative things (the first 20 obvious ideas about how intelligence might work are too simple or wrong)… The mistakes made in the field lay not in having carried out such studies—they formed an essential part of its development—but consisted mainly in failures of judgement about their value, since it is now clear that few of the early studies themselves formulated any solvable problems.

If we accept this taxonomy, then where does Friendliness fit in? My hunch is that it's a Type 2 problem. If this is so, what Type 1 problems can be focused on in the present?