Distinctions in Types of Thought

Epistemic status: speculative

For a while, I’ve had the intuition that current machine learning techniques, though powerful and useful, are simply not touching some of the functions of the human mind. But before I can really get at how to justify that intuition, I would have to start clarifying what different kinds of thinking there are. I’m going to be reinventing the wheel a bit here, not having read that much cognitive science, but I wanted to write down some of the distinctions that seem important, and try to see whether they overlap. A lot of this is inspired by Dreyfus’ Being-in-the-World. I’m also trying to think about the questions raised in the post “What are Intellect and Instinct?”

Effortful vs. Effortless

In English, we have different words for perceiving passively versus actively paying attention. To see vs. to look, to hear vs. to listen, to touch vs. to feel. To go looking for a sensation means exerting a sort of mental pressure; in other words, effort. William James, in his Principles of Psychology, said “Attention and effort, as we shall see later, are but two names for the same psychic fact.” He says, in his famous introduction to attention, that

Every one knows what attention is. It is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Localization, concentration, of consciousness are of its essence. It implies withdrawal from some things in order to deal effectively with others, and is a condition which has a real opposite in the confused, dazed, scatterbrained state which in French is called distraction, and Zerstreutheit in German.

Furthermore, James divides attention into two types:

Passive, reflex, non-voluntary, effortless; or Active and voluntary.

In other words, the mind is always selecting certain experiences or thoughts as more salient than others; but sometimes this is done in an automatic way, and sometimes it’s effortful/voluntary/active. A fluent speaker of a language will automatically notice a grammatical error; a beginner will have to try effortfully to catch the error.

In the famous gorilla experiment, where subjects instructed to count passes in a basketball game failed to notice a gorilla on the basketball court, “counting the passes” is paying effortful attention, while “noticing the gorilla” would be effortless or passive noticing.

Activity in “flow” (playing a musical piece by muscle memory, or spatially navigating one’s own house) is effortless; activities one learns for the first time are effortful.

Oliver Sacks’ case studies are full of stories that illustrate the importance of flow. People with motor disorders like Parkinson’s can often dance or walk in rhythm to music, even when ordinary walking is difficult; people with memory problems sometimes can still recite verse; people who cannot speak can sometimes still sing. “Fluent” activities can remain undamaged when similar but more “deliberative” activities are lost.

The author of intellectualizing.net thinks about this in the context of being an autistic parent of an autistic son:

Long ago somewhere I can’t remember, I read a discussion of knowing what vs. knowing how. The author’s thought experiment was about walking. Imagine walking with conscious planning, thinking consciously about each muscle and movement involved. Attempting to do this makes us terrible at walking.

When I find myself struggling with social or motor skills, this is the feeling. My impression of my son is the same. Rather than trying something, playing, experimenting, he wants the system first. First organize and analyze it, then carefully and cautiously we might try it.

A simple example. There’s a curriculum for writing called Handwriting Without Tears. Despite teaching himself to read when barely 2, my son refused to even try to write. Then someone showed him this curriculum in which letters are broken down into three named categories according to how you write them; and then each letter has numbered strokes to be done in sequence. Suddenly my son was interested in writing. He approached it by first memorizing the whole Handwriting Without Tears system, and only then was he willing to try to write. I believe this is not how most 3-year-olds work, but this is how he works.

One simple study (“Children with autism do not overimitate”) had to do with children copying “unnecessary” or “silly” actions. Given a demonstration by an adult, autistic kids would edit out pointless steps in the demonstrated procedure. Think about what’s required to do this: the procedure has to be reconstructed from first principles to edit the silly out. The autistic kids didn’t take someone’s word for it, they wanted to start over.

The author and his son learn, by effortful conscious planning, skills that most people learn by “picking up” or “osmosis” or “flow.”

Most of the activity described by Heidegger’s Being and Time, and Dreyfus’ commentary Being-In-The-World, is effortless flow-state “skilled coping.” Handling a familiar piece of equipment, like typing on a keyboard, is a prototypical example. You’re not thinking about how to do it except when you’re learning how for the first time, or if it breaks or becomes “disfluent” in some way. If I’m interpreting him correctly, I think Dreyfus would say that neurotypical adults spend most of their time, minute-by-minute, in an effortless flow state, punctuated by occasions when they have to plan, try hard, or figure something out.

William James would agree that voluntary attention occupies a minority of our time:

There is no such thing as voluntary attention sustained for more than a few seconds at a time. What is called sustained voluntary attention is a repetition of successive efforts which bring back the topic to the mind.

(This echoes the standard advice in mindfulness meditation that you’re not aiming for the longest possible period of uninterrupted focus; you’re training the mental motion of returning focus from mind-wandering.)

Effortful attention can also be viewed as the cognitive capacities which stimulants improve. Reaction times shorten, and people distinguish and remember the stimuli in front of them better.

It’s important to note that not all focused attention is effortful attention. If you are playing a familiar piece on the piano, you’re in a flow state, but you’re still being “focused” in a sense; you’re noticing the music more than you’re noticing conversation in another room, you’re playing this piece rather than any other, you’re sitting uninterrupted at the piano rather than multitasking. Effortless flow can be extremely selective and hyper-focused (like playing the piano), just as much as it can be diffuse, responsive, and easily interruptible (like navigating a crowded room). It’s not the size of your window of salience that distinguishes flow from effortful attention, it’s the pressure that you apply to that window.

Psychologists often call effortful attention cognitive disfluency, and find that experiences of disfluency (such as a difficult-to-read font) improve syllogistic reasoning and reduce reliance on heuristics, while making people more likely to make abstract generalizations. Disfluency improves results on measures of “careful thinking” like the Cognitive Reflection Test as well as on real-world high-school standardized tests, and also makes people less likely to confess embarrassing information on the internet. In other words, disfluency makes people “think before they act.” Disfluency raises heart rate and blood pressure, just like exercise, and people report it as being difficult and reliably disprefer it to cognitive ease. The psychology research seems consistent with there being such a thing as “thinking hard.” Effortful attention occupies a minority of our time, but it’s prominent in the most specifically “intellectual” tasks, from solving formal problems on paper to making prudent personal decisions.

What does it mean, on a neurological or a computational level, to expend mental effort? What, precisely, are we doing when we “try hard”? I think it might be an open question.

Do the neural networks of today simulate an agent in a state of “effortless flow” or “effortful attention”, or both or neither? My guess would be that deep neural nets and reinforcement learners are generally doing effortless flow, because they excel at the tasks that we generally do in a flow state (pattern recognition and motor learning).

Explicit vs. Implicit

Dreyfus, as an opponent of the Representational Theory of Mind, believes that (most of) cognition is not only not based on a formal system, but not in principle formalizable. He thinks you couldn’t possibly write down a theory or a set of rules that explain what you’re doing when you drive a car, even if you had arbitrary amounts of information about the brain and human behavior and arbitrary amounts of time to analyze them.

This distinction seems to include the distinctions of “declarative vs. procedural knowledge”, “know-what vs. know-how”, savoir vs. connaître. We can often do, or recognize, things that we cannot explain.

I think this issue is related to the issue of interpretability in machine learning; the algorithm executes a behavior, but sometimes it seems difficult or impossible to explain what it’s doing in terms of a model that’s simpler than the whole algorithm itself.

The seminal 2001 article by Leo Breiman, “Statistical Modeling: The Two Cultures”, and Peter Norvig’s essay “On Chomsky and the Two Cultures of Statistical Learning” are about this issue. The inverse square law of gravitation and an n-gram Markov model for predicting the next word in a sentence are both statistical models, in some sense; they allow you to predict the dependent variable given the independent variables. But the inverse square law is interpretable (it makes sense to humans) and explanatory (the variables in the model match up to distinct phenomena in reality, like masses and distances, and so the model is a relationship between things in the world).
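To make the contrast concrete, here is a toy sketch of both kinds of “statistical model”: a gravitational fit whose single parameter has physical meaning, and a bigram predictor whose parameters are nothing but word-pair counts. The masses, observations, and nine-word corpus are all made up for illustration.

```python
from collections import Counter, defaultdict

# An "interpretable" model: recover the gravitational constant G from
# simulated (r, F) observations, with made-up masses m1 = 5.0 and m2 = 3.0.
# The single fitted parameter corresponds to something in the world.
m1, m2 = 5.0, 3.0
data = [(r, 6.674e-11 * m1 * m2 / r**2) for r in (1.0, 2.0, 4.0)]
G_hat = (sum(F * (m1 * m2 / r**2) for r, F in data)
         / sum((m1 * m2 / r**2) ** 2 for r, _ in data))  # least squares

# An "uninterpretable" model: a bigram predictor whose parameters are
# just conditional word counts.
corpus = "the cat sat on the mat the cat ate".split()
bigrams = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams[w1][w2] += 1

def predict_next(word):
    # most frequent successor observed in the training corpus
    return bigrams[word].most_common(1)[0][0]

print(G_hat)                # recovers ~6.674e-11, a meaningful quantity
print(predict_next("the"))  # predicts "cat", with no explanation attached
```

Both models predict; only the first explains. The bigram table already holds several “parameters” for a nine-word corpus, and a realistic language model holds millions, none of which mean anything on their own.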

Modern machine learning models, like the n-gram predictor, have vast numbers of variables that don’t make sense to humans and don’t obviously correspond to things in the world. They perform well without being explanations. Statisticians tend to prefer parametric models (which are interpretable and sometimes explanatory) while machine-learning experts use a lot of non-parametric models, which are complex and opaque but often have better empirical performance. Critics of machine learning argue that a black-box model doesn’t bring understanding, and so is the province of engineering rather than science. Defenders, like Norvig, flip open a random issue of Science and note that most of the articles are not discovering theories but noting observations and correlations. Machine learning is just another form of pattern recognition or “modeling the world”, which constitutes the bulk of scientific work today.

These are heuristic descriptions; these essays don’t make explicit how to test whether a model is interpretable or not. I think it probably has something to do with model size; is the model reducible to one with fewer parameters, or not? If you think about it that way, it’s obvious that “irreducibly complex” models, of arbitrary size, can exist in principle — you can just build simulated data sets that fit them and can’t be fit by anything simpler.

How much of human thought and behavior is “irreducible” in this way, resembling the huge black-box models of contemporary machine learning? Plausibly a lot. I’m convinced by the evidence that visual perception runs on something like convolutional neural nets, and I don’t expect there to be “simpler” underlying laws. People accumulate a lot of data and feedback through life, much more than scientists ever do for an experiment, so they can “afford” to do as any good AI startup does, and eschew structured models for open-ended, non-insightful ones, compensating with an abundance of data.
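The “build a data set that can’t be fit by anything simpler” construction can be sketched in a few lines (everything here is hypothetical data and textbook least squares): generate outputs from a random lookup table, so that by design no simpler law produces them; a two-parameter line then explains almost nothing, while a “model” as large as the data fits it exactly.

```python
import random

random.seed(0)
# An "irreducibly complex" target: a random lookup table from inputs to
# outputs. By construction, no law simpler than the table generates it.
train = [(float(x), random.random()) for x in range(100)]

# A two-parameter model (a line, fit by least squares) barely compresses it...
n = len(train)
mx = sum(x for x, _ in train) / n
my = sum(y for _, y in train) / n
slope = (sum((x - mx) * (y - my) for x, y in train)
         / sum((x - mx) ** 2 for x, _ in train))
line_mse = sum((y - (my + slope * (x - mx))) ** 2 for x, y in train) / n

# ...while a memorizer the size of the data (1-nearest-neighbor over the
# training set) fits it perfectly.
def memorizer(x):
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

memorizer_mse = sum((y - memorizer(x)) ** 2 for x, y in train) / n

print(line_mse)       # substantial: the line has learned almost nothing
print(memorizer_mse)  # exactly 0.0: the "model" is as big as the data
```

This is the parametric-vs-nonparametric trade in miniature: the small model is legible but wrong, and the opaque one is right but no smaller than the phenomenon it describes.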

Subject-Object vs. Relational

This is a concept in Dreyfus that I found fairly hard to pin down, but the distinction seems to be operating upon the world vs. relating to the world. When you are dealing with raw material — say you are a potter with a piece of clay — you think of yourself as active and the clay as passive. You have a goal (say, making a pot) and the clay has certain properties; how you act to achieve your goal depends on the clay’s properties.

By contrast, if you’re interacting with a person or an animal, or even just an object with a UI, like a stand mixer, you’re relating to your environment. The stand mixer “lets you do” a small number of things — you can change attachments or speeds, raise the bowl up and down, remove the bowl, fill it with food or empty it. You orient to these affordances. You do not, in the ordinary process of using a stand mixer, think about whether you could use it as a step-stool or a weapon or a painting tool. (Though you might if you are a child, or an engineer or an artist.) Ordinarily you relate in an almost social, almost animist, way to the stand mixer. You use it as it “wants to be used”, or rather as its designer wants you to use it; you are “playing along” in some sense, being receptive to the external intentions you intuit.

And, of course, when we are relating to other people, we do much stranger and harder-to-describe things; we become different around them, we are no longer solitary agents pursuing purely internally-driven goals. There is such a thing as becoming “part of a group.” There is the whole messy business of culture.

For the most part, I don’t think machine-learning models today are able to do either subject-object or relational thinking; the problems they’re solving are so simple that neither paradigm seems to apply. “Learn how to work a stand mixer” or “Figure out how to make a pot out of clay” both seem beyond the reach of any artificial intelligence we have today.

Aware vs. Unaware

This is the difference between sight and blindsight. It’s been shown that we can act on the basis of information that we don’t know we have. Some blind people are much better than chance at guessing where a visual stimulus is, even though they claim sincerely to be unable to see it. Being primed by a cue makes blindsight more accurate — in other words, you can have attention without awareness.

Anosognosia is another window into awareness; it is the phenomenon in which disabled people are not aware of their deficits (which may be motor, sensory, speech-related, or memory-related). In unilateral neglect, for instance, a stroke victim might be unaware that she has a left side of her body; she won’t eat the left half of her plate, make up the left side of her face, etc. Sensations may still be possible on the left side, but she won’t be aware of them. Squirting cold water in the left ear can temporarily fix this, for unknown reasons.

Awareness doesn’t need to be explicit or declarative; we aren’t formalizing words or systems constantly when we go through ordinary waking life. It also doesn’t need to be effortful attention; we’re still aware of the sights and sounds that enter our attention spontaneously.

Efference copy signals seem to provide a clue to what’s going on in awareness. When we act (such as to move a limb), we produce an “efference copy” of what we expect our sensory experience to be, while simultaneously we receive the actual sensory feedback. “This process ultimately allows sensory reafferents from motor outputs to be recognized as self-generated and therefore not requiring further sensory or cognitive processing of the feedback they produce.” This is what allows you to keep a ‘still’ picture of the world even though your eyes are constantly moving, and to tune out the sensations from your own movements and the sound of your own voice.
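The efference-copy loop can be cartooned as follows; everything here (the forward model, the assumption that one unit of motor command shifts the visual input by one unit, the zero-surprise tagging rule) is a deliberate oversimplification for illustration, not a claim about the real mechanism:

```python
def forward_model(motor_command):
    # assumed toy dynamics: one unit of command shifts the input by one unit
    return motor_command  # the "efference copy": predicted sensory change

def perceive(sensory_shift, motor_command):
    predicted = forward_model(motor_command)
    surprise = sensory_shift - predicted
    # feedback matching the prediction is tagged self-generated and suppressed
    tag = "self-generated" if surprise == 0 else "external"
    return tag, surprise

# The eye moves 3 units and the image shifts 3 units: a "still" world.
print(perceive(3, 3))  # ('self-generated', 0)
# The image shifts while the eye is still: experienced as external motion.
print(perceive(3, 0))  # ('external', 3)
```

In this cartoon, a broken forward model would mis-tag self-caused input as external, which is one way to gloss the schizophrenia hypothesis discussed next.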

Schizophrenics may be experiencing a dysfunction of this self-monitoring system; they have “delusions of passivity or thought insertion” (believing that their movements or thoughts are controlled from outside) or “delusions of grandeur or reference” (believing that they control things with their minds that they couldn’t possibly control, or that things in the outside world are “about” themselves when they aren’t). They have a problem distinguishing self-caused from externally-caused stimuli.

We’re probably keeping track, somewhere in our minds, of things labeled as “me” and “not me” (my limbs are part of me, the table next to me is not), sensations that are self-caused and externally-caused, and maybe also experiences that we label as “ours” vs. not (we remember them, they feel like they happened to us, we can attest to them, we believe they were real rather than fantasies).

It might be as simple as just making a parallel copy of information labeled “self,” as the efference-copy theory has it. And (probably in a variety of complicated and as-yet-unknown ways), our brains treat things differently when they are tagged as “self” vs. “other.”

Maybe when experiences are tagged as “self” or labeled as memories, we are aware that they are happening to us. Maybe we have a “Cartesian theater” somewhere in our brain, through which all experiences we’re aware of pass, while the unconscious experiences can still affect our behavior directly. This is all speculation, though.

I’m pretty sure that current robots or ML systems don’t have any special distinction between experiences inside and outside of awareness, which means that for all practical purposes they’re always operating on blindsight.

Relationships and Corollaries

I think that, in order of the proportion of ordinary neurotypical adult life they take up, awareness > effortful attention > explicit systematic thought. When you look out the window of a train, you are aware of what you see, but not using effortful attention or thinking systematically. When you are mountain-climbing, you are using effortful attention, but not thinking systematically very much. When you are writing an essay or a proof, you are using effortful attention, and using systematic thought more, though perhaps not exclusively.

I think awareness, in humans, is necessary for effortful attention, and effortful attention is usually involved in systematic thought. (For example, notice how concentration and cognitive disfluency improve the ability to generalize or follow reasoning principles.) I don’t know whether those necessary conditions hold in principle, but they seem to hold in practice.

Which means that, since present-day machine-learners aren’t aware, there’s reason to doubt that they’re going to be much good at what we’d call reasoning.

I don’t think classic planning algorithms “can reason” either; they’re hard-coding in the procedures they follow, rather than generating those procedures from simpler percepts the way we do. It seems like the same sort of misunderstanding as it would be to claim a camera can see.

(As I’ve said before, I don’t believe anything like “machines will never be able to think the way we do”, only that they’re not doing so now.)

The Weirdness of Thinking on Purpose

It’s popular these days to “debunk” the importance of the “intellect” side of “intellect vs. instinct” thinking. To point out that we aren’t always rational (true), are rarely thinking effortfully or explicitly (also true), can’t usually reduce our cognitive processes to formal systems (also true), and can be deeply affected by subconscious or subliminal processes (probably true).

Frequently, this debunking comes with a side order of sneer, whether at the defunct “Enlightenment” or “authoritarian high-modernist” notion that everything in the mind can be systematized, or at the process of abstract/deliberate thought itself and the people who like it. Jonathan Haidt’s lecture on “The Rationalist Delusion” is a good example of this kind of sneer.

The problem with the popular “debunking reason” frame is that it distracts us from noticing that the actual process of reasoning, as practiced by humans, is a phenomenon we don’t understand very well yet. Sure, Descartes may have thought he had it all figured out, and he was wrong; but thinking still exists even after you have rejected naive rationalism, and it’s a mistake to assume it’s the “easy part” to understand. Deliberative thinking, I would guess, is the hard part; that’s why the cognitive processes we understand best and can simulate best are the more “primitive” ones like sensory perception or motor learning.

I think it’s probably better to think of those cognitive processes that distinguish humans from animals as weird and mysterious and special, as “higher-level” abilities, rather than irrelevant and vestigial “degenerate cases”, which is how Heidegger seems to see them. Even if the “higher” cognitive functions occupy relatively little time in a typical day, they have outsize importance in making human life unique.

Two weirdly similar quotes:

“Three quick breaths triggered the responses: he fell into the floating awareness… focusing the consciousness… aortal dilation… avoiding the unfocused mechanism of consciousness… to be conscious by choice… blood enriched and swift-flooding the overload regions… one does not obtain food-safety freedom by instinct alone… animal consciousness does not extend beyond the given moment nor into the idea that its victims may become extinct… the animal destroys and does not produce… animal pleasures remain close to sensation levels and avoid the perceptual… the human requires a background grid through which to see his universe… focused consciousness by choice, this forms your grid… bodily integrity follows nerve-blood flow according to the deepest awareness of cell needs… all things/cells/beings are impermanent… strive for flow-permanence within…”

–Frank Herbert, Dune, 1965

“An animal’s consciousness functions automatically: an animal perceives what it is able to perceive and survives accordingly, no further than the perceptual level permits and no better. Man cannot survive on the perceptual level of his consciousness; his senses do not provide him with an automatic guidance, they do not give him the knowledge he needs, only the material of knowledge, which his mind has to integrate. Man is the only living species who has to perceive reality, which means: to be conscious — by choice. But he shares with other species the penalty for unconsciousness: destruction. For an animal, the question of survival is primarily physical; for man, primarily epistemological.

“Man’s unique reward, however, is that while animals survive by adjusting themselves to their background, man survives by adjusting his background to himself. If a drought strikes them, animals perish — man builds irrigation canals; if a flood strikes them, animals perish — man builds dams; if a carnivorous pack attacks them, animals perish — man writes the Constitution of the United States. But one does not obtain food, safety, or freedom — by instinct.”

–Ayn Rand, For the New Intellectual, 1963

(bold emphasis added, ellipses original).

“Conscious by choice” seems to be pointing at the phenomenon of effortful attention, while “the unfocused mechanism of consciousness” is more like awareness. There seems to be some intuition here that effortful attention is related to the productive abilities of humanity, our ability to live in greater security and with greater thought for the future than animals do. We don’t usually “think on purpose”, but when we do, it matters a lot.

We should be thinking of “being conscious by choice” more as a sort of weird Bene Gesserit witchcraft than as either the default state or as an irrelevant aberration. It is neither the whole of cognition, nor is it unimportant — it is a special power, and we don’t know how it works.