Dennett’s “Consciousness Explained”: Prelude

I’m starting Dennett’s “Consciousness Explained”. Dennett says, in the introduction, that he believes he has solved the problem of consciousness. Since several people have referred to his work here with approval, I’m going to give it a go. I’m going to post chapter summaries as I read, for my own selfish benefit, so that you can point out when you disagree with my understanding of it. “D” will stand for Dennett.

If you loathe the C-word, just stop now. That’s what the convenient break just below is for. You are responsible for your own wasted time if you proceed.

Chpt. 1: Prelude: How are Hallucinations Possible?

D describes the brain in a vat, and asks how we can know we aren’t brains in vats. This dismays me, as it is one of those questions that distract people trying to talk about consciousness while having nothing to do with the difficult problems of consciousness.

Dennett states, without presenting a single number, that the bandwidth needed to reproduce our sensory experience would be so great that it is impossible (his actual word), and that this proves we are not brains in vats. Sigh.

He then asks how hallucinations are possible: “How on earth can a single brain do what teams of scientists and computer animators would find to be almost impossible?” Sigh again. This is surprising to Dennett because he believes he has just established that the bandwidth needs for consciousness are too great for any computer to provide; yet the brain sometimes (during hallucinations) provides nearly that much bandwidth. D has apparently forgotten that the brain provides exactly, by definition, the consciousness bandwidth of information to us all the time.

D recounts Descartes’ remarkably prescient discussion of the bell-pull as an analogy for how the brain could send us phantom misinformation, but dismisses it, saying, “there is no way the brain as illusionist could store and manipulate enough false information to fool an inquiring mind.” Sigh. Now not only consciousness, but also dreams, are impossible. However, D then comes back to dreams, and is aware that they exist and are hallucinations; so one of us is misunderstanding this section.

On p. 12 he suggests something interesting: Perception is driven both bottom-up (from the senses) and top-down (from our expectations). A hallucination could happen when the bottom-up channel is cut off. D doesn’t get into data compression at all, but I think a better way to phrase this is that, given arbitrary bottom-up data, the mind can decompress sensory input into the most likely interpretation given the data and given its knowledge about the world. Internally, we should expect that high-bandwidth sensory data is summarized somewhere in a compressed form. Compressed data necessarily looks more random than it did prior to compression. This means that, somewhere inside the mind, we should expect it to be harder than naive introspection suggests to distinguish between true sensory data and random sensory noise. D suggests an important role for an adjustable sensitivity threshold for accepting/rejecting suggested interpretations of sense data.
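The claim that “compressed data necessarily looks more random” is easy to check empirically. Here is a minimal sketch (my illustration, nothing to do with Dennett’s text) using Python’s standard-library `zlib`: it compresses some highly repetitive “sensory” data and compares the byte-level Shannon entropy before and after. The input string and the entropy function are my own choices for the demonstration.

```python
import math
import zlib
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of the byte distribution, in bits per byte (max 8.0)."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Highly structured, predictable "sensory" data: a repeated sentence.
raw = b"the quick brown fox jumps over the lazy dog " * 200
compressed = zlib.compress(raw, 9)

print(f"raw:        {len(raw):6d} bytes, entropy {byte_entropy(raw):.2f} bits/byte")
print(f"compressed: {len(compressed):6d} bytes, entropy {byte_entropy(compressed):.2f} bits/byte")
```

The compressed stream is far shorter, and its bytes are distributed much more uniformly (higher entropy per byte), i.e. it looks more like random noise, which is the property the paragraph above leans on.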

D dismisses Freud’s ideas about dreams—that they are stories about our current concerns, hidden under symbolism in order to sneak past our internal censors—by observing that we should not posit homunculi inside our brains who are smarter than we are.

[In summary, this chapter contained some bone-headed howlers, and some interesting things; but on the whole, it makes me doubt that D is going to address the problem of consciousness. He seems, instead, on a trajectory to try to explain how a brain can produce intelligent action. It sounds like he plans to talk about the architecture of human intelligence, although he does promise to address qualia in part III.

Repeatedly on LW, I’ve seen one person (frequently Mitchell Porter) raise the problem of qualia, and seen otherwise-intelligent people reply by saying science has got it covered, consciousness is a property of physical systems, nothing to worry about. For some reason, a lot of very bright people cannot see that consciousness is a big, strange problem. Not intelligence, not even assigning meaning to representations, but consciousness. It is a different problem. (A complete explanation of how intelligence and symbol-grounding take place in humans might concomitantly explain consciousness; it does not follow, as most people seem to think it does, that demonstrating a way to account for non-human intelligence and symbol-grounding therefore accounts for consciousness.)

Part of the problem is their theistic opponents, who hopelessly muddle intelligence, consciousness, and religion: “A computer can never write a symphony. Therefore consciousness is metaphysical; therefore I have a soul; therefore there is life after death.” I think this line of reasoning has been presented to us all so often that a lot of us have cached it, to the extent that it injects itself into our own reasoning. People on LW who try to elucidate the problem of qualia inevitably get dismissed as quasi-theists, because, historically, all of the people saying things that sound similar were theists.

At this point, I suspect that Dennett has contributed to this confusion, by writing a book about intelligence and claiming not just that it’s about consciousness, but that it has solved the problem. I shall see.]