Dissolving the Question

“If a tree falls in the forest, but no one hears it, does it make a sound?”

I didn’t answer that question. I didn’t pick a position, “Yes!” or “No!”, and defend it. Instead I went off and deconstructed the human algorithm for processing words, even going so far as to sketch an illustration of a neural network. At the end, I hope, there was no question left—not even the feeling of a question.

Many philosophers—particularly amateur philosophers, and ancient philosophers—share a dangerous instinct: If you give them a question, they try to answer it.

Like, say, “Do we have free will?”

The dangerous instinct of philosophy is to marshal the arguments in favor, and marshal the arguments against, and weigh them up, and publish them in a prestigious journal of philosophy, and so finally conclude: “Yes, we must have free will,” or “No, we cannot possibly have free will.”

Some philosophers are wise enough to recall the warning that most philosophical disputes are really disputes over the meaning of a word, or confusions generated by using different meanings for the same word in different places. So they try to define very precisely what they mean by “free will”, and then ask again, “Do we have free will? Yes or no?”

A philosopher wiser yet may suspect that the confusion about “free will” shows the notion itself is flawed. So they pursue the Traditional Rationalist course: They argue that “free will” is inherently self-contradictory, or meaningless because it has no testable consequences. And then they publish these devastating observations in a prestigious philosophy journal.

But proving that you are confused may not make you feel any less confused. Proving that a question is meaningless may not help you any more than answering it.

The philosopher’s instinct is to find the most defensible position, publish it, and move on. But the “naive” view, the instinctive view, is a fact about human psychology. You can prove that free will is impossible until the Sun goes cold, but this leaves an unexplained fact of cognitive science: If free will doesn’t exist, what goes on inside the head of a human being who thinks it does? This is not a rhetorical question!

It is a fact about human psychology that people think they have free will. Finding a more defensible philosophical position doesn’t change, or explain, that psychological fact. Philosophy may lead you to reject the concept, but rejecting a concept is not the same as understanding the cognitive algorithms behind it.

You could look at the Standard Dispute over “If a tree falls in the forest, and no one hears it, does it make a sound?”, and you could do the Traditional Rationalist thing: Observe that the two sides don’t disagree on any point of anticipated experience, and triumphantly declare the argument pointless. That happens to be correct in this particular case; but, as a question of cognitive science, why did the arguers make that mistake in the first place?

The key idea of the heuristics and biases program is that the mistakes we make often reveal far more about our underlying cognitive algorithms than our correct answers do. So (I asked myself, once upon a time) what kind of mind design corresponds to the mistake of arguing about trees falling in deserted forests?

The cognitive algorithms we use are the way the world feels. And these cognitive algorithms may not have a one-to-one correspondence with reality—not even macroscopic reality, to say nothing of the true quarks. There can be things in the mind that cut skew to the world.

For example, there can be a dangling unit in the center of a neural network, which does not correspond to any real thing, or any real property of any real thing, existent anywhere in the real world. This dangling unit is often useful as a shortcut in computation, which is why we have such units. (Metaphorically speaking. Human neurobiology is surely far more complex.)
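To make this concrete, here is a minimal sketch of that kind of network: observable edge units clamped to known values, feeding a single central unit. It is written in Python, and every feature name and number in it is invented for illustration; it is a toy, not a claim about actual neurobiology.

```python
import math

def sigmoid(x: float) -> float:
    """Squash a weighted sum into a (0, 1) activation."""
    return 1.0 / (1.0 + math.exp(-x))

# Observable edge units, clamped to known values.
# Feature names and all numbers below are invented for illustration.
edges = {
    "acoustic_vibrations": 1.0,  # the falling tree does shake the air
    "auditory_experience": 0.0,  # no listener, so nothing is heard
}

# A central "sound?" unit that summarizes the edges through made-up weights.
weights = {"acoustic_vibrations": 2.0, "auditory_experience": 2.0}
bias = -2.0

central = sigmoid(sum(weights[k] * v for k, v in edges.items()) + bias)

# Every observable is already clamped; the central unit's activation is a
# derived summary, not one more fact about the forest.
print(f"central 'sound?' unit activation: {central:.2f}")
```

With these made-up numbers, the edges are fully observed and yet the central unit sits at exactly 0.5, poised between firing and not firing, though no further observation could ever move it.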

This dangling unit feels like an unresolved question, even after every answerable query is answered. No matter how much anyone proves to you that no difference of anticipated experience depends on the question, you’re left wondering: “But does the falling tree really make a sound, or not?”

But once you understand in detail how your brain generates the feeling of the question—once you realize that your feeling of an unanswered question corresponds to an illusory central unit wanting to know whether it should fire, even after all the edge units are clamped at known values—or better yet, you understand the technical workings of Naive Bayes—then you’re done. Then there’s no lingering feeling of confusion, no vague sense of dissatisfaction.
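For the Naive Bayes version, here is a similarly hedged sketch: a latent category variable generating observable features, with all the probabilities invented for illustration.

```python
# A toy Naive Bayes model: a latent category C ("sound" vs. "no_sound")
# generating two observable features. All numbers are invented.
prior = {"sound": 0.5, "no_sound": 0.5}
likelihood = {
    "sound":    {"vibrations": 0.99, "heard": 0.90},
    "no_sound": {"vibrations": 0.01, "heard": 0.01},
}

observed = {"vibrations": True, "heard": False}  # tree falls, no listener

def posterior(prior, likelihood, observed):
    # Naive Bayes: P(C | features) is proportional to P(C) times the
    # product of P(feature | C) over the observed features.
    scores = {}
    for c, p_c in prior.items():
        score = p_c
        for feat, val in observed.items():
            p_true = likelihood[c][feat]
            score *= p_true if val else (1.0 - p_true)
        scores[c] = score
    z = sum(scores.values())
    return {c: s / z for c, s in scores.items()}

print(posterior(prior, likelihood, observed))
```

The printed posterior is pure bookkeeping over the clamped observables; asking which category the tree-fall “really” belongs to yields no further prediction about the world.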

If there is any lingering feeling of a remaining unanswered question, or of having been fast-talked into something, then this is a sign that you have not dissolved the question. A vague dissatisfaction should be as much warning as a shout. Really dissolving the question doesn’t leave anything behind.

A triumphant thundering refutation of free will, an absolutely unarguable proof that free will cannot exist, feels very satisfying—a grand cheer for the home team. And so you may not notice that—as a point of cognitive science—you do not have a full and satisfactory descriptive explanation of how each intuitive sensation arises, point by point.

You may not even want to admit your ignorance of this point of cognitive science, because that would feel like a score against Your Team. In the midst of smashing all foolish beliefs in free will, it would seem like a concession to the opposing side to concede that you’ve left anything unexplained.

And so, perhaps, you’ll come up with a just-so evolutionary-psychological argument that hunter-gatherers who believed in free will were more likely to take a positive outlook on life, and so outreproduce other hunter-gatherers—to give one example of a completely bogus explanation. If you say this, you are arguing that the brain generates an illusion of free will—but you are not explaining how. You are trying to dismiss the opposition by deconstructing its motives—but in the story you tell, the illusion of free will is a brute fact. You have not taken the illusion apart to see the wheels and gears.

Imagine that in the Standard Dispute about a tree falling in a deserted forest, you first prove that no difference of anticipation exists, and then go on to hypothesize, “But perhaps people who said that arguments were meaningless were viewed as having conceded, and so lost social status, so now we have an instinct to argue about the meanings of words.” That’s arguing that, or explaining why, a confusion exists. Now look at the neural network structure in Feel the Meaning. That’s explaining how, disassembling the confusion into smaller pieces which are not themselves confusing. See the difference?

Coming up with good hypotheses about cognitive algorithms (or even hypotheses that hold together for half a second) is a good deal harder than just refuting a philosophical confusion. Indeed, it is an entirely different art. Bear this in mind, and you should feel less embarrassed to say, “I know that what you say can’t possibly be true, and I can prove it. But I cannot write out a flowchart which shows how your brain makes the mistake, so I’m not done yet, and will continue investigating.”

I say all this because it sometimes seems to me that at least 20% of the real-world effectiveness of a skilled rationalist comes from not stopping too early. If you keep asking questions, you’ll get to your destination eventually. If you decide too early that you’ve found an answer, you won’t.

The challenge, above all, is to notice when you are confused—even if it just feels like a little tiny bit of confusion—and even if there’s someone standing across from you, insisting that humans have free will, and smirking at you, and the fact that you don’t know exactly how the cognitive algorithms work has nothing to do with the searing folly of their position...

But when you can lay out the cognitive algorithm in sufficient detail that you can walk through the thought process, step by step, and describe how each intuitive perception arises—decompose the confusion into smaller pieces not themselves confusing—then you’re done.

So be warned that you may believe you’re done, when all you have is a mere triumphant refutation of a mistake.

But when you’re really done, you’ll know you’re done. Dissolving the question is an unmistakable feeling—once you experience it. And having experienced it, resolve not to be fooled again. Those who dream do not know they dream, but when you wake you know you are awake.

Which is to say: When you’re done, you’ll know you’re done, but unfortunately the reverse implication does not hold.

So here’s your homework problem: What kind of cognitive algorithm, as felt from the inside, would generate the observed debate about “free will”?

Your assignment is not to argue about whether people have free will, or not.

Your assignment is not to argue that free will is compatible with determinism, or not.

Your assignment is not to argue that the question is ill-posed, or that the concept is self-contradictory, or that it has no testable consequences.

You are not asked to invent an evolutionary explanation of how people who believed in free will would have reproduced; nor an account of how the concept of free will seems suspiciously congruent with bias X. Such are mere attempts to explain why people believe in “free will”, not to explain how.

Your homework assignment is to write a stack trace of the internal algorithms of the human mind as they produce the intuitions that power the whole damn philosophical argument.

This is one of the first real challenges I tried as an aspiring rationalist, once upon a time. One of the easier conundrums, relatively speaking. May it serve you likewise.