Sam Harris and the Is–Ought Gap

Many secular materialists are puzzled by Sam Harris's frequent assertion that science can bridge Hume's is–ought gap. Indeed, bafflement abounds on both sides whenever he debates his "bridge" with other materialists. Neither side can understand how the other fails to grasp elementary and undeniable points. This podcast conversation with the physicist Sean Carroll provides a vivid yet amicable demonstration.

I believe that this mutual confusion is a consequence of two distinct but unspoken ways of thinking about idealized moral argumentation. I'll call these two ways logical and dialectical.

Roughly, logical argumentation is focused on logical proofs of statements. Dialectical argumentation is geared towards rational persuasion of agents. These two different approaches lead to very different conclusions about what kinds of statements are necessary in rigorous moral arguments. In particular, the is–ought gap is unavoidable when you take the logical point of view. But this gap evaporates when you take the dialectical point of view.[1]

I won't be arguing for one of these views over the other. My goal is rather to dissolve disagreement. I believe that properly understanding these two views will render a lot of arguments unnecessary.

Logical moral argumentation

Logical argumentation, in the sense in which I'm using the term here, is focused on finding rigorous logical proofs of moral statements. The reasoning proceeds by logical inference from premises to conclusion. The ideal model is something like a theory in mathematical logic, with all conclusions proved from a basic set of axioms using just the rules of logic.

People who undertake moral argumentation with this ideal in mind envision a theory that can express "is" statements, but which also contains an "ought" symbol. Under suitable circumstances, the theory proves "is" statements like "You are pulling the switch that diverts the trolley" and "If you pull the switch, the trolley will be diverted." But what makes the theory moral is that it can also prove "ought" statements like "You ought to pull the switch that diverts the trolley."[2]

Now, this "ought" symbol could appear in the ideal formal theory in one of only two ways: Either the "ought" symbol is an undefined symbol appearing among the axioms, or the "ought" symbol is subsequently defined in terms of the more-primitive "is" symbols used to express the axioms.[3]
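
To make these two routes concrete, here is a minimal sketch of what such a theory might look like in first-order notation. The predicate names SavesFive and Ought and the welfare function W are hypothetical illustrations of mine, not part of any theory that Harris or Carroll actually endorses.

    % Route 1: "ought" as an undefined primitive, constrained by an axiom.
    %   SavesFive(a): action a saves the five people   (an "is" predicate)
    %   Ought(a):     action a ought to be done        (primitive symbol)
    \forall a \,\bigl(\mathrm{SavesFive}(a) \rightarrow \mathrm{Ought}(a)\bigr)

    % Route 2: "ought" introduced by definition from "is" vocabulary,
    % for example from a purely descriptive welfare function W:
    \mathrm{Ought}(a) \;:\Leftrightarrow\; \forall b \,\bigl(W(a) \ge W(b)\bigr)

    % On either route, "You ought to pull the switch" becomes a theorem
    % only once the axiom (Route 1) or the definition (Route 2) is granted.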

When Harris claims to be able to bridge the is–ought gap in purely scientific terms, many listeners think that he's claiming to do so from this "logical argumentation" point of view. In that case, such a bridge would be successful only if every possible scientifically competent agent would accept the axioms of the theory used. In particular, "accepting" a statement that includes the "ought" symbol would mean something like "actually being motivated to do what the statement says that one 'ought' to do, at least in the limit of ideal reflection".

But, on these terms, the is–ought gap is unavoidable: No moral theory can be purely scientific in this sense. For, however "ought" is defined by a particular sequence of "is" symbols, there is always a possible scientifically competent agent who is not motivated by "ought" so defined.[4]
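
In the notation of the sketch above (again my illustration, not anything from the podcast), the claim has this quantifier shape:

    % For any formula Phi(a) built solely from "is" vocabulary and used to
    % define the ought-symbol, Ought(a) :<=> Phi(a), there is some possible
    % scientifically competent agent G who is not moved by it:
    \forall \Phi \;\exists G \;\neg\,\mathrm{MotivatedBy}(G, \Phi)
    % "MotivatedBy(G, Phi)" abbreviates "G is disposed, in the limit of ideal
    % reflection, to do the actions a satisfying Phi(a)"; it is an informal
    % gloss, not a formula of the toy theory itself.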

Thus, from this point of view, no moral theory can bridge the is–ought gap by scientific means alone. Moral argumentation must always include an "ought" symbol, but the use of this symbol cannot be justified on purely scientific grounds. This doesn't mean that moral arguments can't be successful at all. It doesn't even mean that they can't be objectively right or wrong. But it does mean that their justification must rest on premises that go beyond the purely scientific.

This is Sean Carroll's point of view in the podcast conversation with Harris linked above. But Harris, I claim, could not understand Carroll's argument, and Carroll in turn could not understand Harris's, because Harris is coming from the dialectical point of view.

Dialectical moral argumentation

Dialectical moral argumentation is not modeled on logical proof. Rather, it is modeled on rational persuasion. The ideal context envisioned here is a conversation between rational agents in which one of the agents is persuading the other to do something. The persuader proceeds from assertion to assertion until the listener is persuaded to act.[5]

But here is the point: Such arguments shouldn't include an "ought" symbol at all! At least, not ideally.

By way of analogy, suppose that you're trying to convince me to eat some ice cream. (This is not a moral argument, which is why this is only an analogy.) Then obviously you can't use "You should eat ice cream" as an axiom, because that would be circular. But, more to the point, you wouldn't even have to use that statement in the course of your argument. Instead, ideally, your argument would just be a bunch of "is" facts about the ice cream (cold, creamy, sweet, and so on). If the ice cream has chocolate chips, and you know that I like chocolate chips, you will tell me facts about the chocolate chips (high in quantity and quality, etc.). But there's no need to add, "And you should eat chocolate chips."

Instead, you will just give me all of those "is" facts about the ice cream, maybe draw some "is" implications, and then rely on my internal motivational drives to find those facts compelling. If the "is" facts alone aren't motivating me, then something has gone wrong with the conversation. Either you as the persuader failed to pick facts that will motivate me, or I as the listener failed to properly understand the facts that you picked.

Now, practically speaking, when you attempt to persuade me to X, you might find it helpful to say things like "You ought to X". But, ideally, this usage of "ought" should serve just as a sort of signpost to help me to follow the argument, not as an essential part of the argument itself. For instance, you might use "ought" as a framing device: "I'm about to convince you that you ought to X." Or: "Remember, I already convinced you that you ought to X. Now I'm going to convince you that doing X requires doing Y."

But an ideal argument wouldn't need any such signposts. You would just convince me of certain facts about the world, and then you'd leave it to my internal motivational drives to do the rest: to induce me to act as you desired on the basis of the facts that you showed me.

Put another way, if you're trying to persuade me to X, then you shouldn't have to tell me explicitly that doing X would be good. If you have to say that, then the "is" facts about X must not actually be motivating to me. But, in that case, just telling me that doing X would be good isn't going to convince me, so your argument has failed.

Likewise, if the statement "Doing X would cause Y" isn't already motivating, then the statement "Doing X would cause Y, and Y would be good" shouldn't be motivating either, at least not ideally. If you're doing your job right, you've already picked a Y such that the "is" statement "Doing X would cause Y" motivates me directly, or entails another "is" statement that will motivate me directly. So adding "and Y would be good" shouldn't be telling me anything useful. It would be at best a rhetorical flourish, and so not a part of ideal argumentation.

Thus, from this point of view, there really is a sense in which "ought" reduces to "is". The is–ought gap vanishes! Wherever "ought" appears, it will be found, on closer inspection, to be unnecessary. All of the rational work in the argument is done purely by "is". Of course, crucial work is also done by my internal motivational structure. Without that, your "is" statements couldn't have the desired effect. But that structure isn't part of your argument. In the argument itself, it's all just "is… is… is…", all the way down.[6]

This, I take it, is Sam Harris's implicit point of view. Or, at least, it should be.


Footnotes

[1] I am not saying that these views of moral argumentation exhaust all of the possibilities. I'm not even saying that they are mutually exclusive in practice. Normally, people slide among such views as the needs of the situation require. But I think that some people tend to get needlessly locked into one view when they try to think abstractly about what counts as a valid and rigorous moral argument.

[2] From the logical point of view, all moral argumentation is first and foremost about the assertions that we can prove. Argumentation is not directly about action. There is only an indirect connection to action in the sense that arguments can prove assertions about actions, like "X ought to be done". Furthermore, this "ought" must be explicit. Otherwise, you're just proving "is" statements.

[3] Analogously, you can get the addition symbol "+" in a theory of arithmetic in two ways. On the one hand, in first-order Peano arithmetic, the "+" symbol is undefined, but it appears in the axioms, which govern its behavior. On the other hand, in the original second-order Peano axioms, there was no "+" symbol. Instead, there was only a successor-of symbol. But one may subsequently introduce "+" by defining it in terms of the successor-of symbol using second-order logic.
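
For readers who want that analogy spelled out (this is standard textbook material rather than anything specific to this post), the recursion equations for addition look like this:

    % First-order route: "+" is a primitive symbol, and the recursion
    % equations are axioms of the theory:
    \forall n\,(n + 0 = n)
    \qquad
    \forall n\,\forall m\,\bigl(n + S(m) = S(n + m)\bigr)

    % Second-order route: "+" is not primitive. Second-order logic can
    % prove that exactly one binary function f satisfies the recursion
    % equations, and "+" is then defined as that unique f:
    \exists! f\,\Bigl[\forall n\,\bigl(f(n, 0) = n\bigr) \;\wedge\;
        \forall n\,\forall m\,\bigl(f(n, S(m)) = S(f(n, m))\bigr)\Bigr]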

[4] Some might dispute this, especially regarding the meaning of the phrase "possible scientifically competent agent". But this is not the crux of the disagreement that I'm trying to dissolve. [ETA: See this comment.]

[5] Here I mean "persuaded" in the sense of "choosing to act out of a sense of moral conviction", rather than out of considerations of taste or whatever.

[6] ETA: The phrases "internal motivational drives" and "internal motivational structure" do not refer to statements, for example to statements about what is good that I happen to believe. Those phrases refer instead to how I act upon beliefs, to the ways in which different beliefs have different motivational effects on me.

The point is: This unspoken "internal" work is not being done by still more statements, and certainly not by "ought" statements. Rather, it's being done by the manner in which I am constituted so as to do particular things once I accept certain statements.

Eliezer Yudkowsky discussed this distinction at greater length in Created Already In Motion, where he contrasts "data" and "dynamics". (Thanks to dxu for making this connection.)