Realism and Rationality

Format warning: This post has somehow ended up consisting primarily of substantive endnotes. It should be fine to read just the (short) main body without looking at any of the endnotes, though. The endnotes elaborate on various claims and distinctions and also include a much longer discussion of decision theory.

Thank you to Pablo Stafforini, Phil Trammell, Johannes Treutlein, and Max Daniel for comments on an initial draft. I have also slightly edited the post since I first published it, to try to make a few points clearer.

When discussing normative questions, it is not uncommon for members of the rationalist community to identify as anti-realists. But normative anti-realism seems to me to be in tension with some of the community’s core interests, positions, and research activities. In this post I suggest that the cost of rejecting realism may be larger than is sometimes recognized.[1]

1. Realism and Anti-Realism

Everyone is, at least sometimes, inclined to ask: “What should I do?”

We ask this question when we’re making a decision and it seems like there are different considerations to be weighed up. You might be considering taking a new job in a new city, for example, and find yourself wondering how to balance your preferences with those of your significant other. You might also find yourself thinking about whether you have any obligation to do impactful work, about whether it’s better to play it safe or take risks, about whether it’s better to be happy in the moment or to be able to look back with satisfaction, and so on. It’s almost inevitable that in a situation like this you will find yourself asking “What should I do?” and reasoning about it as though the question has an answer you can approach through a certain kind of directed thought.[2]

But it’s also conceivable that this sort of question doesn’t actually have an answer. Very roughly, at least to certain philosophers, realism is a name for the view that there are some things that we should do or think. Anti-realism is a name for the view that there are not.[3][4][5][6]

2. Anti-Realism and the Rationality Community

In discussions of normative issues, it seems not uncommon for members of the rationalist community to identify as “anti-realists.” Since people in different communities can obviously use the same words to mean different things, I don’t know what fraction of rationalists have the same thing in mind when they use the term “anti-realism.”

To the extent people do have the same thing in mind, though, I find anti-realism hard to square with a lot of other views and lines of research that are popular within the community. A few main points of tension stand out to me.

2.1 Normative Uncertainty

A first point of tension is the community’s relatively strong interest in the subject of normative uncertainty. At least as it’s normally discussed in the philosophy literature, normative uncertainty is uncertainty about normative facts that bear on what we should do. If we assume that anti-realism is true, though, then we are assuming that there are no such facts. It seems to me like a committed anti-realist could not be in a state of normative uncertainty.

It may still be the case, as Sepielli (2012) suggests, that a committed anti-realist can experience psychological states that are interestingly structurally analogous to states of normative uncertainty. However, Bykvist and Olson (2012) disagree (in my view) fairly forcefully, and Sepielli is in any case clear that: “Strictly speaking, there cannot be such a thing as normative uncertainty if non-cognitivism [the dominant form of anti-realism] is true.”[7]

2.2 Strongly Endorsed Normative Views

A second point of tension is the existence of a key set of normative claims that a large portion of the community seems to treat as true.

One of these normative claims is the Bayesian claim that we ought to have degrees of belief in propositions that are consistent with the Kolmogorov probability axioms and that are updated in accordance with Bayes’ rule. It seems to me like very large portions of the community self-identify as Bayesians and regard other ways of assigning and updating degrees of belief in propositions as not just different but incorrect.
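
As a minimal sketch of the norm in question (the hypothesis, prior, and likelihoods below are invented purely for concreteness; nothing here comes from the post itself), a Kolmogorov-consistent credence update via Bayes’ rule looks like this:

```python
# Sketch of the Bayesian updating norm described above.
# All numbers are hypothetical.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(H | E) given P(H), P(E | H), and P(E | not-H)."""
    p_evidence = prior * likelihood_if_true + (1 - prior) * likelihood_if_false
    return prior * likelihood_if_true / p_evidence

# A 1% prior on a hypothesis, then evidence that is 90% likely if the
# hypothesis is true and 5% likely if it is false:
print(bayes_update(0.01, 0.90, 0.05))  # ~0.154
```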

Another of these normative claims is the subjectivist claim that we should do whatever would best fulfill some version of our current preferences. To learn what we should do, on this view, the main thing is to introspect about our own preferences.[8] Whether or not a given person should commit a violent crime, for instance, depends purely on whether they want to commit the crime (or perhaps on whether they would want to commit it if they went through some particular process of reflection).

A further elaboration on this claim is that, when we are uncertain about the outcomes of our actions, we should more specifically act to maximize the expected fulfillment of our desires. We should consider the different possible outcomes of each action, assign them probabilities, assign them desirability ratings, and then use the expected value formula to rate the overall goodness of the action. Whichever action has the best overall rating is the one we should take.
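
To make the decision rule concrete, here is a small sketch of expected-fulfillment maximization (the actions, probabilities, and desirability ratings are invented for illustration):

```python
# Sketch of the expected-fulfillment rule described above: weight each
# outcome's desirability by its probability, then take the best action.
# All numbers are hypothetical.

actions = {
    "take the new job": [(0.6, 8.0), (0.4, 2.0)],  # (probability, desirability)
    "keep the old job": [(0.9, 5.0), (0.1, 3.0)],
}

def expected_fulfillment(outcomes):
    return sum(p * d for p, d in outcomes)

best = max(actions, key=lambda a: expected_fulfillment(actions[a]))
print(best)  # "take the new job": 5.6 vs. 4.8
```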

One possible way of squaring an endorsement of anti-realism with an apparent endorsement of these normative claims is to argue that people don’t actually have normative claims in mind when they write and talk about these issues. Non-cognitivists—a particular variety of anti-realists—argue that many utterances that seem at first glance like claims about normative facts are in fact nothing more than expressions of attitudes. For instance, an emotivist—a further sub-variety of non-cognitivist—might argue that the sentence “You should maximize the expected fulfillment of your current desires!” is simply a way of expressing a sense of fondness toward this course of action. The sentence might be cashed out as being essentially equivalent in content to the sentence, “Hurrah, maximizing the expected fulfillment of your current desires!”

Although a sizeable portion of philosophers are non-cognitivists, I generally don’t find non-cognitivism very plausible as a theory of what people are trying to do when they seem to make normative claims.[9] In this case it doesn’t feel to me like most members of the rationalist community are just trying to describe one particular way of thinking and acting, which they happen to prefer to others. It seems to me, rather, that people often talk about updating your credences in accordance with Bayes’ rule and maximizing the expected fulfillment of your current desires as the correct things to do.

One more thing that stands out to me is that arguments for anti-realism often seem to be presented as though they implied (rather than negated) the truth of some of these normative claims. For example, the popular “Replacing Guilt” sequence on Minding Our Way seems to me to repeatedly attack normative realism. It rejects the idea of “shoulds” and points out that there aren’t “any oughtthorities to ordain what is right and what is wrong.” But then it seems to draw normative implications out of these attacks: among other implications, you should “just do what you want.” At least taken at face value, this line of reasoning wouldn’t be valid. It makes no more sense than reasoning that, if there are no facts about what we should do, then we should “just maximize total hedonistic well-being” or “just do the opposite of what we want” or “just open up souvenir shops.” Of course, though, there’s a good chance that I’m misunderstanding something here.

2.3 Decision Theory Research

A third point of tension is the community’s engagement with normative decision theory research. Different normative decision theories pick out different necessary conditions for an action to be the one that a given person should take, with a focus on how one should respond to uncertainty (rather than on what ends one should pursue).[10][11]

A typical version of CDT says that the action you should take at a particular point in time is the one that would cause the largest expected increase in value (under some particular framework for evaluating causation). A typical version of EDT says that the action you should take at a particular point in time is the one that would, once you take it, allow you to rationally expect the most value. There are also alternative versions of these theories—for instance, versions using risk-weighted expected value maximization or the criterion of stochastic dominance—that break from the use of pure expected value.
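
The two theories come apart most famously in Newcomb’s problem. Here is a rough sketch of the disagreement (the payoffs and the predictor’s 99% accuracy are standard illustrative numbers, not taken from the papers discussed below):

```python
# Newcomb's problem: a highly accurate predictor fills the opaque box with
# $1,000,000 only if it predicted one-boxing; the clear box always holds
# $1,000. Illustrative numbers.

ACCURACY = 0.99
SMALL, BIG = 1_000, 1_000_000

def edt_value(action):
    # EDT: taking the action is evidence about what was predicted.
    p_full = ACCURACY if action == "one-box" else 1 - ACCURACY
    return (SMALL if action == "two-box" else 0) + p_full * BIG

def cdt_value(action, p_full):
    # CDT: the box's contents are already fixed and cannot be caused by the
    # action, so p_full is the same whichever action is taken.
    return (SMALL if action == "two-box" else 0) + p_full * BIG

print(edt_value("one-box"), edt_value("two-box"))            # ~990000 vs ~11000
print(cdt_value("one-box", 0.5), cdt_value("two-box", 0.5))  # 500000.0 vs 501000.0
```

On these numbers, EDT recommends one-boxing, while CDT recommends two-boxing for any fixed credence that the opaque box is full.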

I’ve pretty frequently seen it argued within the community (e.g. in the papers “Cheating Death in Damascus” and “Functional Decision Theory”) that CDT and EDT are not “correct” and that some other new theory such as functional decision theory is. But if anti-realism is true, then no decision theory is correct.

Eliezer Yudkowsky’s influential early writing on decision theory seems to me to take an anti-realist stance. It suggests that we can only ask meaningful questions about the effects and correlates of decisions. For example, in the context of the Newcomb thought experiment, we can ask whether one-boxing is correlated with winning more money. But, it suggests, we cannot take a step further and ask what these effects and correlations imply about what it is “reasonable” for an agent to do (i.e. what they should do). This question—the one that normative decision theory research, as I understand it, is generally about—is seemingly dismissed as vacuous.

If this apparently anti-realist stance is widely held, then I don’t understand why the community engages so heavily with normative decision theory research or why it takes part in discussions about which decision theory is “correct.” It strikes me a bit like an atheist enthusiastically following theological debates about which god is the true god. But I’m mostly just confused here.[12][13]

3. Sympathy for Realism

I wouldn’t necessarily describe myself as a realist. I get that realism is a weird position. It’s both metaphysically and epistemologically suspicious. What is this mysterious property of “should-ness” that certain actions are meant to possess—and why would our intuitions about which actions possess it be reliable?[14][15]

But I am also very sympathetic to realism and, in practice, tend to reason about normative questions as though I were a full-throated realist. My sympathy for realism and tendency to think as a realist largely stem from my perception that if we reject realism and internalize this rejection then there’s really not much to be said or thought about anything. We can still express attitudes at one another, for example suggesting that we like certain actions or credences in propositions better than others. We can present claims about the world, without any associated explicit or implicit belief that others should agree with them or respond to them in any particular way. And that seems to be about it.

Furthermore, if anti-realism is true, then it can’t also be true that we should believe that anti-realism is true. Belief in anti-realism seems to undermine itself. Perhaps belief in realism is self-undermining in a similar way—if seemingly correct reasoning leads us to account for all the ways in which realism is a suspect position—but the negative feedback loop in this case at least seems to me to be less strong.[16]

I think that realism warrants more respect than it has historically received in the rationality community, at least relative to the level of respect it gets from philosophers.[17] I suspect that some of this lack of respect might come from a relatively weaker awareness of the cost of rejecting realism or of the way in which belief in anti-realism appears to undermine itself.


  1. I’m basing the views I express in this post primarily on Derek Parfit’s writing, specifically his book On What Matters. For this reason, it seems pretty plausible to me that there are some important points I’ve missed by reading too narrowly. In addition, it also seems likely that some of the ways in which I talk about particular issues around normativity will sound a bit foreign or just generally “off” to people who are highly familiar with some of these issues. One unfortunate reason for this is that the study of normative questions and of the nature of normativity seems to me to be spread out pretty awkwardly across the field of philosophy, with philosophers in different sub-disciplines often discussing apparently interconnected questions in significant isolation from one another while using fairly different terminology. This means that (e.g.) meta-ethics and decision theory are seldom talked about at the same time and are often talked about in ways that make it difficult to see how they fit together. A major reason I am leaning on Parfit’s work is that he is—to my knowledge—one of relatively few philosophers to have tried to approach questions around normativity through a single unified framework. ↩︎

  2. This is a point that is also discussed at length in David Enoch’s book Taking Morality Seriously (pgs. 70-73):

    Perhaps...we are essentially deliberative creatures. Perhaps, in other words, we cannot avoid asking ourselves what to do, what to believe, how to reason, what to care about. We can, of course, stop deliberating about one thing or another, and it’s not as if all of us have to be practical philosophers (well, if you’re reading this book, you probably are, but you know what I mean). It’s opting out of the deliberative project as a whole that may not be an option for us….

    [Suppose] law school turned out not to be all you thought it would be, and you no longer find the prospects of a career in law as exciting as you once did. For some reason you don’t seem to be able to shake off that old romantic dream of studying philosophy. It seems now is the time to make a decision. And so, alone, or in the company of some others you find helpful in such circumstances, you deliberate. You try to decide whether to join a law firm, apply to graduate school in philosophy, or perhaps do neither.

    The decision is of some consequence, and so you resolve to put some thought into it. You ask yourself such questions as: Will I be happy practicing law? Will I be happier doing philosophy? What are my chances of becoming a good lawyer? A good philosopher? How much money does a reasonably successful lawyer make, and how much less does a reasonably successful philosopher make? Am I, so to speak, more of a philosopher or more of a lawyer? As a lawyer, will I be able to make a significant political difference? How important is the political difference I can reasonably expect to make? How important is it to try and make any political difference? Should I give any weight to my father’s expectations, and to the disappointment he will feel if I fail to become a lawyer? How strongly do I really want to do philosophy? And so on. Even with answers to most – even all – of these questions, there remains the ultimate question. “All things considered”, you ask yourself, “what makes best sense for me to do? When all is said and done, what should I do? What shall I do?”

    When engaging in this deliberation, when asking yourself these questions, you assume, so it seems to me, that they have answers. These answers may be very vague, allow for some indeterminacy, and so on. But at the very least you assume that some possible answers to these questions are better than others. You try to find out what the (better) answers to these questions are, and how they interact so as to answer the arch-question, the one about what it makes most sense for you to do. You are not trying to create these answers. Of course, in an obvious sense what you will end up doing is up to you (or so, at least, both you and I are supposing here). And in another, less obvious sense, perhaps the answer to some of these questions is also up to you. Perhaps, for instance, how happy practicing law will make you is at least partly up to you. But, when trying to make up your mind, it doesn’t feel like just trying to make an arbitrary choice. This is just not what it is like to deliberate. Rather, it feels like trying to make the right choice. It feels like trying to find the best solution, or at least a good solution, or at the very least one of the better solutions, to a problem you’re presented with. What you’re trying to do, it seems to me, is to make the decision it makes most sense for you to make. Making the decision is up to you. But which decision is the one it makes most sense for you to make is not. This is something you are trying to discover, not create. Or so, at the very least, it feels like when deliberating.

    ↩︎
  3. Specifically, the two relevant views can be described as realism and anti-realism with regard to “normativity.” We can divide the domain of “normativity” up into the domains of “practical rationality,” which describes what actions people should take, and “epistemic rationality,” which describes which beliefs or degrees of belief people should hold. The study of ethics, decision-making under uncertainty, and so on can then all be understood as sub-components of the study of practical rationality. For example, one view on the study of ethics is that it is the study of how factors other than one’s own preferences might play roles in determining what actions one should take. It should be noted that terminology varies very widely, though. For example, different authors seem to use the word “ethics” more or less inclusively. The term “moral realism” also sometimes means roughly the same thing as “normative realism,” as I’ve defined it here, and sometimes picks out a more specific position. ↩︎

  4. As an edit to the initial post, I think it’s probably worth saying more about the concept of “moral realism” in relation to “normative realism.” Depending on the context, “moral realism” might be taken to refer to: (a) normative realism, (b) realism about practical rationality (not just epistemic rationality), (c) realism about practical rationality combined with the object-level belief that people should do more than just try to satisfy their own personal preferences, or (d) something else in this direction.

    One possible reason the term lacks a consensus definition is that, perhaps surprisingly, many contemporary “moral realists” aren’t actually very preoccupied with the concept of “morality.” Popular books like Taking Morality Seriously, On What Matters, and The Normative Web spend most of their energy defending normative realism, more broadly, and my impression is that their critics spend most of their energy attacking normative realism more broadly. One reason for this shift in focus toward normative realism is the realization that, on almost any conception of “moral realism,” nearly all of the standard metaphysical and epistemological objections to “moral realism” also apply just as well to normative realism in general. Another reason is that any possible distinction between moral and normative-but-not-moral facts doesn’t seem like it could have much practical relevance: If we know that we should take some action, then we know that we should take it; we have no obvious additional need to know or care whether this normative fact warrants the label “moral fact” or not. Here, for example, is David Enoch, in Taking Morality Seriously, on the concept of morality (pg. 86):

    What more...does it take for a normative truth (or falsehood) to qualify as moral? Morality is a particular instance of normativity, and so we are now in effect asking about its distinctive characteristics, the ones that serve to distinguish between the moral and the rest of the normative. I do not have a view on these special characteristics of the moral. In fact, I think that for most purposes this is not a line worth worrying about. The distinction within the normative between the moral and the non-moral seems to me to be shallow compared to the distinction between the normative and the non-normative—both philosophically, and, as I am about to argue, practically. (Once you know you have a reason to X and what this reason is, does it really matter for your deliberation whether it qualifies as a moral reason?)

    ↩︎
  5. There are two major strands of anti-realism. Error theory (sometimes equated with “nihilism”) asserts that all claims that people should do particular things or refrain from doing particular things are false. Non-cognitivism asserts that utterances of the form “A should do X” typically cannot even really be understood as claims; they’re not the sort of thing that could be true or false. ↩︎

  6. In this post, for simplicity, I’m talking about normativity using binary language. Either it’s the case that you “should” take an action or it’s not the case that you “should” take it. But we might also talk in less binary terms. For example, there may be some actions that you merely have “more reason” to take than others. ↩︎

  7. In Sepielli’s account, for example, the experience of feeling extremely in favor of blaming someone a little bit for taking an action X is analogous to the experience of being extremely confident that it is a little bit wrong to take action X. This account is open to at least a few objections, such as the objection that degrees of favorability don’t—at least at first glance—seem to obey the standard axioms of probability theory. Even if we do accept the account, though, I still feel unclear about the proper method and justification for converting debates around normative uncertainty into debates around these other kinds of psychological states. ↩︎

  8. If my memory is correct, one example of a context in which I have encountered this subjectivist viewpoint is in a CFAR workshop. One lesson instructs attendees that if it seems like they “should” do something, but then upon reflection they realize they don’t want to do it, then it’s not actually true that they should do it. ↩︎

  9. The PhilPapers survey suggests that about a quarter of both normative ethicists and applied ethicists also self-identify as anti-realists, with the majority of them presumably leaning toward non-cognitivism over error theory. It’s still an active matter of debate whether non-cognitivists have sensible stories about what people are trying to do when they seem to be discussing normative claims. For example, naive emotivist theories stumble in trying to explain sentences like: “It’s not true that either you should do X or you should do Y.” ↩︎

  10. There is also non-normative research that falls under the label “decision theory,” which focuses on exploring the ways in which people do in practice make decisions or neutrally exploring the implications of different assumptions about decision-making processes. ↩︎

  11. Arguably, even in academic literature, decision theories are often discussed under the implicit assumption that some form of subjectivism is true. However, it is also very easy to modify the theories to be compatible with views that tell you to take into account things beyond your current desires. Value might be equated with one’s future welfare, for example, or with the total future welfare of all conscious beings. ↩︎

  12. One thing that makes this issue a bit complicated is that rationalist community writing on decision theory sometimes seems to switch back and forth between describing decision theories as normative claims about decisions (which I believe is how academic philosophers typically describe decision theories) and as algorithms to be used (which seems to be inconsistent with how academic philosophers typically describe decision theories). I think this tendency to switch back and forth between describing decision theories in these two distinct ways can be seen both in papers proposing new decision theories and in online discussions. I also think this switching tendency can make things pretty confusing. Although it makes sense to discuss how an algorithm “performs” when “implemented,” once we specify a sufficiently precise performance metric, it does not seem to me to make sense to discuss the performance of a normative claim. I think the tendency to blur the distinction between algorithms and normative claims—or, as Will MacAskill puts it in his recent and similar critique, between “decision procedures” and “criteria of rightness”—partly explains why proponents of FDT and other new decision theories have not been able to get much traction with academic decision theorists. For example, causal decision theorists are well aware that people who always take the actions that CDT says they should take will tend to fare less well in Newcomb scenarios than people who always take the actions that EDT says they should take. Causal decision theorists are also well aware that there are some scenarios—for example, a Newcomb scenario with a perfect predictor and the option to get brain surgery to pre-commit yourself to one-boxing—in which there is no available sequence of actions such that CDT says you should take each of the actions in the sequence. If you ask a causal decision theorist what sort of algorithm you should (according to CDT) put into an AI system that will live in a world full of Newcomb scenarios, if the AI system won’t have the opportunity to self-modify, then I think it’s safe to say a causal decision theorist won’t tell you to put in an algorithm that only produces actions that CDT says it should take. This tells me that we really can’t fluidly switch back and forth between making claims about the correctness of normative principles and claims about the performance of algorithms, as though there were a canonical one-to-one mapping between these two sorts of claims. Insofar as rationalist writing on decision theory tends to do this sort of switching, I suspect that it contributes to confusion on the part of many academic readers. See also this blog post by an academic decision theorist, Wolfgang Schwarz, for a much more thorough perspective on why proponents of FDT may be having difficulty getting traction within the academic decision theory community. ↩︎

  13. A similar concern also leads me to assign low (p<10%) probability to normative decision theory research ultimately being useful for avoiding large-scale accidental harm caused by AI systems. It seems to me like the question “What is the correct decision theory?” only has an answer if we assume that realism is true. But even if we assume that realism is true, we are now asking a normative question (“What criterion determines whether an action is one an agent ‘should’ take?”) as a way of trying to make progress on a non-normative question (“What approaches to designing advanced AI systems result in unintended disasters and which do not?”). Proponents of CDT and proponents of EDT do not actually disagree on how any given agent will behave, on what the causal outcome of assigning an agent a given algorithm will be, or on what evidence might be provided by the choice to assign an agent a given algorithm; they both agree, for example, about how much money different agents will tend to earn in the classic Newcomb scenario. What decision theorists appear to disagree about is a separate normative question that floats above (or rather “supervenes” upon) questions about observed behavior or questions about outcomes. I don’t see how answering this normative question could help us much in answering the non-normative question of what approaches to designing advanced AI systems don’t (e.g.) result in global catastrophe. Put another way, my concern is that the strategy here seems to rely on the hope that we can derive an “is” from an “ought.”

    However, in keeping with the above endnote, community work on decision theory only sometimes seems to be pitched (as it is in the abstract of this paper) as an exploration of normative principles. It is also sometimes pitched as an exploration of how different “algorithms” “perform” across relevant scenarios. This exploration doesn’t seem to me to have any direct link to the core academic decision theory literature and, given a sufficiently specific performance metric, does not seem to be inherently normative. I’m actually more optimistic, then, about this line of research having implications for AI development. Nonetheless, for reasons similar to the ones described in the post “Decision Theory Anti-Realism,” I’m still not very optimistic. In the cases that are being considered, the answer to the question “Which algorithm performs best?” will depend on subtle variations in the set of counterfactuals we consider when judging performance; different algorithms come out on top for different sets of counterfactuals. For example, in a prisoner’s dilemma, the best-performing algorithm will vary depending on whether we are imagining a counterfactual world where just one agent was born with a different algorithm or a counterfactual world where both agents were born with different algorithms. It seems unclear to me where we go from here except perhaps to list several different sets of imaginary counterfactuals and note which algorithms perform best relative to them.
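
    A toy version of the prisoner’s dilemma point (the payoff matrix is the standard illustrative one, and the two “counterfactual conventions” are my own labels for the distinction drawn above):

    ```python
    # Row player's payoff in a one-shot prisoner's dilemma. Standard numbers.
    PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    # Convention 1: vary only my algorithm; the other agent keeps defecting.
    # Defecting scores 1 against cooperating's 0, so the defector "wins."
    print(PAYOFF[("D", "D")], PAYOFF[("C", "D")])  # 1 0

    # Convention 2: vary both agents' algorithms together. Mutual cooperation
    # scores 3 against mutual defection's 1, so the cooperator "wins."
    print(PAYOFF[("C", "C")], PAYOFF[("D", "D")])  # 3 1
    ```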

    Wolfgang Schwarz and Will MacAskill also make similar points, regarding the sensitivity of comparisons of algorithmic performance, in their essays on FDT. Schwarz writes:

    Yudkowsky and Soares constantly talk about how FDT “outperforms” CDT, how FDT agents “achieve more utility”, how they “win”, etc. As we saw above, it is not at all obvious that this is true. It depends, in part, on how performance is measured. At one place, Yudkowsky and Soares are more specific. Here they say that “in all dilemmas where the agent’s beliefs are accurate [??] and the outcome depends only on the agent’s actual and counterfactual behavior in the dilemma at hand—reasonable constraints on what we should consider “fair” dilemmas—FDT performs at least as well as CDT and EDT (and often better)”. OK. But how should we understand “depends on … the dilemma at hand”? First, are we talking about subjunctive or evidential dependence? If we’re talking about evidential dependence, EDT will often outperform FDT. And EDTers will say that’s the right standard. CDTers will agree with FDTers that subjunctive dependence is relevant, but they’ll insist that the standard Newcomb Problem isn’t “fair” because here the outcome (of both one-boxing and two-boxing) depends not only on the agent’s behavior in the present dilemma, but also on what’s in the opaque box, which is entirely outside her control. Similarly for all the other cases where FDT supposedly outperforms CDT. Now, I can vaguely see a reading of “depends on … the dilemma at hand” on which FDT agents really do achieve higher long-run utility than CDT/EDT agents in many “fair” problems (although not in all). But this is a very special and peculiar reading, tailored to FDT. We don’t have any independent, non-question-begging criterion by which FDT always “outperforms” EDT and CDT across “fair” decision problems.

    MacAskill writes:

    [A]rguing that FDT does best in a class of ‘fair’ problems, without being able to define what that class is or why it’s interesting, is a pretty weak argument. And, even if we could define such a class of cases, claiming that FDT ‘appears to be superior’ to EDT and CDT in the classic cases in the literature is simply begging the question: CDT adherents claims that two-boxing is the right action (which gets you more expected utility!) in Newcomb’s problem; EDT adherents claims that smoking is the right action (which gets you more expected utility!) in the smoking lesion. The question is which of these accounts is the right way to understand ‘expected utility’; they’ll therefore all differ on which of them do better in terms of getting expected utility in these classic cases.

    ↩︎
  14. In my view, the epistemological issues are the most severe ones. I think Sharon Street’s paper A Darwinian Dilemma for Realist Theories of Value, for example, presents an especially hard-to-counter attack on the realist position on epistemological grounds. She argues that, in light of the view that our brains evolved via natural selection, and natural selection did not and could not have directly selected for the accuracy of our normative intuitions, it is extremely difficult to construct a compelling explanation for why our normative intuitions should be correlated in any way with normative facts. This technically leaves open the possibility of there being non-trivial normative facts, without us having any way of perceiving or intuiting them, but this state of affairs would strike most people as absurd. Although some realists, including Parfit, have attempted to counter Street’s argument, I’m not aware of anyone who I feel has truly succeeded. Street’s argument pretty much just seems to work to me. ↩︎

  15. These metaphysical and epistemological issues become less concerning if we accept some version of “naturalist realism,” which asserts that all normative claims can be reduced into claims about the natural world (i.e. claims about physical and psychological properties) and therefore tested in roughly the same way we might test any other claim about the natural world. However, this view seems wrong to me.

    The bluntest objection to naturalist realism is what’s sometimes called the “just-too-different” objection. This is the objection that, to many and perhaps most people, normative claims are just obviously a different sort of claim. No one has ever felt any inclination to evoke an “is/is-made-of-wood divide” or an “is/is-illegal-in-Massachusetts divide,” because the property of being made of wood and the property of being illegal in Massachusetts are obviously properties of the standard (natural) kind. But references to the “is/ought divide”—or, equivalently, the distinction between the “positive” and the “normative”—are commonplace and don’t typically provoke blank stares. Normative discussions are, seemingly, about something above-and-beyond and distinct from discussions of the physical and psychological aspects of a situation. When people debate whether or not it’s “wrong” to support the death penalty or “wrong” for women to abort unwanted pregnancies, for example, it seems obvious that physical and psychological facts are typically not the core (or at least only) thing in dispute.

    G.E. Moore’s “Open Question Argument” elaborates on this objection. The argument also raises the point that, in many cases where we are inclined to ask “What should I do?”, it seems like what we are inclined to ask goes above-and-beyond any individual question we might ask about the natural world. Consider again the case where we are considering a career change and wondering what we should do. It seems like we could know all of the natural facts—facts like how happy we will be on average while pursuing each career, how satisfied we will feel looking back on each career, how many lives we could improve by donating money made in each career, what labor practices each company has, how disappointed our parents will be if we pursue each career, how our personal values will change if we pursue each career, what we would end up deciding at the end of one hypothetical deliberative process or another, etc.—and still retain the inclination to ask, “Given all this, what should I do?” This means that—insofar as we’re taking the realist stance that this question actually has a meaningful answer, rather than rejecting the question as vacuous—the claim that we “should” do one thing or another cannot easily be understood as a claim about the natural world. A set of claims about the natural world may support the claim that we should make a certain decision, but, in cases such as this one, it seems like no set of claims about the natural world is equivalent to the claim that we should make a certain decision.

    A last objection to mention is Parfit’s “Triviality Objection” (On What Matters, Section 95). The basic intuition behind Parfit’s objection is that pretty much any attempt to define the word “should” in terms of natural properties would turn many normative claims into puzzling assertions of either obvious tautologies or obvious falsehoods. For example, consider a man who is offered—at the end of his life, I guess by the devil or something—the option of undergoing a year of certain torture for a one-in-a-trillion chance of receiving a big prize: a trillion years of an equivalently powerful positive experience, plus a single lollipop. He is purely interested in experiencing pleasure and avoiding pain and would like to know whether he should take the offer. A decision theorist who endorses expected desire-fulfillment maximization says that he “should,” since the lollipop tips the offer over into having slightly positive expected value. A decision theorist who endorses risk aversion says he “should not,” since the man is nearly certain to be horribly tortured without receiving any sort of compensation. In this context, it’s hard to understand how we could redefine the claim “He should take action X” in terms of natural properties and have this disagreement make any sense. We could define the phrase as meaning “Action X maximizes expected fulfillment of desire,” but now the first decision theorist is expressing an obvious tautology and the second decision theorist is expressing an obvious falsehood. We could also try, in keeping with a suggestion by Eliezer Yudkowsky, to define the phrase as meaning “Action X is the one that someone acting in a winning way would take.” But this is obviously too vague to imply a particular action; taking the gamble is associated with some chance of winning and some chance of losing. We could make the definition more specific—for instance, saying “Action X is the one that someone acting in a way that maximizes expected winning would take”—but now of course we’re back in tautology mode. The apparent upshot, here, is that many normative claims simply can’t be interpreted as non-trivially true or non-trivially false claims about natural properties. The associated disagreements only become sensible if we interpret them as being about something above-and-beyond these properties.
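
    To spell out the arithmetic behind the first theorist’s verdict (my own gloss, writing $v$ for the magnitude of a year of torture or of the equivalently powerful positive experience, and $\epsilon$ for the tiny value of the lollipop):

    $$\mathrm{EV} = -v + 10^{-12}\,(10^{12}\,v + \epsilon) = 10^{-12}\,\epsilon > 0.$$

    The lollipop alone makes the expected value (barely) positive, which is why the expected-fulfillment theorist says “should”; the risk-averse theorist instead discounts the one-in-a-trillion prize and values the gamble at roughly $-v$.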

    Of course, it is surely true that some of the claims people make using the word “should” can be understood as claims about the natural world. Words can, after all, be used in many different ways. But it’s the claims that can’t easily be understood in this way that non-naturalist realists such as Parfit, Enoch, and Moore have in mind. In general, I agree with the view that the key division in metaethics is between self-identified non-naturalist realists on the one hand and self-identified anti-realists and naturalist realists on the other hand, since “naturalist realists” are in fact anti-realists with regard to the distinctively normative properties of decisions that non-naturalist realists are talking about. If we rule out non-naturalist realism as a position then it seems the main remaining question is a somewhat boring one about semantics: When someone makes a statement of the form “A should do X,” are they most commonly expressing some sort of attitude (non-cognitivism), making a claim about the natural world (naturalist realism), or making a claim about some made-up property that no actions actually possess (error theory)?

    Here, for example, is how Michael Huemer (a non-naturalist realist) expresses this point in his book Ethical Intuitionism (pg. 8):

    [Non-naturalist realists] differ fundamentally from everyone else in their view of the world. [Naturalist realists], non-cognitivists, and nihilists all agree in their basic view of the world, for they have no significant disagreements about what the non-evaluative facts are, and they all agree that there are no further facts over and above those. They agree, for example, on the non-evaluative properties of the act of stealing, and they agree, contra the [non-naturalist realists], that there is no further, distinctively evaluative property of the act. Then what sort of dispute do the [three] monistic theories have? I believe that, though this is not generally recognized, their disputes with each other are merely semantic. Once the nature of the world ‘out there’ has been agreed upon, semantic disputes are all that is left.

    I think this attitude is in line with the viewpoint that Luke Muehlhauser expresses in his classic LessWrong blog post on what he calls “pluralistic moral reductionism.” PMR seems to me to be the view that: (a) non-naturalist realism is false, (b) all remaining meta-normative disputes are purely semantic, and (c) purely semantic disputes aren’t terribly substantive and often reflect a failure to accept that the same phrase can be used in different ways. If we define the view this way, then, conditional on non-naturalist realism being false, I believe that PMR is the correct view. I believe that many non-naturalist realists would agree on this point as well. ↩︎

  16. This point is made by Parfit in On What Matters. He writes: “We could not have decisive reasons to believe that there are no such normative truths, since the fact that we had these reasons would itself have to be one such truth. This point may not refute this kind of skepticism, since some skeptical arguments might succeed even if they undermined themselves. But this point shows how deep such skepticism goes, and how blank this skeptical state of mind would be” (On What Matters, Section 86). ↩︎

  17. The PhilPapers survey suggests that philosophers who favor realism outweigh philosophers who favor anti-realism by about a 2:1 ratio. ↩︎