Two Dogmas of LessWrong

LessWrongers seem largely to agree in rejecting robust moral realism and accepting physicalism about consciousness. This is a shame, I think, because both of these views are incorrect. The thing I find most frustrating about this is that they tend to be supremely confident on topics where a hefty percentage of philosophers disagree with them. ‘Don’t believe things that are widespread in your ingroup with super high confidence when a large percentage of philosophers disagree with you’ seems to be a pretty good heuristic—yet LessWrongers seem not to adhere to it, at least in evaluating these views.

I’d like to say at the outset a few things. First, this clearly doesn’t apply to all people on LessWrong. Second, I agree with LessWrongers on a huge number of things—AI risks, for example, as well as the desirability of effective altruism. I’m broadly on board with the project of being less wrong. Thus, my criticism of LW is less of the idea behind it, and more of the particular sets of beliefs that actual LessWrongers tend to have. Third, most of this will be a crosspost of things I’ve written elsewhere on my blog—I have no desire to reinvent the wheel when it comes to arguments for moral realism.

1 Moral Realism

0 An Introduction to Moral Realism

There are vast numbers of superficially clever arguments one can generate for crazy, skeptical conclusions; conclusions like that the external world doesn’t exist, that we can’t know anything, that memory isn’t reliable, and so on. These arguments, while interesting and no doubt useful if one ever comes across a real honest-to-god skeptic — a rather rare breed — don’t have much significance; skepticism exists as little more than a curiosity in the mind of the modern philosopher, something which takes real thought to refute, yet is not worth taking seriously as a set of views.

Yet there’s one form of extreme skepticism with actually existing trenchant advocates — real advocates who fill philosophy departments, rather than, like the external-world or memory skeptic, merely being hypothetical devil’s advocates in philosophy papers. This skeptic is one who doubts that there are objective moral truths — moral facts made true not by the beliefs of any person.

Moral realism is the claim that there are true moral facts — ones that are not made true by anyone’s attitudes towards them. So if you think that the sentence that will follow this one is true and would be so even if no one else thought it was, you’re a moral realist. It’s typically wrong to torture infants for fun!

Now, no doubt to the moral anti-realist, my remarks sound harsh. How dare I compare them to the person who doubts anything can be really known.

“I refute the skeptic about the external world thus — ‘here’s one hand; here’s another hand,’ now show me the moral facts.”

— A hypothetical moral anti-realist

Well, in this article, I’ll explain why moral anti-realism is so implausible. One can always accept the anti-realist conclusion — it’s always possible to bite the bullet on crazy conclusions. Yet moral anti-realism, much like anti-realism about the external world, is wildly implausible in what it says about the world.

We do not live in a bleak world, devoid of meaning and value. Our world is packed with value, positively buzzing with it, at least, if you know where to look, and don’t fall prey to crazy skepticism. Unfortunately, the flip side of that is that the world is also packed full of disvalue — horrific, agonizing, pointless, meaningless suffering, suffering that flips the otherwise positive value of the hedonic register. That suffering must be eliminated as soon as possible. It is a moral emergency every second that it goes on.

In this article, I will defend moral realism. I will defend the claim that it is, in fact, wrong to torture infants for fun — even if everyone disagreed. It’s no surprise that moral realism is accepted by a majority of philosophers, though it’s certainly far from a universal view.

1 A Point About Methodology

Seeming is believing — as I hope to argue. Or, more specifically, if X seems the case to you, in general, that gives you some reason to think X is, in fact, the case. I’ve already addressed this in a previous article, so I’ll quote that.

Absent relying on what seems to be the case after careful reflection, we could know nothing, as Huemer (2007) has argued persuasively. Several cases show that intuitions are indispensable to having any knowledge and doing any productive moral reasoning.

Any argument against intuitions is one that we’d only accept if it seems true after reflection, which once again relies on seemings. Thus, rejection of intuitions is self-defeating, because we wouldn’t accept it if its premises didn’t seem true.

Any time we consider any view which has some arguments both for and against it, we can only rely on our seemings to conclude which argument is stronger. For example, when deciding whether or not god exists, most would be willing to grant that there is some evidence on both sides. The probability that anything exists at all is higher on theism than on atheism, for example, because theism entails that something exists; meanwhile, the probability of god being hidden is higher on atheism, because the probability of god revealing himself on atheism is zero. Thus, there are arguments on both sides, so any time we evaluate whether theism is true, we must compare the strength of the evidence on both sides. This will require reliance on seemings. The same broad principle holds for any issue we evaluate, be it religious, philosophical, or political.

Consider a series of things we take to be true which we can’t verify. Examples include: that the laws of logic would hold in a parallel universe; that things can’t have a color without a shape; that the laws of physics could have been different; that implicit in any moral claim about x being bad is a counterfactual claim that, had x not occurred, things would be better; and that, assuming space is not curved, the shortest distance between any two points is a straight line. We can’t verify those claims directly, but we’re justified in believing them because they seem true—we can intuitively grasp that they are justified.

The basic axioms of reasoning also offer an illustrative example. We are justified in accepting induction, the reliability of the external world, the universality of the laws of logic, the axioms of mathematics, and the basic reliability of our memory, even if we haven’t worked out rigorous philosophical justifications for those things. This is because they seem true.

Our starting intuitions are not always perfect, and they can be overcome by other things that seem true.

Maybe you’re not a phenomenal conservative. Perhaps you think that in some cases, intuitions don’t serve as justification. However, we should all accept the following more modest principle.

Wise Phenomenal Conservatism: If P seems true upon careful reflection from competent observers, that gives us some prima facie reason to believe P.

This allows us to sidestep the main objections to phenomenal conservatism discussed below.

Responding to the crazy appearances objection

Some critics have worried that phenomenal conservatism commits us to saying that all sorts of crazy propositions could be non-inferentially justified. Suppose that when I see a certain walnut tree, it just seems to me that the tree was planted on April 24, 1914 (this example is from Markie 2005, p. 357). This seeming comes completely out of the blue, unrelated to anything else about my experience – there is no date-of-planting sign on the tree, for example; I am just suffering from a brain malfunction. If PC is true, then as long as I have no reason to doubt my experience, I have some justification for believing that the tree was planted on that date.

More ominously, suppose that it just seems to me that a certain religion is true, and that I should kill anyone who does not subscribe to the one true religion. I have no evidence either for or against these propositions other than that they just seem true to me (this example is from Tooley 2013, section 5.1.2). If PC is true, then I would be justified (to some degree) in thinking that I should kill everyone who fails to subscribe to the “true” religion. And perhaps I would then be morally justified in actually trying to kill these “infidels” (as Littlejohn [2011] worries).

But in the case of a person to whom a certain religion seems true, this is no doubt not after careful, prolonged rational reflection in which they consider all of the facts. If a very rational person considered all the facts and the religion still seemed true to them, it seems they would have prima facie justification for thinking the religion is true. This objection is also defused by Huemer’s responses to it.

Phenomenal conservatives are likely to bravely embrace the possibility of justified beliefs in “crazy” (to us) propositions, while adding a few comments to reduce the shock of doing so. To begin with, any actual person with anything like normal background knowledge and experience would in fact have defeaters for the beliefs mentioned in these examples (people can’t normally tell when a tree was planted by looking at it; there are many conflicting religions; religious beliefs tend to be determined by one’s upbringing; and so on).

We could try to imagine cases in which the subjects had no such background information. This, however, would render the scenarios even more strange than they already are. And this is a problem for two reasons. First, it is very difficult to vividly imagine these scenarios. Markie’s walnut tree scenario is particularly hard to imagine – what is it like to have an experience of a tree’s seeming to have been planted on April 24, 1914? Is it even possible for a human being to have such an experience? The difficulty of vividly imagining a scenario should undermine our confidence in any reported intuitions about that scenario.

The second problem is that our intuitions about strange scenarios may be influenced by what we reasonably believe about superficially similar but more realistic scenarios. We are particularly unlikely to have reliable intuitions about a scenario S when (i) we never encounter or think about S in normal life, (ii) S is superficially similar to another scenario, S’, which we encounter or think about quite a bit, and (iii) the correct judgment about S’ is different from the correct judgment about S. For instance, in the actual world, people who think they should kill infidels are highly irrational in general and extremely unjustified in that belief in particular. It is not hard to see how this would incline us to say that the characters in Tooley’s and Littlejohn’s examples are also irrational. That is, even if PC were true, it seems likely that a fair number of people would report the intuition that the hypothetical religious fanatics are unjustified.

A further observation relevant to the religious example is that the practical consequences of a belief may impact the degree of epistemic justification that one needs in order to be justified in acting on the belief, such that a belief with extremely serious practical consequences may call for a higher degree of justification and a stronger effort at investigation than would be the case for a belief with less serious consequences. PC only speaks of one’s having some justification for believing P; it does not entail that this is a sufficient degree of justification for taking action based on P.

There’s certainly much more to be said on this topic, only a minuscule portion of which I can discuss in this article. However, in philosophy, it’s pretty widely accepted that what seems to be the case probably is the case, all else equal, in at least most cases. One can accept epistemic particularism, for example, and still accept this modest requirement.

Responding to the alleged defeaters in the moral domain

Walter Sinnott-Armstrong argues that we need extra justification in some sorts of cases. If a person believed a proposition purely as a result of self-interested motivated reasoning, their seeming wouldn’t be justified. Thus, he argues, a constraint on a belief’s garnering prima facie justification is the following:

Principle 1: confirmation is needed for a believer to be justified when the believer is partial.

However, as Ballantyne and Thurow note, this isn’t a blanket defeater for our moral beliefs; rather, it is only a defeater for the subset of our moral beliefs that are likely to be caused in some way by partial considerations.

So now the question is whether the specific thought experiments I’ll appeal to in defending moral realism are plausibly caused by partiality. We’ll investigate this in regard to each of them in turn.

However, one thing is worth noting. Utilitarianism seems to have a plausible route to avoiding these objections. Utilitarianism is frequently chided for being too demanding and too impartial, so its verdicts are unlikely to be products of partiality. Thus, partiality considerations give us a good reason to revise the intuitions of utilitarianism’s rivals, though not those of utilitarianism itself.

This principle is also too broad. Let’s imagine that all people had self-interested reasons to believe in core logical or mathematical facts. This wouldn’t mean we should reject the truth of modus ponens or the core mathematical axioms. Perhaps it would undercut the intuition somewhat, but it wouldn’t be enough to eliminate it entirely.

This is one worry I have with Sinnott-Armstrong’s approach. He seems much too willing to divide intuitions into two distinct classes: justified and unjustified. However, justification comes in degrees. Declaring an intuition flat-out justified or flat-out unjustified seems to be a mistake — just as declaring a food hot or cold would be unwise, if one were attempting to make precise judgments about the average temperature of a room.

Sinnott-Armstrong’s next constraint is the following.

Principle 2: confirmation is needed for a believer to be justified when people disagree with no independent reason to prefer one belief or believer to the other.

Several points are worth making. First, the intuitions I’m appealing to are very widespread — not many people lack the intuitions to which I’ll appeal. Perhaps some people end up reflectively rejecting those intuitions, but people tend to have the intuitions. Thus, we need not revise these intuitions in light of those who disagree. I’ll defend this more later.

Second, given that most philosophers are moral realists, it seems that most relevant domain experts find the intuitions appealing. If they didn’t, they almost surely wouldn’t be moral realists.

Principle 3: confirmation is needed for a believer to be justified when the believer is emotional in a way that clouds judgment.

All of our decisions are clouded by emotion to some degree. That does not mean that we should abandon all of our judgments. Again, rather than seeing things as a yes/no question of whether or not our intuitions are justified, it makes far more sense to see justification as coming in degrees. The more emotional we are, the less we should trust our intuitions. However, we shouldn’t throw out all of our intuitions based merely on our omnipresent emotions.

Principle 4: confirmation is needed for a believer to be justified when the circumstances are conducive to illusion.

With my traditional caveat that justification comes in degrees, this seems mostly correct.

Principle 5: confirmation is needed for a believer to be justified when the belief arises from an unreliable or disreputable source.


2 Some Intuitions That Support Moral Realism

The most commonly cited objection to moral anti-realism in the literature is that it’s unintuitive. There is a vast wealth of scenarios in which anti-realism ends up being very counterintuitive. We’ll divide things up more specifically; each particular version of anti-realism has special cases in which it delivers exceptionally unintuitive results. Here are two cases.

This first case is the thing that convinced me of moral realism originally. Consider the world as it was at the time of the dinosaurs before anyone had any moral beliefs. Think about scenarios in which dinosaurs experienced immense agony, having their throats ripped out by other dinosaurs. It seems really, really obvious that that was bad.

The thing that’s bad about having one’s throat ripped out has nothing to do with the opinions of moral observers. Rather, it has to do with the actual badness of having one’s throat ripped out by a T-Rex. When we think about what’s bad about pain, anti-realists get the order of explanation wrong. We think that pain is bad because it is — it’s not bad merely because we think it is.

The second broad, general case is of the following variety. Take any action — torturing infants for fun is a good example because pretty much everyone agrees that it’s the type of thing you generally shouldn’t do. It really seems like the following sentence is true:

“It’s wrong to torture infants for fun, and it would be wrong to do so even if everyone thought it wasn’t wrong.”

Similarly, if there were a society that thought that they were religiously commanded to peck out the eyes of infants, they would be doing something really wrong. This would be so even if every single person in that society thought it wasn’t wrong.

Everyone could think it’s okay to torture animals in factory farms, and it would still be horrifically immoral.

This becomes especially clear when we consider moral questions that we’re not sure about. When we try to make a decision about whether abortion is wrong, or eating meat, we’re trying to discover, not invent, the answer. If the answer were just whatever we or someone else said it was — or if there were no answer — then it would make no sense to deliberate about whether or not it was wrong.

Whenever you argue about morality, it seems you are assuming that there is some right answer — and that answer isn’t made true by anyone’s attitude towards it.

Let’s see whether these results can be debunked as a result of biasing factors.

Principle 1: confirmation is needed for a believer to be justified when the believer is partial.

I’m not particularly partial about whether the dinosaur’s suffering was bad. It has little emotional impact on me and I am not a dinosaur. Additionally, I’m not very partial on the question of whether torturing infants would be wrong even if everyone thought it wasn’t wrong — this will never affect me, and the moral facts themselves are causally inert. Thus, this judgment can’t be debunked by partiality considerations.

Principle 2: confirmation is needed for a believer to be justified when people disagree with no independent reason to prefer one belief or believer to the other.

Very few people disagree, at least based on initial intuitions, with the judgments I’ve laid out. I did a small poll on Twitter, asking whether it would be wrong to torture infants for fun even if no one thought it was. So far, 82.6% of people have been in agreement.

There are some people who disagree. However, some disagreement is almost inevitable. If disagreement made us abandon our beliefs, we’d have to abandon our political beliefs, because there’s way more disagreement about political claims than there is about the claim that it’s typically wrong to torture infants for fun.

Also, those who disagree tend to have views that I think are factually mistaken on independent grounds. Anti-realists seem more likely to adopt other claims that I find implausible. Additionally, they tend to make the error of not placing significant weight on moral intuitions. Thus, I think we have independent reasons to prefer the belief in realism.

It also seems like a lot of the anti-realists who don’t find the sentence “it’s typically wrong to torture infants for fun and would be so even if everyone disagreed” intuitive tend to be confused about what moral statements mean — about what it means to say that things are wrong. I, on the other hand, like most moral realists, and indeed many anti-realists, understand what the sentence means. Thus, I have direct acquaintance with the coherence of moral sentences — I directly understand what it means to say that things are bad or wrong.

If it turned out that a lot of the skeptics of quantum mechanics simply didn’t understand the theory, that would give us good reason to discount their views. This seems to be pretty much the situation in the moral domain.

Additionally, given that most philosophers are moral realists, we have good reason to find it the more intuitively plausible view. If the consensus of people who have carefully studied an issue tends to support moral realism, this gives us good reason to think that moral realism is true. The wisdom of the crowds tends to be greater than that of any individual.

Principle 3: confirmation is needed for a believer to be justified when the believer is emotional in a way that clouds judgment.

I’m really not particularly emotional about the notion that dinosaur suffering was bad. Nor do I have a particularly strong emotional reaction to some types of wrong actions, say, tax fraud. If there were a type of tax fraud that decreased aggregate utility, I’d think it was wrong, even if everyone thought it wasn’t. I have no emotional attachment to that belief.

Additionally, we have good evidence from the dual process literature that careful, prolonged reflection tends to be what causes utilitarian beliefs — it’s the unreliable emotional reactions that cause our non-utilitarian beliefs. Thus, at best, this would give a reason to revise our non-utilitarian beliefs. I’ll quote an article I wrote on the subject.

One 2012 study finds that asking people to think more makes them more utilitarian. Conversely, when they have less time to think, they become less utilitarian. If reasoning led to utilitarianism, this is what we’d expect: more time to reason would make people proportionately more utilitarian.

A 2021 study, compiling the largest available dataset, concluded across 8 different studies that greater reasoning ability is correlated with being more utilitarian. The dorsolateral prefrontal cortex’s length correlates with general reasoning ability. Its length also correlates with being more utilitarian. Coincidence? I think not.

Yet another study finds that being under greater cognitive pressure makes people less utilitarian. This is exactly what we’d predict. Much like being under cognitive strain makes people less likely to solve math problems correctly, it also makes them less likely to solve moral questions correctly. By “correctly,” I mean in the utilitarian way.

Yet the data doesn’t stop there. A 2014 study found a few interesting things. It looked at patients with damaged VMPCs—a brain region responsible for lots of emotional judgments. It concluded that they were far more utilitarian than the general population. This is exactly what we’d predict if utilitarianism were caused by good reasoning and careful reflection, and alternative theories were caused by emotions. Inducing positive emotions in people also makes them more utilitarian—which is what we’d expect if negative emotions were driving people not to accept utilitarian results.

Additionally, there are lots of moral judgments that seem to be backed by no emotional reaction. For example, I accept the repugnant conclusion, though I have no emotional attachment to doing so.

Principle 4: confirmation is needed for a believer to be justified when the circumstances are conducive to illusion.

We have no reason to think that beliefs in the moral domain — particularly ones that reach reflective equilibrium — are particularly susceptible to illusion. This is especially true of the consequentialist ones.

Principle 5: confirmation is needed for a believer to be justified when the belief arises from an unreliable or disreputable source.

This isn’t true of moral belief. The belief that dinosaur suffering was bad, even before any person had ever formed that thought, is formed through careful reflection on the nature of that suffering — it isn’t based on anything else.

What if the folk think differently?

I’m supremely confident that if you asked the folk whether it would be typically wrong to torture infants for fun, even if no one thought it was, they’d tend to say yes. Additionally, it turns out that The Folk Probably do Think What you Think They Think.

Also, I trust the reflective judgment of myself and qualified philosophers significantly more than I trust the folk. Sorry folk!

Classifying anti-realists

Given that, as previously discussed, moral realism is the view that there are true moral statements that are true independently of people’s beliefs about them, there are three ways to deny it.

Non-cognitivism — this says that moral statements are neither true nor false; they’re not in the business of being true or false. On this view, moral statements are not truth-apt. There are lots of sentences that are not truth-apt — “shut the door,” for example. “Shut the door” isn’t true or false.

Error theory — this says that moral statements, much like statements about witches, try to state facts, but they are systematically false. For example, if a person says ‘witches can fly and cast spells’ they think they’re saying something true, but they falsely believe in a vast category of things that aren’t real, namely, witches. Thus, all positive statements about morality, much like all positive statements about witches according to error theory, turn out to be false.

Subjectivism — this says that the truth of moral statements hinges on people’s attitudes towards them. There are different versions of subjectivism — they’re all implausible.

It turns out that each of these views has especially implausible results, ones not shared by the other two.


Non-Cognitivism

Non-cognitivists think that moral statements are not truth-apt. A non-cognitivist might think that saying “murder is wrong” really means “boo, murder!” or “don’t murder!” I’ve already explained why I think non-cognitivism is super implausible, which I’ll quote here:

On non-cognitivism, the statement

“It’s wrong to torture infants for fun, most of the time,”

is neither true nor false.

And the argument

“If it’s wrong to torture infants, then I shouldn’t torture infants

It’s wrong to torture infants

Therefore, I shouldn’t torture infants”

is incoherent. It’s like saying “if shut the door, then open the window; shut the door; therefore, open the window.”

Additionally, as Huemer says on pages 20-21, describing the reasons to think moral statements are propositional:

(a) Evaluative statements take the form of declarative sentences, rather than, say, imperatives, questions, or interjections. ‘Pleasure is good’ has the same grammatical form as ‘Weasels are mammals’. Sentences of this form are normally used to make factual assertions. In contrast, the paradigms of non-cognitive utterances, such as ‘Hurray for x’ and ‘Pursue x’, are not declarative sentences.

(b) Moral predicates can be transformed into abstract nouns, suggesting that they are intended to refer to properties; we talk about ‘goodness’, ‘rightness’, and so on, as in ‘I am not questioning the act’s prudence, but its rightness’.

(c) We ascribe to evaluations the same sort of properties as other propositions. You can say, ‘It is true that I have done some wrong things in the past’, ‘It is false that contraception is murder’, and ‘It is possible that abortion is wrong’. ‘True’, ‘false’, and ‘possible’ are predicates that we apply only to propositions. No one would say, ‘It is true that ouch’, ‘It is false that shut the door’, or ‘It is possible that hurray’.

(d) All the propositional attitude verbs can be prefixed to evaluative statements. We can say, ‘Jon believes that the war was just’, ‘I hope I did the right thing’, ‘I wish we had a better President’, and ‘I wonder whether I did the right thing’. In contrast, no one would say, ‘Jon believes that ouch’, ‘I hope that hurray for the Broncos’, ‘I wish that shut the door’, or ‘I wonder whether please pass the salt’. The obvious explanation is that such mental states as believing, hoping, wishing, and wondering are by their nature propositional: To hope is to hope that something is the case, to wonder is to wonder whether something is the case, and so on. That is why one cannot hope that one did the right thing unless there is a proposition — something that might be the case — corresponding to the expression ‘one did the right thing’.

(e) Evaluative statements can be transformed into yes/no questions: One can assert ‘Cinnamon ice cream is good’, but one can also ask, ‘Is cinnamon ice cream good?’ No analogous questions can be formed from imperatives or emotional expressions: ‘Shut the door?’ and ‘Hurray for the Broncos?’ lack clear meaning. The obvious explanation is that a yes/no question requires a proposition; it asks whether something is the case. A prescriptivist non-cognitivist might interpret some evaluative yes/no questions as requests for instruction, as in ‘Should I shut off the oven now?’ But other questions would defy interpretation along these lines, including evaluative questions about other people’s behavior or about the past — ‘Was it wrong for Emperor Nero to kill Agrippina?’ is not a request for instruction.

(f) One can issue imperatives and emotional expressions directed at things that are characterized morally. If non-cognitivism is true, what do these mean: ‘Do the right thing.’ ‘Hurray for virtue!’ Even more puzzlingly for the non-cognitivist, you can imagine appropriate contexts for such remarks as, ‘We shouldn’t be doing this, but I don’t care; let’s do it anyway’. This is perfectly intelligible, but it would be unintelligible if ‘We shouldn’t be doing this’ either expressed an aversive emotion towards the proposed action or issued an imperative not to do it.

(g) In some sentences, evaluative terms appear without the speaker’s either endorsing or impugning anything, yet the terms are used in their normal senses. This is known as the Frege-Geach problem and forms the basis for perhaps the best-known objection to noncognitivism.

Error Theory

Error theory says that all positive moral statements are false. Error theory is itself best described as in error, because of how sharply it diverges from the truth. It runs into a problem — there are obviously some true moral statements. Consider the following six examples.

What the icebox killers did was wrong.

The Holocaust was immoral.

Torturing infants for fun is typically wrong.

Burning people at the stake is wrong.

It is immoral to cause innocent people to experience infinite torture.

Pleasure is better than pain.

The error theorist has to say that the meaning of those terms is exactly what the realist thinks it is. The error theorist thus has to think that when people say the Holocaust was bad, they’re actually making a mistake. However, this is terribly implausible. It really, really doesn’t seem like the claim ‘the Holocaust was bad’ is mistaken.

Any argument for error theory will be way less intuitive than the notion that the Holocaust was, in fact, bad.

Let’s test these intuitions.

Principle 1: confirmation is needed for a believer to be justified when the believer is partial.

I’m not really that partial about many things I take to be bad. I think malaria is bad, despite not being personally affected by malaria. Similarly, I am in no way harmed by most of history’s evils — including hypothetical evils that have never been experienced, but that I recognize would be bad if experienced.

On top of this, partiality may be a reason to rethink an intuition somewhat, but it’s certainly not a reason to just throw out any intuition stemming from a partial source.

Principle 2: confirmation is needed for a believer to be justified when people disagree with no independent reason to prefer one belief or believer to the other.

Very few people deny that it’s intuitive that causing infinite torture is wrong.

Principle 3: confirmation is needed for a believer to be justified when the believer is emotional in a way that clouds judgment.

Being emotional does reduce the probative force of intuitions. However, it does not suffice to debunk an intuition — we cannot merely disregard intuitions because there’s some emotional impact. But also, I’m not particularly emotional when I consider suffering in the abstract. It still seems clearly bad.

The responses to four and five from above still apply.


Subjectivism

Subjectivism holds that moral facts depend on some people’s beliefs or desires. This could be the desires of a culture — if so, it’s called cultural relativism.

Cultural Relativism: Crazy, Illogical, and Accepted by no One Except Philosophically Illiterate Gender Studies Majors

Cultural relativism is — as the sub-header suggested — something that I find rather implausible. There are no serious philosophers that I know of who defend it. One is a cultural relativist if one thinks that something is right whenever one’s society holds that it is right.

Problem: it’s obviously false. Consider a few examples.

Imagine the Nazis convinced everyone that the Holocaust was good. That would clearly not make it good.

Imagine there was a society in universal agreement that all babies should be tortured to death in a maximally horrible and brutal way. That agreement clearly wouldn’t make the practice good.

People often accept cultural relativism because they’re vaguely confused and want to be tolerant. But if cultural relativism is true, then tolerance is only good if supported by the broader culture. On cultural relativism, disagreeing with the norms of one’s broader culture is incoherent. Saying ‘my culture is acting wrongly’ is just a contradiction in terms. Yet that’s clearly absurd.

This also means that if two different cultures argue about which norm is correct, they’re arguing about nothing. If norms are relative to a culture then there’s no fact of the matter about which culture is correct. But that’s absurd; the Nazis were worse than non-Nazis.

To quote my previous article on the subject:

If morality is determined by society, the following statements are false:

“My society is immoral when it tortures infants for fun.”

“Nazi Germany acted immorally.”

“Some societal practices are immoral.”

“When society chops off the fingers and toes of small children based on their skin color, that’s immoral.”

“It’s immoral for society to boil children in pots.”

Individual Subjectivism

Individual subjectivism says that morality is determined by the attitudes of the speaker. The statement ‘murder is wrong’ means “I disapprove of murder.” There are, of course, more subtle versions, but this is the core idea.

I’ve already given objections in my previous article on the subject.

If morality is determined by the moral system of the speaker, the following claims are true:

“When the Nazi whose ethical system held that the primary ethical obligation was killing Jews said ‘It is moral to kill Jews,’ they were right.”

“When slave owners said ‘the interests of slaves don’t matter,’ they were right.”

“When Caligula says ‘It is good to torture people,’ and does so, he’s right.”

“The person who thinks that it’s good to maximize suffering is right when he says ‘it’s moral to set little kids on fire.’”

Additionally, when I say “we should be utilitarians,” and Kant says “we shouldn’t be utilitarians,” we’re not actually disagreeing.

Conclusion of this section

So, I think the conclusions of moral anti-realism are absurd. Anti-realism holds that wrongness either isn’t real or depends on our desires in some way. But that’s just wrong! It is well and truly wrong to torture infants to death, and it would be so even if no one agreed.

3 Irrational Desires

The fool says in his heart ‘I have future Tuesday indifference.’

The argument I intend to lay out is relatively simple in its essence, relatively drab, and yet quite forceful.

1 If moral realism is not true, then we don’t have irrational desires

2 We do have irrational desires

Therefore, moral realism is true

Defending Premise 1

Premise 1 seems the most controversial to laypersons, but it is premise 2 that philosophical anti-realists dispute. Morality is about what we have reason to do — impartial reason, to be specific. These reasons are not dependent on our desires.

Morality thus describes what reasons we have to do things, unmoored from our desires. When one claims it’s wrong to murder, one means that, even if one desired to murder another, one shouldn’t do it — one has a reason not to, independent of one’s desires.

Thus, the argument for premise one is as follows.

1 If there are desire independent reasons, there are impartial desire independent reasons

2 If there are impartial desire independent reasons, morality is objective

Therefore, if there are desire independent reasons, morality is objective.

Premise 2 is true by definition. Premise 1 is trivial — impartial desire independent reasons are just desire independent reasons with a requirement of impartiality added. Impartiality can be achieved by, for example, making decisions from behind the veil of ignorance — or some similar device.

Thus, if you actually have reasons to have particular desires — to aim for particular things — then morality is objective. Let’s now investigate that assumption.

Defending Premise 2

Premise 2 states that there are, in fact, irrational desires. This premise is obvious enough.

Note here that I use desire in a broad sense. By desire I do not mean what one merely enjoys; that obviously can’t be irrational. My preference for chocolate ice cream over vanilla ice cream clearly cannot be in error. Rather, I use desire in a broad sense to indicate one’s ultimate aims, in light of the things that one enjoys. I’ll use desire, broad aims, goals, and ultimate goals interchangeably.

Thus, the question is not whether one who prefers chocolate to vanilla is a fool. Instead, it’s whether someone who prefers chocolate to vanilla, yet chooses vanilla for no reason, is acting foolishly.

The anti-realist is in the difficult position of denying one of the most evident facts of the human condition — that we can be fools not merely in how we get what we want but in what we want in the first place.

Consider the following cases.

1 Future Tuesday Indifference2: A person doesn’t care what happens to them on a future Tuesday. When Tuesday rolls around, they care a great deal about what happens to them; they’re just indifferent to happenings on a future Tuesday. This person is given the following gamble — they can either get a pinprick on Monday or endure the fires of hell on Tuesday. If they endure the fires of hell on Tuesday, this will not merely affect what happens this Tuesday — every Tuesday until the sun burns out shall be accompanied by unfathomable misery — the likes of which can’t be imagined, next to which the collective misery of history’s worst atrocities is but a paltry, vanishing scintilla.

They know that when Tuesday rolls around, they will shriek till their vocal cords are destroyed, for the agony is unendurable (their vocal cords will be healed before Wednesday, so they shall only suffer on Tuesday). They shall cry out for death, yet none shall be afforded to them.

Yet they already know this. However, they simply do not care what happens to them on Tuesday. They do not dissociate from their Tuesday self — they think they’re the same person as their Tuesday self. However, they just don’t care what happens to themself on Tuesday.

Now you might be tempted to imagine that they don’t actually mind what happens on Tuesday — after all, they’re indifferent to what happens on Tuesday. But this misses the point of the case; they are only indifferent to what happens on future Tuesdays. When Tuesday rolls around, they will fiercely regret their decision. Yet after Tuesday is done, they will be glad that they made the decision — after all, they don’t care what happens on a future Tuesday. We can even stipulate that when it’s Tuesday, they’re hypnotized to believe it’s a Monday, so their suffering feels from the inside precisely as it would were it experienced on Monday.

This person with indifference to future Tuesdays is clearly making an error. This is not a minor, menial error. In fact, this is certainly the gravest error in human history — one which inflicts more misery than any other. However, the anti-realist must insist that, not only is it not the greatest error in human history, it isn’t an error at all.

After all, the person is making no factual error — they are perfectly aware that they will suffer on a future Tuesday. On the anti-realist account, where lies their error? They know they will suffer, yet they do not care — the suffering will be on a Tuesday.

Only the moral realist can account for their error — for their irrationality and great foolishness in aiming at unfathomable misery on Tuesday, rather than a pinprick on Monday. On the realist account — or at least the sensible realist account, no doubt some crazy natural law theorists would deny this — we all have reason to avoid future agony. This explains why it would be an error to subject oneself to infinite torture on a Tuesday. The fact that it’s a Tuesday gives one no reason to discount their suffering.

Now the anti-realist could try to avoid this by claiming that a decision is irrational if one will regret it. However, this runs into three problems.

First, if anti-realism is true then we have no desire independent reason to do things. It doesn’t matter if we’ll regret them. Thus, regrettably, this criterion fails. Second, by this standard both getting the pinprick on a single Monday and enduring the hellish torture on Tuesday would be irrational, because the person who experiences them will regret each of them at various points. After all, on all days of the week except Tuesday, they’d regret the decision to endure a Monday pinprick. Third, even if by sheer stubbornness they never wavered in their verdict, that would in no way change whether they chose rightly.

2 Picking Grass: Suppose a person hates picking grass — they derive no enjoyment from it and it causes them a good deal of suffering. There is no upside to picking grass; they don’t find it meaningful or conducive to virtue. This person simply has a desire to pick grass. Suppose, on top of this, that they are terribly allergic to grass — picking it causes them to develop painful ulcers that itch and hurt. Despite all this, and despite never enjoying it, they spend hours a day picking grass.

Is the miserable grass picker really making no error? Could there be a conclusion more obvious than that the person who picks grass all day is acting the fool — that their life is really worse than one whose life is brimming with meaning, happiness, and love?

3 Left Side Indifference: A person is indifferent to suffering on the left side of their body. They still feel left-side suffering just as vividly and intensely as they would feel it on the right side. Indeed, we can even imagine that they feel it a hundred times more vividly and intensely — it wouldn’t matter. However, they do not care about the left-side suffering.

It induces them to cry out in pain; it is agony, after all. But much like agony that one endures for a greater purpose, say the agony one endures on a run, they do not think it is actually bad. Thus, this person has a blazing iron burn the left side of their body from head to toe, inflicting profound agony. They cry out in pain as it happens. On the anti-realist account, they’re acting totally rationally. Yet that’s clearly crazy!

4 Four-Year-Old Children: Suppose that — and this is not an implausible assumption — there’s a four-year-old child who doesn’t want to go into a doctor’s office. After all, they really don’t like shots. This child is informed of the relevant facts — if they don’t go into the doctor’s office, they will die a horribly painful death of cancer. You clearly explain this to them so that they’re aware of all the relevant facts. However, the four-year-old still digs in their heels (I hear they tend to do that) and refuses categorically to go into the doctor’s office.

It’s incredibly obvious that the four-year-old is being irrational. Yet they’ve been informed of the relevant facts and are acting in accordance with their desires. So on anti-realism, they’re being totally rational.

5 Cutting: Consider a person who is depressed and cuts themself. When they do it, they desire to cut themself. It’s not implausible that being informed of all the relevant facts wouldn’t make that desire go away. In this case, it still seems they’re being irrational.

6 Consistent Anorexia: A person desires to be thin even if it brings about their starvation. This brings them no joy. They starve themself to death. It really seems that they’re being irrational.

7 A person has consensual homosexual sex. They then join a religious cult. This cult makes no factual mistakes — its members don’t even believe in God. However, they hold that homosexual sex is horrifically immoral and that those who have it deserve to suffer, just as a basic moral principle. On the anti-realist account, not only is this person not mistaken, they would be fully rational to endure infinite suffering because they think they deserve it.

8 A person wants to commit suicide and knows all the relevant facts. Their future would be very positive in terms of expected well-being. On anti-realism, it would be rational for them to commit suicide.

9 A person is currently enduring more suffering than anyone ever has in all of human history. While this person doesn’t enjoy suffering — they experience it the same way the rest of us do — they have a higher-order indifference to it. While they hate their experience and cry out in agony, they don’t actually want their agony to end; they don’t care on a higher level. On the anti-realist account, they have no reason to end their agony. But that’s clearly implausible.

10 A person doesn’t care about suffering if it comes from their pancreas. Thus, they’re in horrific misery, but since it comes from their pancreas they do nothing to prevent it, instead preventing a minuscule amount of non-pancreatic agony. On anti-realism, they’ve made no error. But that’s crazy!

4 The Discovery Argument

One of the arguments made for mathematical platonism is the argument from mathematical discovery. The basic claim is as follows: we cannot make discoveries in purely fictional domains. If mathematics were invented rather than discovered, how in the world would we make mathematical discoveries? How would we learn new things about mathematics — things that we didn’t already know?

Well, when it comes to normative ethics, the same broad principle is true. If morality really were something that we made up rather than discovered, then it would be very unlikely that we’d be able to reach reflective equilibrium with our beliefs — wrap them up into some neat little web.

But as I’ve argued at great length, we can reach reflective equilibrium with our moral beliefs — they do converge. We can make significant moral discoveries. The repugnant conclusion is a prime example of a significant moral discovery that we have made.

Thus, there are two facts about moral discovery that favor moral realism.

First, the fact that we can make significant numbers of non-trivial moral discoveries in the first place favors it — for it’s much more strongly predicted on the realist hypothesis than the anti-realist hypothesis.

Second, there’s a clear pattern to the moral convergence. Again, this is a hugely controversial thesis — and if you don’t think the arguments I’ve made in my 36-part series are at least mostly right, you won’t find this persuasive. However, if it turns out that every time we carefully reflect on a case it ends up being consistent with some simple pattern of decision-making, that really favors moral realism.

Consider every other domain in which the following features are true.

1 There is divergence prior to careful reflection.

2 There are persuasive arguments that would lead to convergence after adequate ideal reflection.

3 Many people think it’s a realist domain

All other domains with those features end up being realist. This provides a potent inductive case that the moral domain is realist too.

5 The Argument from Phenomenal Introspection

Credit to Neil Sinhababu for this argument.

If we have an accurate way of gaining knowledge and this method informs us of moral realism, then this gives us good reason to be moral realists — in much the same way that, if a magic 8-ball were always right, and it informed us of some fact, that would give us good reason to believe the fact.

Neil Sinhababu argues that we have a reliable way to gain access to a moral truth — phenomenal introspection. Phenomenal introspection involves reflecting on a mental state and forming beliefs about what it’s like. Here are several examples of beliefs formed through phenomenal introspection.

My experience of the lemon is brighter than my experience of the endless void that I saw recently.

My experience of the car is louder than my experience of the crickets.

My experience of having my hand set on fire was painful.

We have solid evolutionary reason to expect phenomenal introspection to be reliable — after all, beings who are able to form reliable beliefs about their mental states are much more likely to survive and reproduce than ones that are not. We generally trust phenomenal introspection and have significant evidence for its reliability.

Thus, if we arrive at a belief through phenomenal introspection, we should trust it. Well, it turns out that through phenomenal introspection, we arrive at the belief that pleasure is good. When we reflect on what it’s like to, for example, eat tasty food, we conclude that it’s good. Thus, we are reliably informed of a moral fact.

Lance Bush has written a response to an article I wrote about this argument; I’ll address his response here.

I summarize Sinhababu’s argument as follows.

Premise 1: Phenomenal introspection is the only reliable way of forming moral beliefs.

Premise 2: Phenomenal introspection informs us of only hedonism

Conclusion: Hedonism is true…and pleasure is the only good.

However, we can set aside premise one, because it serves as a reason that other methods are unreliable — not as a reason that phenomenal introspection is reliable. Lance says:

I have a lot of concerns with (1), given that I don’t know what is meant by a “moral belief”

I take a moral belief to be a belief about what is right and wrong, or what one should or shouldn’t do, or about what is good and bad. Morality is fundamentally about what we have impartial reason to do, independent of our desires. For more on this definition, I’d recommend reading Parfit’s On What Matters.

I’d also note that it’s strange to frame P1 as a claim about a reliable way to form moral beliefs, since “reliable” doesn’t seem connected to whether the beliefs in question are true or not. After all, one can have a system that “reliably” (in some sense) produces false beliefs. This premise might be rephrased as something like “Phenomenal introspection is the only way to reliably form true moral beliefs” or something like that. I’m not sure; perhaps Bentham’s bulldog could update or refine the premises in a future post or in a response to this post.

By reliable, I meant reliably true.

However, my initial reaction is to reject (2) because it seems like Sinhababu overestimates what kinds of information is available via introspection on one’s phenomenology, at least not without bringing in substantial background assumptions that aren’t themselves part of the experience or that might have a causal influence on the nature of the experience. It’s possible, for instance, that a commitment to or sympathy towards moral realism can influence one’s experiences in such a way that those experiences seem to confirm or support one’s realist views, when in fact it’s one’s realist views causing the experience. Since people lack adequate introspective access to their unconscious psychological processes, introspection may be an extraordinarily unreliable tool for doing philosophy.

Lance here criticizes some types of introspection — however, none of this is phenomenal introspection. People are good at forming reliable beliefs about their experiences, less good at forming reliable beliefs about, for example, their emotions. Not all introspection is alike.

Philosophers may think that they can appeal to theoretically neutral “seemings” to build philosophical theories, but not appreciate that the causal linkages cut both ways, and that their philosophical inclinations, built up over years of studying academic philosophy, can influence how they interpret their experiences, and do so in a way that isn’t introspectively accessible. If this does occur (and I suspect it not only does, but is ubiquitous), philosophers who appeal to how things seem to support their philosophical views are, effectively, appealing to their commitment to their philosophical positions as evidence in support of their commitment to their philosophical positions. Without a better understanding of the psychological processes at play in philosophical account-building, philosophers strike me as being in an epistemically questionable situation when they so confidently appeal to their philosophical intuitions and seemings.

I think this objection to phenomenal conservatism is wrong. One can reject a seeming. For example, the conclusion I describe here seems wrong to me; however, I end up accepting it upon reflection, because the balance of seemings supports it.

But we can table this discussion because Sinhababu doesn’t rely on seemings — he relies on phenomenal introspection.

Phenomenology involves access to what your experiences are like, but it is not constituted by any substantive philosophical inferences about those experiences. That is, if I have, say, an experience of something seeming red, it isn’t (and I think it couldn’t) be a feature of that experience that the redness of the red is, e.g., of such a kind so as to be directly (perhaps “non-inferentially”) inconsistent with a particular model of perception or consciousness. For instance, I don’t think substance dualism could be something one has phenomenal access to, but rather it would be an inference, or position one takes, that explains one’s experiences or may be inferred from one’s experiences.

No disagreement so far.

When I have good or enjoyable experiences, my phenomenology involves what I’d call positive affective states. I don’t think anything about these states includes, as a feature of the experience itself, that the experience itself involves stance-independence or stance-independence about the goodness of the experience. That doesn’t seem like the sort of thing that could be a feature of one’s phenomenology. The notion that phenomenal introspection informs us of hedonism thus strikes me almost as a kind of category error. Substantive metaphysical theses don’t seem like the sorts of things one can experience. And thus the notion that hedonism is true in a stance-independent way just isn’t the kind of thing that I think one could experience, since it’s a metaphysical thesis, not e.g., a phenomenal property (though as an aside I don’t even think there are phenomenal properties, but that’s a separate issue).

I agree that introspecting on experiences generally doesn’t inform us of their mind-independent goodness. But if we introspect on experiences that are pleasurable yet unwanted, they still feel good, which shows that their goodness doesn’t depend on our desires.

Second, nothing about the phenomenology of my positive affective states is distinctively moral. If I eat my favorite food or listen to music I like, I enjoy these experiences, but they aren’t moral experiences. As such, I see no reason to think that my good and bad experiences reflect any kind of distinctively moral reality. It’s not a feature of my positive experiences that they are morally good. I don’t even know what that means, and I am confident no compelling account from any philosopher will be forthcoming.

But when you reflect on pleasure it feels good in a way that seems to give one a reason to promote it — to produce more of it. This is a distinctly moral notion. Sinhababu has a longer section on this in his paper — his account is somewhat different from mine.

Even if pleasure were “good,” and I do think positive experiences are good (in an antirealist sense), nothing about these experiences strikes me as morally good. I don’t think there is any principled distinction between moral and nonmoral norms. I think the very notion of morality is a culturally constructed pseudocategory, not a legitimate category in which normative and evaluative concepts could subsist independent of the idiosyncratic tendency for certain linguistic communities to refer to them as “moral.” So it’s not clear to me how my positive experiences relate in any meaningful way to the culturally constructed notion of moral good that persists in contemporary analytic philosophy.

Pleasure feels good in the sense that it’s desirable, worth aiming at, worth promoting. If this argument successfully establishes that pleasure is worth promoting, then it has done all that it needs to do. I don’t think morality is anything over and above a description of the things that are well and truly worth promoting.

I don’t think any of my experiences involve any distinctively moral phenomenology, and such experiences are better explained in nonmoral terms. I’d note, however, that the notion that “hedonism is true” doesn’t make clear that hedonism is the true moral theory which isn’t explicitly stated here. I don’t know if Sinhababu (or BB, or anyone else) claims to have distinctively moral phenomenology, but I don’t think that I do, and I’m skeptical that anyone else does.

This question is ambiguous, but I think the answer would be no.

In any case, if this remark: “Therefore, hedonism is true — pleasure is the only good,” … is meant to convey the notion that hedonism is true in a way indicative of moral realism, I still I am very confident that it doesn’t mean anything; that is, I think this is literally unintelligible. I find my experiences to be good, in that I consider them good, but I don’t think this in any way indicates that they are good independent of me considering them as such, nor do I think this even makes any sense.

I’d have a few things to say here.

1 It seems that most people have an intuitive sense of what it means to say something is wrong. This acquaintance with normal usage is going to be more helpful than some formulaic definition that appears in a dictionary.

2 This seems rather like denying that there’s knowledge on the grounds that we don’t have a good definition of it. Things are very difficult to define — but that doesn’t mean we can’t be confident in our concepts of them. Nothing is ever satisfactorily defined.

3 I take morality to be about what we have impartial reason to aim at. In other words, what we’d aim at if we were fully rational and impartial.

Bush quotes me saying the following.

“Phenomenal introspection involves reflecting on experiences and forming beliefs about what they’re like (e.g. I conclude that my yellow wall is bright and that itching is uncomfortable).”

He responds.

But the latter isn’t part of phenomenal introspection. Only the former is. Phenomenal introspection involves reflecting on your experiences such that you have the appearance of a bright yellow wall and the sense of an itch; the beliefs you form about these experiences aren’t part of the phenomenal introspection; they’re just standard philosophical reflection, or theory-building, that seeks to account for those experiences. And while we’re all welcome to engage in such theorizing, it’s a mistake to say that those beliefs are part of phenomenal introspection itself, or that you form beliefs about what those experiences are like; what you describe instead seem like inferences about what’s true given those experiences. And such inferences aren’t part of the phenomenology.

The beliefs about what they’re like are beliefs about the experience. So, for example, the belief that hunger is uncomfortable is reliably formed through phenomenal introspection.

There are other difficulties with BB’s framing here:

Premise 2 is true — when we reflect on pleasure we conclude that it’s good and that pain is bad.

This is ambiguous. What does BB mean by ‘good’ and ‘bad’? Since I understand these in antirealist terms, if Premise 2 is taken to imply that they’re true in a realist sense, then I simply deny the premise. I find it odd and disappointing that BB would echo the common tendency for philosophers to engage in such ambiguous claims. BB knows as well as I do that one of the central disputes in metaethics is between realism and antirealism. So why would BB present a premise that only includes, on the surface, normative claims, without making the metaethical presuppositions in the claim explicit?

This was responded to above — when we reflect on pain we conclude that it’s the type of thing that’s worth avoiding, that there should be less of. We conclude this even in cases when we want pain. To give an example, I recall when I was very young wanting to be cold for some reason. I found that it still felt unpleasant, despite my desire to brave the cold.

This particular ambiguity is especially common in metaethics, and its proliferation has a clear and perfidious rhetorical value: moral realists often present normative claims, e.g., “x is good” or “it’s wrong to torture babies for fun,” without making their metaethical presuppositions explicit, e.g., “x is stance-independently good” or “it’s objectively wrong to torture babies for fun.” Yet these normative claims serve as the premises to arguments that presuppose realism, or that are intended as arguments for realism, or are intended to prompt intuitions against antirealism and in favor of realism. All of these uses are illegitimate, because they rely on the inappropriate pragmatic implicature that to reject the premise or the claim isn’t merely to reject its metaethical component (which has been concealed), but the normative claim itself.

Earlier in this article I was more precise and clarified the things that the anti-realist is committed to.

The other problem with this remark is the claim that when “we” reflect on pleasure we conclude that it’s good and that pain is bad. Who’s “we”? Not me, certainly. I don’t reach the same conclusions as BB does via introspection. BB echoes yet another bad habit of contemporary analytic philosophers: making empirical claims about how other people think without doing the requisite empirical work. BB does not have any direct access to what other people’s phenomenology is like, so there’s little justification in making claims about what things are like for other people in the absence of evidence. And there’s little empirical evidence most people claim to have phenomenology that lends itself to moral realism.

I think Lance does have such phenomenology — he’s just terminologically confused. When he reflects on his pain, he concludes it’s worth avoiding — that’s why he avoids it! I think if he reflected on being in pain even in cases when he wanted to be in pain, he’d similarly conclude that it was undesirable.

6 Responding to Objections

A Disagreement

One common objection to moral realism is the argument from disagreement. The basic version is as follows.

Premise 1: If there is disagreement in some domain, then that domain contains only subjective truths

Premise 2: The moral domain has disagreement

Therefore, the moral domain contains only subjective truths

Problem: Premise 1 is obviously false. The domains of physics, mathematics, and numerous others garner lots of disagreement, yet they are objective.

There are lots of more robust arguments from disagreement — however, I think the best paper on this subject by Enoch decisively refutes them.

B Access

Some worry about how we have access to the moral facts. Enoch puts these worries to rest decisively.

I think we can rather safely postpone discussion of these worries to the following subsections, without saying much more on epistemic access. This is not just because one way of understanding talk of epistemic access is as an unofficial introduction to one of the other ways of stating the challenge, or because as they stand, worries about epistemic access are too metaphorical to be theoretically helpful (it isn’t clear, after all, what ‘‘access’’ exactly means here). The more important reason why we can safely avoid further discussion of the worry put in terms of epistemic access is the following. In the following subsections, I discuss versions of the epistemological worry put in terms of justification, reliability, and knowledge. It is possible, of course, that my arguments there fail. But if they do not, what remaining epistemological worry could talk of epistemic access introduce? If in the next subsections I manage to convince you that there are no special problems with the justification of normative beliefs, with the reliability of normative beliefs, or with normative knowledge, it seems to me you should be epistemologically satisfied. I do not see how talk of epistemic access should make you worried again

Enoch goes on to argue that, once the epistemic challenge is put in terms of justification, reliability, or knowledge, there are no special problems for moral realism. I’d recommend the full paper for the details.

C Correlation

Enoch thinks the most puzzling version of the epistemological objection doesn’t focus on any of the things above — instead, it focuses on a puzzling correlation: the correlation between the correct moral views and the moral claims we happen to believe. Enoch says

Suppose that Josh has many beliefs about a distant village in Nepal. And suppose that very often his beliefs about the village are true. Indeed, a very high proportion of his beliefs about this village are true, and he believes many of the truths about this village. In other words, there is a striking correlation between Josh’s beliefs about that village and the truths about that village. Such a striking correlation calls for explanation. And in such a case there is no mystery about how such an explanation would go—we would probably look for a causal route from the Nepalese village to Josh (he was there, saw all there is to see and remembers all there is to remember, he read texts that were written by people who were there, etc.). The reason we are so confident that there is such an explanation is precisely that the striking correlation is so striking—absent some such explanation, the correlation would be just too miraculous to believe. Utilizing such an example, Field (1989, pp. 25–30) suggests the following problem for mathematical Platonism: Mathematicians are remarkably good when it comes to their mathematical beliefs. Almost always, when mathematicians believe a mathematical proposition p, it is indeed true that p, and when they disbelieve p (or at least when they believe not-p) it is indeed false that p. There is, in other words, a striking correlation between mathematicians’ mathematical beliefs (at least up to a certain level of complexity) and the mathematical truths. Such a striking correlation calls for explanation. But it doesn’t seem that mathematical Platonists are in a position to offer any such explanation. 
The mathematical objects they believe in are abstract, and so causally inert, and so they cannot be causally responsible for mathematicians’ beliefs; the mathematical truths Platonists believe in are supposed to be independent of mathematicians and their beliefs, and so mathematicians’ beliefs aren’t causally (or constitutively) responsible for the mathematical truths. Nor does there seem to be some third factor that is causally responsible for both. What we have here, then, is a striking correlation between two factors that Platonists cannot explain in any of the standard ways of explaining such a correlation—by invoking a causal (or constitutive) connection from the first factor to the second, or from the second to the first, or from some third factor to both. But without such an explanation, the striking correlation may just be too implausible to believe, and, Field concludes, so is mathematical Platonism. Notice how elegant this way of stating the challenge is: There is no hidden assumption about the nature of knowledge, or of epistemic justification, or anything of the sort. There is just a striking correlation, the need to explain it, and the apparent unavailability of any explanation to the challenged view in the philosophy of mathematics.

On this, several points are worth making.

1 As Enoch points out, this is an explanatory game, so it makes sense to compare the explanatory adequacy of the theories holistically, and see if the best ones favor realism.

2 Also pointed out by Enoch, many people are in error, so the correlation isn’t that striking — it’s not as though there’s perfect correlation.

3 Our reasoning can weed out lots of views that are inconsistent — so that narrows the pool even more.

I’d also note

4 The correlation is not that striking — the correct moral view, which seems to be hedonistic act utilitarianism, is often wildly unintuitive.

5 Most of our beliefs tend to be right. Thus, based purely on priors, we’d expect the same broad pattern to be true when it comes to our moral beliefs.

6 The same broad argument can be made against epistemic realism — there’s an analogous unexplained correlation in that case too — yet this doesn’t debunk our epistemic beliefs.

D Evolutionary Debunking

Street famously argued that our moral beliefs are evolutionarily debunkable — we formed them for evolutionary reasons, independent of their truth, so we shouldn’t believe them.

First, as Sinhababu points out, we’d expect evolution to make us reliable judges of our conscious experience. Belief in the badness of pain resists debunking because it’s formed through a mechanism that would evolve to be reliable. Much like beliefs about vision aren’t debunkable, neither are beliefs about our mental states, given that beings who can form accurate beliefs about their mental states are more likely to survive.

Second, as Bramble (2017) points out, evolution just requires that pain isn’t desired; it doesn’t require the moral belief that the world would be better if you didn’t suffer. Given this, there is no way to debunk normative beliefs about the badness of pain.

Third, there’s a problem of inverted qualia. As Hewitt (2008) notes, it seems eminently possible to imagine a being who sees red as blue and blue as red, without having much of a functional change. However, it seems like undesirability rigidly designates pain, such that you couldn’t have a being with an identical qualitative experience of pain who seeks out and desires pain. This means that the badness and correlated undesirability of pain are necessary features, not subject to evolutionary change.

One could object that there are many people like sadists who do, in fact, desire pain. However, when sadists are in pain, the experience they gain is one they find pleasurable. This is not a counterexample to the rule, so much as one that shows that experiences can have many features in common with pain, while lacking its intrinsic badness. A decent analogy here would be food—eating the same food at different times will produce different results, even with the same general taste. If one finds a food disgusting, their experience of eating it will be bad. Traditionally painful experiences are similar in this regard—closely related experiences can actually be desirable.

Fourth, evolution can’t debunk the direct acquaintance we have with the badness of pain, any more than it could debunk the belief that we’re conscious. Much like I have direct access to the fact that I’m conscious, I have direct access to the badness of pain. After I stub my toe, my conviction that the pain was bad is greater than my conviction that the external world exists.

Fifth, it’s plausible that beings couldn’t be radically deluded about the quality of their hedonic experiences, in much the same way they can’t be deluded about whether or not they’re conscious. It seems hard to imagine an entity could have an experience of suffering but want more of it.

Sixth, there’s a problem of irreducible complexity. Pain only serves an evolutionary advantage if it’s not desired when experienced. Thus, the experience evolving by itself would do no good. Similarly, a mutation that makes a being not want to be in pain would do no good, unless it already feels pain. Both of those require the other one to be useful, so neither would be likely to emerge by themselves. However, only the intrinsic badness of pain which beings have direct acquaintance with can explain these two emerging together.

Seventh, evolution gave us the ability to do abstract, careful reasoning. This reasoning leads us to form beliefs about moral facts, in much the same way it does for mathematical facts.

E Explanatorily Unnecessary

People often object to moral realism on the grounds that the moral facts are explanatorily unnecessary. The earlier comments apply — positing real moral facts explains the convergence, for example, in our moral views. It also explains our moral seemings — seemings that inform us that, for example, it’s wrong to torture infants for fun and would be so even if nobody thought that it was.

F Objectionably Queer

Ever since the time of Mackie, it’s been objected that moral realism is objectionably queer: something about it is strange. However, it’s pretty unclear what exactly is supposed to be so strange about it. As Taylor says

Firstly, there is ‘the metaphysical peculiarity of the supposed objective values, in that they would have to be intrinsically action-guiding and motivating’; related to this is ‘the problem how such values could be consequential or supervenient upon natural features’ of the world (p. 49)

However, it’s not clear why exactly this is so queer. As Huemer notes, many things are very different from everything else. Time is very different from other things, as is space, as are laws of physics — but we shouldn’t give up our belief in those things.

On top of this, it’s not clear why normativity is queer. There seem to be other things that are irreducibly normative — epistemic normativity seems on firm ground. One who believes the earth is flat on the basis of the available evidence is objectively making an epistemic error and, in an epistemic sense, they ought to change their views. None of this seems too queer.

Mackie just describes what morality is, before declaring that it’s too queer.

If you look at the attitudes of most everyday people towards the notion that it’s really wrong to torture infants for fun, it doesn’t seem strange to them at all.

Additionally, if one is too concerned about queerness, I think hedonism gives a particularly promising route for avoiding such worries. To quote my book:

There are several ways the hedonic facts resist the charge of being objectionably queer. The first is that our mental states are already very queer. If one assessed the odds that a universe made up of particles and waves, matter and energy, could sustain the smorgasbord of truly bizarre mental states that exist, the fact that some mental states are normative would be among the least surprising. Start with the fundamental strangeness that there’s any consciousness at all—somehow generated by neurons—and then combine that with the bizarreness of the following mental states: color qualia (particularly when we consider that there are color qualia that no human will ever see but that non-human animals have seen), psychedelic experiences, the intrinsic motivation that comes with the experience of desire, the strangeness of taste qualia, and the fact that there are literal entire dimensions that we will never experience.

Once we become accustomed to these mental states, it’s very easy to no longer appreciate just how strange they are. Yet if we imagine what the mental states that we haven’t experienced must be like—for example, the experience of a bat using echolocation, or of perceiving four-dimensional objects—it becomes clear just how miraculous and bizarre our conscious experiences are. Thus, if something as strange as value were to lurk anywhere in the universe, the obvious place for it to be would be as part of experience, alongside its equally strange brethren.

Yet there’s another account of why normative qualia wouldn’t be objectionably strange—namely, that the supposedly strange feature of qualia, their normativity, is something that we commonly accept. Every time a person makes a decision on account of something they know, they are treating their mental states as normative—they take particular facts or experiences of which they’re aware to count either for or against an act.

Take one simple example: when one puts their hand on a hot stove, they pull away rapidly. Something about the feeling of the stove seems to urge that one remove their hand from the stove—immediately!

Indeed, anti-realists commonly accept that desires have reason-giving force. However, if desires—a type of mental state—can have reason-giving force, there seems no reason in principle that valenced qualia can’t.

Street (2008) provides a constructivist account of reasons—arguing we evolved to have a feeling of ‘to be doneness’. When one’s hand is on a hot stove, however, not only do they have a feeling of ‘to be avoidedness’ but that feeling seems to be fitting. Were they fully rational, that feeling wouldn’t go away. That’s because it’s a substantive property of some mental states—including the one experienced when one’s hand is on a hot stove—that they are simply worth avoiding.


Given the immense debate about moral realism, in this article, I have not been able to cover all of the relevant articles and arguments. However, I think I’ve summarized many of the main reasons to be a moral realist — some of which have, to the best of my knowledge, yet to be explored in the literature.

These arguments have been unapologetically pro-hedonist. This is because I think the anti-realist challenges to hedonism are far weaker than the challenges to other moral realist views.

2 Non-physicalism about consciousness

0 A Brief Introduction

Why is there something rather than nothing? This question is quite difficult—perhaps even as difficult as the hard problem of consciousness. However, let’s consider some clearly terrible answers to the question.

There isn’t—something is an illusion.

Something is a weakly emergent property of nothing. When you have nothing for a little while, it combines to form something. Science will soon explain how nothing becomes something. Positing that there’s something that exists and is not reducible to nothing is like vitalism or phlogiston.

But these answers are structurally quite similar to, and functionally just as unsuccessful as, many “solutions” to the hard problem of consciousness. In this blog post, I shall spell out why physicalist solutions to the hard problem fail—and why we need to be some type of dualist, idealist, or panpsychist.

Dispositionally, I’m an ardent physicalist. My intuitive, pre-theoretic leanings are strongly physicalist. However, when confronted with a brutal gang of facts, I was forced to abandon those leanings. This article draws heavily on the arguments of Chalmers in The Conscious Mind—definitely worth checking out for those who have not yet read it.

Let’s begin by defining physicalism. The SEP writes

Physicalism is, in slogan form, the thesis that everything is physical.

1 Broad Considerations

“Consciousness is a biological phenomenon.”

—John Searle, being wrong.

So why do I think that physical stuff cannot, even in principle, explain consciousness? Well, there are two closely related higher-order considerations, and then some more specific arguments.

The first broad consideration which explains why consciousness resists physicalist reduction is that physics explains things in terms of structure and function, as Chalmers notes. Physics gives equations to describe what things do and what they’re composed of. However, this cannot in principle explain what it’s like to eat a strawberry, see the color red, or be in love. When we look at an atom, we have no way of verifying whether or not it is conscious, because we only observe its causal impacts.

So this is not analogous to phlogiston or vitalism or anything else physicalists use as an analogy for consciousness. All of those are broadly explicable in terms of structure and function, and thus they don’t require any extra laws. Consciousness is different—it’s not, even in principle, explainable in terms of structure and function.

A second related broad consideration which has been expressed eloquently by Kastrup is that material stuff can be exhaustively explained quantitatively. Through physics, we get a series of equations. To quote Kastrup

Chalmers basically said that there is nothing about physical parameters – the mass, charge, momentum, position, frequency or amplitude of the particles and fields in our brain – from which we can deduce the qualities of subjective experience. They will never tell us what it feels like to have a bellyache, or to fall in love, or to taste a strawberry. The domain of subjective experience and the world described to us by science are fundamentally distinct, because the one is quantitative and the other is qualitative.

2 Zombies

The most obvious way (although not the only way) to investigate the logical supervenience of consciousness is to consider the logical possibility of a zombie: someone or something physically identical to me (or to any other conscious being), but lacking conscious experiences altogether. At the global level, we can consider the logical possibility of a zombie world: a world physically identical to ours, but in which there are no conscious experiences at all. In such a world, everybody is a zombie.

So let us consider my zombie twin. This creature is molecule for molecule identical to me, and identical in all the low-level properties postulated by a completed physics, but he lacks conscious experience entirely. (Some might prefer to call a zombie “it,” but I use the personal pronoun; I have grown quite fond of my zombie twin.) To fix ideas, we can imagine that right now I am gazing out the window, experiencing some nice green sensations from seeing the trees outside, having pleasant taste experiences through munching on a chocolate bar, and feeling a dull aching sensation in my right shoulder.

—David Chalmers

An adequate model of physics will be able to describe what physically goes on in your brain. However, we can imagine a physical carbon copy of you that lacks consciousness. This shows consciousness is not purely physical, since we can’t imagine a carbon copy of H2O that isn’t water.

One confusion had by many is that the zombie argument presumes some type of epiphenomenalism, the notion that consciousness has no physical effect. This is false. If consciousness has a physical effect, the zombie would have some other law of physics fill in and play the functional role of consciousness. So if consciousness causes me to say things like “I’m conscious,” “I think therefore I am,” “consciousness poses a hard problem,” “Dan Dennett might be a zombie,” “consciousness can’t be explained reductively,” “Okay—Dennett is definitely a zombie,” etc—the zombie world would have some physically identical force fill in the functional role of consciousness and cause me to say all of those things.

Thus, the argument is as follows.

1 A being could be physically identical to me yet fail to be conscious

2 Two beings that are physically identical must have all physical properties in common

Therefore, consciousness is not a physical property.
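For clarity, the argument above can be put in the standard modal form used in the literature. Here P abbreviates the conjunction of all physical truths about the world and Q the truth that consciousness exists as it actually does; this formalization is the conventional one, not a formula from this post:

```latex
% P: the conjunction of all physical truths about the world
% Q: the truth that consciousness exists, distributed as it actually is
\begin{align*}
\text{P1.}\quad & \Diamond\,(P \land \lnot Q)
  && \text{(a zombie world is metaphysically possible)} \\
\text{P2.}\quad & \text{Physicalism} \rightarrow \Box\,(P \rightarrow Q)
  && \text{(if physicalism is true, the physical facts necessitate consciousness)} \\
\text{C.}\quad & \lnot\,\text{Physicalism}
  && \text{(since } \Diamond(P \land \lnot Q) \text{ entails } \lnot\Box(P \rightarrow Q)\text{)}
\end{align*}
```

Stated this way, the disputes in the literature localize cleanly: most physicalist responses deny P1 (either zombies aren’t conceivable, or conceivability doesn’t entail metaphysical possibility), while P2 is close to a definition of what physicalism claims.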

There’s much more that can be said on the topic of zombies; however, to me it seems quite obvious that zombies are possible—those who deny their possibility seem conceptually confused to me. No doubt that’s how I seem to them. I haven’t the time in this article to go into all of the accounts of the alleged impossibility of zombies, but it’s worth noting that zombies are somewhat controversial.

3 Inverted Qualia

Even in making a conceivability argument against logical supervenience, it is not strictly necessary to establish the logical possibility of zombies or a zombie world. It suffices to establish the logical possibility of a world physically identical to ours in which the facts about conscious experience are merely different from the facts in our world, without conscious experience being absent entirely. As long as some positive fact about experience in our world does not hold in a physically identical world, then consciousness does not logically supervene.

It is therefore enough to note that one can coherently imagine a physically identical world in which conscious experiences are inverted, or (at the local level) imagine a being physically identical to me but with inverted conscious experiences. One might imagine, for example, that where I have a red experience, my inverted twin has a blue experience, and vice versa. Of course he will call his blue experiences “red,” but that is irrelevant. What matters is that the experience he has of the things we both call “red”—blood, fire engines, and so on—is of the same kind as the experience I have of the things we both call “blue,” such as the sea and the sky.


If consciousness just is a physical phenomenon, then it would be impossible to change conscious experiences without making a physical change. However, it seems eminently metaphysically possible that we could change consciousness without making a physical change. Imagine a world physically identical to ours but in which one tomato that I see appears 1% redder than it does currently. If you think that world is possible, then consciousness is not purely physical.

Note, I’m perfectly willing to grant that based on the world as it currently exists, such a state would be impossible. There are, in my view, psychophysical laws that govern consciousness which make it so that consciousness can’t be different. However, we could make tweaks to those laws without having a physical effect, which shows consciousness is not physical.

4 Epistemic Asymmetry

As we saw earlier, consciousness is a surprising feature of the universe. Our grounds for belief in consciousness derive solely from our own experience of it. Even if we knew every last detail about the physics of the universe—the configuration, causation, and evolution among all the fields and particles in the spatiotemporal manifold—that information would not lead us to postulate the existence of conscious experience. My knowledge of consciousness, in the first instance, comes from my own case, not from any external observation. It is my first-person experience of consciousness that forces the problem on me.

From all the low-level facts about physical configurations and causation, we can in principle derive all sorts of high-level facts about macroscopic systems, their organization, and the causation among them. One could determine all the facts about biological function, and about human behavior and the brain mechanisms by which it is caused. But nothing in this vast causal story would lead one who had not experienced it directly to believe that there should be any consciousness. The very idea would be unreasonable; almost mystical, perhaps.

It is true that the physical facts about the world might provide some indirect evidence for the existence of consciousness. For example, from these facts one could ascertain that there were a lot of organisms that claimed to be conscious, and said they had mysterious subjective experiences. Still, this evidence would be quite inconclusive, and it might be most natural to draw an eliminativist conclusion—that there was in fact no experience present in these creatures, just a lot of talk.

You guessed it.

If consciousness were a reductively explainable physical property, then we’d be able to deduce its existence from knowledge of the lower-level facts. However, this is manifestly impossible in the case of consciousness. If you knew everything about atoms, you’d be able to deduce the existence of fire and explain what it does. However, nothing about consciousness is evident from low-level descriptions of physical systems.

Why do you think others are conscious? Well, the reason is that you know you’re conscious and others plausibly have features similar to the ones that make you conscious. However, this is not how we deduce that others can get sick. Rather, we directly observe others getting sick. Even if we were in perfect health, it would be reasonable to infer that others get sick. However, if you were not conscious, it would not be reasonable to infer that others were conscious. This is because consciousness is not explainable by low-level physical facts.

5 The Knowledge Argument


The most vivid argument against the logical supervenience of consciousness is suggested by Jackson (1982), following related arguments by Nagel (1974) and others. Imagine that we are living in an age of a completed neuroscience, where we know everything there is to know about the physical processes within our brain responsible for the generation of our behavior. Mary has been brought up in a black-and-white room and has never seen any colors except for black, white, and shades of gray. She is nevertheless one of the world’s leading neuroscientists, specializing in the neurophysiology of color vision. She knows everything there is to know about the neural processes involved in visual information processing, about the physics of optical processes, and about the physical makeup of objects in the environment. But she does not know what it is like to see red. No amount of reasoning from the physical facts alone will give her this knowledge.

It follows that the facts about the subjective experience of color vision are not entailed by the physical facts. If they were, Mary could in principle come to know what it is like to see red on the basis of her knowledge of the physical facts. But she cannot. Perhaps Mary could come to know what it is like to see red by some indirect method, such as by manipulating her brain in the appropriate way. The point, however, is that the knowledge does not follow from the physical knowledge alone. Knowledge of all the physical facts will in principle allow Mary to derive all the facts about a system’s reactions, abilities, and cognitive capacities; but she will still be entirely in the dark about its experience of red.

—Guess who!

If consciousness were a reductively explainable physical property, then knowing all of the facts about the brain would make it possible to know what it’s like to see red, even for someone who had never seen it. However, this is clearly impossible. No neuroscientific knowledge can communicate what it’s like to see red to one who has never seen red. If Mary left the room and saw a red tomato, she’d learn something new about what it’s like to see red. If she had previously wondered what it was like to see red, her curiosity would be satisfied by seeing the color.

No amount of neurological knowledge could teach a deaf person what it’s like to hear Mozart or a blind person what it’s like to see the Grand Canyon. However, if consciousness were purely physical, this would be possible. If one knew all of the facts about bricks, they could know all the relevant facts about brick walls, because a brick wall is an emergent property of bricks. If consciousness were merely physical, then much like full physical knowledge would teach you everything there is to know about a tumor, a supernova, or an ocean, the same would be true of consciousness. However, this is manifestly impossible.

6 From The Absence Of Analysis

If proponents of reductive explanation are to have any hope of defeating the arguments above, they will have to give us some idea of how the existence of consciousness might be entailed by physical facts. While it is not fair to expect all the details, one at least needs an account of how such an entailment might possibly go. But any attempt to demonstrate such an entailment is doomed to failure. For consciousness to be entailed by a set of physical facts, one would need some kind of analysis of the notion of consciousness—the kind of analysis whose satisfaction physical facts could imply—and there is no such analysis to be had.

The only analysis of consciousness that seems even remotely tenable for these purposes is a functional analysis. Upon such an analysis, it would be seen that all there is to the notion of something’s being conscious is that it should play a certain functional role. For example, one might say that all there is to a state’s being conscious is that it be verbally reportable, or that it be the result of certain kinds of perceptual discrimination, or that it make information available to later processes in a certain way, or whatever. But on the face of it, these fail miserably as analyses. They simply miss what it means to be a conscious experience. Although conscious states may play various causal roles, they are not defined by their causal roles. Rather, what makes them conscious is that they have a certain phenomenal feel, and this feel is not something that can be functionally defined away.

—Greg, just kidding, Chalmers obviously.

When we consider facts about a physical system, none of them make it obvious why those things would make it conscious. Consider, for example, the integrated information theory, which says that when one system processes a variety of different types of information, it becomes conscious, with its consciousness proportional to the amount of integrated information. When information is integrated, nothing about that physical state obviously produces consciousness. It seems like there’s a further question—we know a system has integrated information, but that doesn’t settle whether it’s conscious.

Consciousness is not just integrated information. It seems eminently possible to imagine a non-conscious system that integrates information. When we identify the neural correlates of consciousness, it’s never obvious why those things would be conscious. We can understand why H2O is water, but there is no comparable explanation of why the neural correlates of consciousness are consciousness.

7 Disembodied Minds

If consciousness were just a physical phenomenon, then disembodied minds would be metaphysically impossible. Because heat just is the rapid movement of particles, disembodied heat is impossible. To have heat, one needs particles moving rapidly.

It would make no sense to talk about a non-physical tortoise, box, or pancreas, because these are physical phenomena. However, disembodied minds—minds without bodies—seem metaphysically possible. We could imagine mental functions going on, even in the absence of a body. This shows that consciousness isn’t a purely physical property—it could exist in the absence of physical things.

8 Some Concluding Thoughts On Why This Isn’t Vitalism

Vitalism is the notion that living organisms contain some fundamental, life-causing, non-physical substance—“élan vital.” Many have drawn analogies between non-physicalism about consciousness and vitalism, since both posit something non-material. However, it’s worth noting that none of the arguments above applies to vitalism.

Life just is a matter of structure and function and can be described quantitatively—so it’s not susceptible to the first argument. A “life zombie,” physically identical to a living thing but not alive, is obviously impossible. It is possible to use low-level phenomena to explain life, unlike consciousness. There’s no analogue of the inverted qualia argument. Knowing all the physical facts about a system would let you know whether it’s alive and all the facts about its life; there is an account of how cells replicate and constitute life; and disembodied life is obviously impossible.

The properties appealed to by vitalists were non-physical properties, but ones we now know don’t exist. There’s nothing it is to be alive over and above the physical facts about cell replication, growth, and the other things required for life. Thus, the correct view about vitalism was illusionism—the properties that élan vital was posited to explain weren’t real. But we know consciousness is real! It’s the most certainly known natural phenomenon—we can be more certain that we’re conscious than we can be of anything else.

Abandoning physicalism isn’t abandoning an answer to the problems of consciousness—it merely recognizes what form the answer must take. Non-physicalist theories are testable and make predictions that can be subsequently verified.

Sometimes, the correct answers are surprising and run afoul of our heuristics. Generally, people worrying about new technology are wrong, but not when it comes to AI alignment. Usually, Parfit is right, but not when it comes to the repugnant conclusion. Preachy vegans are irritating, but they’re right. Reductionism is enticing—it would be so nice if consciousness were just some physical phenomenon—but there are knockdown arguments against such a view. We mustn’t be held captive to reductionist dogma in the face of overwhelming evidence.

Eliezer is provably wrong about zombies

I enjoy much of what Eliezer Yudkowsky says. He’s been a large part of raising worries about AI alignment, writes tons of interesting LessWrong posts, wrote the epic HPMOR, and has shaped my thinking in many ways. However, Yudkowsky is, as the title hints, wrong about zombies.

A zombie is a being physically identical to a conscious being in every way, minus the consciousness. The important thing to note is that, if consciousness is causally efficacious, the zombie would have something else filling in the causal roles that consciousness plays in the person.

Yudkowsky writes

Your “zombie”, in the philosophical usage of the term, is putatively a being that is exactly like you in every respect—identical behavior, identical speech, identical brain; every atom and quark in exactly the same position, moving according to the same causal laws of motion—except that your zombie is not conscious.

It is furthermore claimed that if zombies are “possible” (a term over which battles are still being fought), then, purely from our knowledge of this “possibility”, we can deduce a priori that consciousness is extra-physical, in a sense to be described below; the standard term for this position is “epiphenomenalism”.

Note, when we use possibility here, we’re describing metaphysical possibility, not physical possibility. So the question is whether there is a possible world that is atom for atom identical to this world but that lacks consciousness. All of the things done by consciousness would be done by other laws that are functionally identical to consciousness in this world, but that don’t contain any experiences.

Eliezer’s claim that this view is epiphenomenalism is false. Epiphenomenalism says consciousness doesn’t cause anything. One can accept the possibility of zombies while rejecting epiphenomenalism, because in the zombie world something else would do what your consciousness does in this world.

(For those unfamiliar with zombies, I emphasize that this is not a strawman. See, for example, the SEP entry on Zombies. The “possibility” of zombies is accepted by a substantial fraction, possibly a majority, of academic philosophers of consciousness.)

But it is a strawman! The zombie argument doesn’t entail epiphenomenalism. It’s often made by interactionist dualists, panpsychists, and idealists. It’s frustrating that Eliezer strawmans the argument while specifically disclaiming that he’s strawmanning it. I’m not suggesting bad faith here; it’s just a bit frustrating.

When you open a refrigerator and find that the orange juice is gone, you think “Darn, I’m out of orange juice.” The sound of these words is probably represented in your auditory cortex, as though you’d heard someone else say it. (Why do I think this? Because native Chinese speakers can remember longer digit sequences than English-speakers. Chinese digits are all single syllables, and so Chinese speakers can remember around ten digits, versus the famous “seven plus or minus two” for English speakers. There appears to be a loop of repeating sounds back to yourself, a size limit on working memory in the auditory cortex, which is genuinely phoneme-based.)

Let’s suppose the above is correct; as a postulate, it should certainly present no problem for advocates of zombies. Even if humans are not like this, it seems easy enough to imagine an AI constructed this way (and imaginability is what the zombie argument is all about). It’s not only conceivable in principle, but quite possible in the next couple of decades, that surgeons will lay a network of neural taps over someone’s auditory cortex and read out their internal narrative. (Researchers have already tapped the lateral geniculate nucleus of a cat and reconstructed recognizable visual inputs.)

So your zombie, being physically identical to you down to the last atom, will open the refrigerator and form auditory cortical patterns for the phonemes “Darn, I’m out of orange juice”. On this point, epiphenomenalists would willingly agree.

But, says the epiphenomenalist, in the zombie there is no one inside to hear; the inner listener is missing. The internal narrative is spoken, but unheard. You are not the one who speaks your thoughts, you are the one who hears them.

If we look inside the brain, what we see happening involves the flow of electric signals from your brain to the muscles in your arm, resulting in the refrigerator opening. The point of the zombie argument is that you could imagine a world where all of that goes on in exactly the same way—it looks precisely the same from the outside in terms of the movement of all of the atoms—but you are not conscious when it goes on.

I’m not an epiphenomenalist (my credence in it is around 10%), but epiphenomenalists can explain this. If consciousness just is what it feels like for the brain to do things, then it will feel as though your consciousness is the cause, even though it is merely what it feels like for the brain to do things.

The Zombie Argument is that if the Zombie World is possible—not necessarily physically possible in our universe, just “possible in theory”, or “imaginable”, or something along those lines—then consciousness must be extra-physical, something over and above mere atoms. Why? Because even if you somehow knew the positions of all the atoms in the universe, you would still have to be told, as a separate and additional fact, that people were conscious—that they had inner listeners—that we were not in the Zombie World, as seems possible.

Zombie-ism is not the same as dualism. Descartes thought there was a body-substance and a wholly different kind of mind-substance, but Descartes also thought that the mind-substance was a causally active principle, interacting with the body-substance, controlling our speech and behavior. Subtracting out the mind-substance from the human would leave a traditional zombie, of the lurching and groaning sort.

This is false. When Chalmers is defining views about philosophy of mind, he writes

10 Type-E Dualism

Type-E dualism holds that phenomenal properties are ontologically distinct from physical properties, and that the phenomenal has no effect on the physical.[*] This is the view usually known as epiphenomenalism (hence type-E): physical states cause phenomenal states, but not vice versa. On this view, psychophysical laws run in one direction only, from physical to phenomenal. The view is naturally combined with the view that the physical realm is causally closed: this further claim is not essential to type-E dualism, but it provides much of the motivation for the view.

Obviously epiphenomenalism is different from Descartes’ dualism. Descartes was a substance dualist and interactionist. These extra views aren’t required for dualism. Zombieism, as Eliezer calls it, can be dualist or panpsychist—it just has to reject physicalism.

Something will seem possible—will seem “conceptually possible” or “imaginable”—if you can consider the collection of statements without seeing a contradiction. But it is, in general, a very hard problem to see contradictions or to find a full specific model! If you limit yourself to simple Boolean propositions of the form ((A or B or C) and (B or ~C or D) and (D or ~A or ~C) …), conjunctions of disjunctions of three variables, then this is a very famous problem called 3-SAT, which is one of the first problems ever to be proven NP-complete.

So just because you don’t see a contradiction in the Zombie World at first glance, it doesn’t mean that no contradiction is there. It’s like not seeing a contradiction in the Riemann Hypothesis at first glance. From conceptual possibility (“I don’t see a problem”) to logical possibility in the full technical sense, is a very great leap. It’s easy to make it an NP-complete leap, and with first-order theories you can make it arbitrarily hard to compute even for finite questions. And it’s logical possibility of the Zombie World, not conceptual possibility, that is needed to suppose that a logically omniscient mind could know the positions of all the atoms in the universe, and yet need to be told as an additional non-entailed fact that we have inner listeners.

Just because you don’t see a contradiction yet, is no guarantee that you won’t see a contradiction in another 30 seconds. “All odd numbers are prime. Proof: 3 is prime, 5 is prime, 7 is prime...”

This is of course true. The question for zombies isn’t just whether we can imagine them—I can imagine Fermat’s Last Theorem being false, but its falsity isn’t possible—but whether it’s metaphysically possible that they exist.
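Eliezer’s 3-SAT point, that failing to see a contradiction is far cheaper than proving none exists, can be made concrete with a toy brute-force satisfiability checker. This is a hypothetical illustration of the general idea (the formula and the function are mine, not drawn from either post):

```python
from itertools import product

def satisfiable(clauses, n_vars):
    """Brute-force SAT check. Each clause is a list of literals:
    literal k means variable |k| is True if k > 0, False if k < 0.
    Checks all 2^n_vars assignments, which is why 'I see no
    contradiction' is so much cheaper than 'there is none.'"""
    for assignment in product([False, True], repeat=n_vars):
        def lit_true(lit):
            value = assignment[abs(lit) - 1]
            return value if lit > 0 else not value
        # A formula is satisfied when every clause has a true literal.
        if all(any(lit_true(l) for l in clause) for clause in clauses):
            return True  # found a model: no hidden contradiction
    return False  # every assignment fails: the formula was contradictory

# (A or B), (~A or B), (A or ~B), (~A or ~B): each clause looks innocent
# on its own, but together they rule out all four assignments.
print(satisfiable([[1, 2], [-1, 2], [1, -2], [-1, -2]], 2))  # False
print(satisfiable([[1, 2], [-1, 2], [1, -2]], 2))            # True
```

The analogy is only partial—metaphysical possibility isn’t Boolean satisfiability—but it shows why “no contradiction is visible at first glance” and “no contradiction exists” can come apart.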

So let us ponder the Zombie Argument a little longer: Can we think of a counterexample to the assertion “Consciousness has no third-party-detectable causal impact on the world”?

If you close your eyes and concentrate on your inward awareness, you will begin to form thoughts, in your internal narrative, that go along the lines of “I am aware” and “My awareness is separate from my thoughts” and “I am not the one who speaks my thoughts, but the one who hears them” and “My stream of consciousness is not my consciousness” and “It seems like there is a part of me which I can imagine being eliminated without changing my outward behavior.”

You can even say these sentences out loud, as you meditate. In principle, someone with a super-fMRI could probably read the phonemes out of your auditory cortex; but saying it out loud removes all doubt about whether you have entered the realms of testability and physical consequences.

This certainly seems like the inner listener is being caught in the act of listening by whatever part of you writes the internal narrative and flaps your tongue.

Imagine that a mysterious race of aliens visit you, and leave you a mysterious black box as a gift. You try poking and prodding the black box, but (as far as you can tell) you never succeed in eliciting a reaction. You can’t make the black box produce gold coins or answer questions. So you conclude that the black box is causally inactive: “For all X, the black box doesn’t do X.” The black box is an effect, but not a cause; epiphenomenal; without causal potency. In your mind, you test this general hypothesis to see if it is true in some trial cases, and it seems to be true—”Does the black box turn lead to gold? No. Does the black box boil water? No.”

But you can see the black box; it absorbs light, and weighs heavy in your hand. This, too, is part of the dance of causality. If the black box were wholly outside the causal universe, you couldn’t see it; you would have no way to know it existed; you could not say, “Thanks for the black box.” You didn’t think of this counterexample, when you formulated the general rule: “All X: Black box doesn’t do X”. But it was there all along.

(Actually, the aliens left you another black box, this one purely epiphenomenal, and you haven’t the slightest clue that it’s there in your living room. That was their joke.)

If you can close your eyes, and sense yourself sensing—if you can be aware of yourself being aware, and think “I am aware that I am aware”—and say out loud, “I am aware that I am aware”—then your consciousness is not without effect on your internal narrative, or your moving lips. You can see yourself seeing, and your internal narrative reflects this, and so do your lips if you choose to say it out loud.

I have not seen the above argument written out that particular way—”the listener caught in the act of listening”—though it may well have been said before.

I think this is a pretty good argument against epiphenomenalism. However, this does nothing to show that consciousness is physical, and it doesn’t answer the zombie argument. Consider an analogy—imagine that the cause of gravity is a god willing gravity to be so, one who is defined as being non-physical. Even though gravity is caused by the non-physical mind, we could imagine a world that’s physically identical, where gravity is caused by something else, other than the non-physical mind. Consciousness is the same.

But it is a standard point—which zombie-ist philosophers accept!—that the Zombie World’s philosophers, being atom-by-atom identical to our own philosophers, write identical papers about the philosophy of consciousness.

At this point, the Zombie World stops being an intuitive consequence of the idea of a passive listener.

Philosophers writing papers about consciousness would seem to be at least one effect of consciousness upon the world. You can argue clever reasons why this is not so, but you have to be clever.

You would intuitively suppose that if your inward awareness went away, this would change the world, in that your internal narrative would no longer say things like “There is a mysterious listener within me,” because the mysterious listener would be gone. It is usually right after you focus your awareness on your awareness, that your internal narrative says “I am aware of my awareness”, which suggests that if the first event never happened again, neither would the second. You can argue clever reasons why this is not so, but you have to be clever.

But again, you could have some functional analogue that does the same physical things your consciousness does. Any physical effect that consciousness has on the world could, in theory, be caused by something else. If consciousness has an effect on the physical world, it’s no coincidence that its stand-in in the zombie world would have to be hyper-specific, causing the zombie to talk about consciousness in exactly the same way.

One strange thing you might postulate is that there’s a Zombie Master, a god within the Zombie World who surreptitiously takes control of zombie philosophers and makes them talk and write about consciousness.

A Zombie Master doesn’t seem impossible. Human beings often don’t sound all that coherent when talking about consciousness. It might not be that hard to fake their discourse, to the standards of, say, a human amateur talking in a bar. Maybe you could take, as a corpus, one thousand human amateurs trying to discuss consciousness; feed them into a non-conscious but sophisticated AI, better than today’s models but not self-modifying; and get back discourse about “consciousness” that sounded as sensible as most humans, which is to say, not very.

But this speech about “consciousness” would not be spontaneous. It would not be produced within the AI. It would be a recorded imitation of someone else talking. That is just a holodeck, with a central AI writing the speech of the non-player characters. This is not what the Zombie World is about.

By supposition, the Zombie World is atom-by-atom identical to our own, except that the inhabitants lack consciousness. Furthermore, the atoms in the Zombie World move under the same laws of physics as in our own world. If there are “bridging laws” that govern which configurations of atoms evoke consciousness, those bridging laws are absent. But, by hypothesis, the difference is not experimentally detectable. When it comes to saying whether a quark zigs or zags or exerts a force on nearby quarks—anything experimentally measurable—the same physical laws govern.

This is not true. As this paper notes

[A]n interactionist dualist can accept the possibility of zombies, by accepting the possibility of physically identical worlds in which physical causal gaps go unfilled, or are filled by something other than mental processes. The first possibility would have many unexplained physical events, but there is nothing metaphysically impossible about unexplained physical events. Also: a Russellian “panprotopsychist”, who holds that consciousness is constituted by the unknown intrinsic categorical bases of microphysical dispositions, can accept the possibility of zombies by accepting the possibility of worlds in which the microphysical dispositions have a different categorical basis, or none at all. (Chalmers 2004:184)

Chalmers himself notes in a comment below the original post

It seems to me that although you present your arguments as arguments against the thesis (Z) that zombies are logically possible, they’re really arguments against the thesis (E) that consciousness plays no causal role. Of course thesis E, epiphenomenalism, is a much easier target. This would be a legitimate strategy if thesis Z entails thesis E, as you appear to assume, but this is incorrect. I endorse Z, but I don’t endorse E: see my discussion in “Consciousness and its Place in Nature”, especially the discussion of interactionism (type-D dualism) and Russellian monism (type-F monism). I think that the correct conclusion of zombie-style arguments is the disjunction of the type-D, type-E, and type-F views, and I certainly don’t favor the type-E view (epiphenomenalism) over the others. Unlike you, I don’t think there are any watertight arguments against it, but if you’re right that there are, then that just means that the conclusion of the argument should be narrowed to the other two views. Of course there’s a lot more to be said about these issues, and the project of finding good arguments against Z is a worthwhile one, but I think that such an argument requires more than you’ve given us here.

Thus, even if consciousness causes things, its causal profile is just a description of what consciousness does. One could imagine a world where all the atoms move in the same way, as if prompted by consciousness, but nothing conscious causes them to. A subjective experience may do something causally, but on interactionism you could imagine a physical law that does exactly the same things consciousness does. Next, Eliezer says

The Zombie World has no room for a Zombie Master, because a Zombie Master has to control the zombie’s lips, and that control is, in principle, experimentally detectable. The Zombie Master moves lips, therefore it has observable consequences. There would be a point where an electron zags, instead of zigging, because the Zombie Master says so. (Unless the Zombie Master is actually in the world, as a pattern of quarks—but then the Zombie World is not atom-by-atom identical to our own, unless you think this world also contains a Zombie Master.)

Interactionism doesn’t hold that consciousness is not experimentally detectable—that’s not a necessary entailment of dualism. The zombie world on interactionism wouldn’t need an extra zombie master. Suppose that the psychophysical law in this world is that when you get a bunch of neurons together they become conscious and then their desires exert some force. Well, the zombie world would have the same forces exerted, just minus the mental state of desires.

Why would anyone bite a bullet that large? Why would anyone postulate unconscious zombies who write papers about consciousness for exactly the same reason that our own genuinely conscious philosophers do?

The reason is that consciousness is not merely causal. It does cause things, but there’s something it’s like to see red over and above what seeing red causes. Thus, in theory, you could take away that extra something and still have a causal isomorph. The reason people postulate that consciousness is causally inert is that

A) there are problems incorporating its causal role into physics.

B) the epiphenomenalist need only posit that when a person has a particular desire, that desire corresponds with the physical effect. Epiphenomenalists argue that the simplest psychophysical laws involve the physical state that is about to raise your arm causing consciousness, rather than the other way around.

Zombie-ists are property dualists—they don’t believe in a separate soul; they believe that matter in our universe has additional properties beyond the physical.

“Beyond the physical”? What does that mean? It means the extra properties are there, but they don’t influence the motion of the atoms, like the properties of electrical charge or mass. The extra properties are not experimentally detectable by third parties; you know you are conscious, from the inside of your extra properties, but no scientist can ever directly detect this from outside.

One can be an interactionist property dualist. Property dualism just requires holding that consciousness is a property of matter, not a separate substance.

Once you’ve postulated that there is a mysterious redness of red, why not just say that it interacts with your internal narrative and makes you talk about the “mysterious redness of red”?

Isn’t Descartes taking the simpler approach, here? The strictly simpler approach?

Why postulate an extramaterial soul, and then postulate that the soul has no effect on the physical world, and then postulate a mysterious unknown material process that causes your internal narrative to talk about conscious experience?

Why not postulate the true stuff of consciousness which no amount of mere mechanical atoms can add up to, and then, having gone that far already, let this true stuff of consciousness have causal effects like making philosophers talk about consciousness?

I am not endorsing Descartes’s view. But at least I can understand where Descartes is coming from. Consciousness seems mysterious, so you postulate a mysterious stuff of consciousness. Fine.

I lean towards interactionist dualism, so I’m in agreement with Eliezer here. However, the claim that dualism is motivated by finding something that seems mysterious and then simply positing mysterious stuff is totally wrong. Dualists don’t just give up on explanation; there are many ways specific dualists have experimentally tested their theories.

There are lots of reasons to posit dualism of some sort, which I lay out here. The fundamental one is that the laws of physics explain phenomena in terms of structure and function—yet none of that can explain the subjective experience of seeing red, for example. Subjective experience is neither structural nor functional, so a physics-based account that explains things in terms of structure and function is wholly inadequate.

Chalmers critiques substance dualism on the grounds that it’s hard to see what new theory of physics, what new substance that interacts with matter, could possibly explain consciousness. But property dualism has exactly the same problem. No matter what kind of dual property you talk about, how exactly does it explain consciousness?

When Chalmers postulated an extra property that is consciousness, he took that leap across the unexplainable. How does it help his theory to further specify that this extra property has no effect? Why not just let it be causal?

This is not accurate. For one thing, Chalmers is now fairly undecided between different versions of non-physicalism. And Chalmers objects to substance dualism on the grounds that it violates the causal closure of the physical, has trouble explaining how consciousness would interact with matter, and is plausibly ruled out by physics.

Overall, I quite like Eliezer, as I said at the outset. However, it’s frustrating that when it comes to consciousness, he just seems very lost. This is particularly a problem given that consciousness is literally the most important thing in the universe—the only important thing in the universe. So it’s really, really, really important not to get things wrong, when it comes to consciousness.

Eliezer at one point says

That-which-we-name “consciousness” happens within physics, in a way not yet understood, just like what happened the last three thousand times humanity ran into something mysterious.

Yet within physics, the last 3,000 times, we haven’t just posited the same old laws: Newton discovered brand-new laws, and so did Einstein. Consciousness is no more fundamentally mysterious—there is just some set of fundamental psychophysical laws that give rise to consciousness, which has a causal effect on the world.

Trying to explain it with the same old stuff, when we have lots of knockdown arguments against the old stuff’s ability to explain it—that’s an appeal to magic. Eliezer’s reductive account involves positing that when you have certain physical things, they just produce experience, despite our inability to do any of the following:

A) Understand how physics could go beyond explaining structure and function.

B) Provide any account of how brain stuff generates consciousness.

C) Provide a physical description of any type of conscious state.

All other successful reductions have involved explaining the behavior of things at a higher level by appealing to lower-level facts. But this just won’t work for consciousness! Consciousness isn’t about behavior. When we ask whether an AI is conscious, we don’t care whether it verbally reports that it’s conscious. What we care about is whether the ineffable what-it’s-like-ness is present in the AI.

This is a seriously important mistake for effective altruists to avoid. We must not wish away and ignore the fundamental difficulty of the hardest problem in the universe. Saying “it just emerges” is not a good solution. And yet I fear that’s the solution of many of my fellow effective altruists and rationalists—a mistake that could be very costly.