Communications lead at MIRI. Unless otherwise indicated, my posts and comments here reflect my own views, and not necessarily my employer’s.
Rob Bensinger
Thanks, drethelin!
How deep of an analysis do you want? Ultimately, what I mean is that torture tends to foreseeably decrease the net positive valence of all experience to a greater extent than does incapacitation.
We both know those are fuzzy terms. And as a utilitarian I acknowledge that some extremely minimal torture could in principle be more justifiable than an especially severe incapacitation. But everyday cases of what we call ‘torture’ are intuitively much more painful and dehumanizing than, say, permanently depriving a person of a firearms license or a driver’s license. Do you think that the loss of one’s long-term ability to use magic would tend to cluster on the other side of torture, on a scale of resultant human suffering?
Descriptively, most ethical systems would, I think, agree with my assessment; so if ‘ethically justifiable’ just means ‘able to be justified under what various people take to be the right ethical principles,’ it is an empirical statement. But I’ll instead take the approach of stipulating what I mean by ethical justifiability in psychological terms, the felt positive and negative valence of experiences. If this is a real property of mental states, what I call ‘ethical justifiability’ will rest on the distribution of those states. I am responsible for how I use my words, but my words are not on that account ‘about me.’
Our ability to fruitfully debate this issue, while we remain in fiction, is probably very limited. It may be underdetermined whether losing one’s magic feels more like losing a driver’s license or like losing a limb. If I’m conceiving of magic loss more in the former terms (magic as a toolbox), you more in the latter terms (magic as an intimate part of the magician), then it’s unsurprising that we’ll arrive at different intuitions.
That said, I’m unclear on what your argument is for treating torture and incapacitation as a ‘continuum.’ I of course think they can be placed on a continuum of suffering; and I concede that their distribution over the continuum partly overlaps, though I think the bulk of torture involves more intense aggregate suffering than does the bulk of incapacitation. But you seem to be making a different claim now—that torture IS a kind of incapacitation, or that incapacitation is a kind of torture.
The latter claim I can understand, but reject; incapacitation can sometimes be used to torture someone, but it does not follow that incapacitation itself is always just watered-down torture, for the same reason that the existence of ‘Chinese water torture’ does not imply that drinking water is, in any interesting sense, on a continuum with torture.
The former claim, that torture is a kind of incapacitation, seems more paradoxical. Is the suggestion that inflicting involuntary pain on someone is nothing but depriving that person of a certain ability—the ability, presumably, to be happy during the torture, or the ability to not suffer flashbacks afterward? I’m not sure this is a useful reframing, though it is interesting.
The reasons to dislike acute torture and superpower incapacitation are the same only in the very reductive way in which any two bad things are, given a monistic meta-ethics, bad for ‘the same reason.’ Sexual assault and poor dinner etiquette, if (monistically) bad, are bad for ‘the same reason’ in some attenuated sense. But for practical purposes this is not very informative, and I was trying to be at least a little practical in comparing the costs of torture and incapacitation.
Likewise, superpower incapacitation can be worse than torture mostly in the sense that any two generic acts can be dustspecked. This falls out of quantitative sensitivity in ethics (especially consequentialist ethics) as a boring side-effect, just as reducibility of reasons falls out of monism as a boring side-effect. In both cases, it has no special relevance to the topic at hand, and noting these general features of utilitarian tradeoffs doesn’t prevent us from also noting that typical real-world torture tends to produce more net suffering than typical real-world superpower incapacitation. (To make magic loss a counterexample to this trend, one would need to better flesh out what one takes magic to be.)
There are multiple different semantic values for “morality,” so it’s an ambiguous term, and the intended sense will need to be stipulated. But in most modern discussions, “social rule” is not one of those values. For instance, the rules of English grammar and of dinner etiquette are social, but not moral. And English speakers recognize that violating a social rule can be morally permissible, or even morally obligatory.
Let’s tease apart what you mean in trying to distinguish “empirical” claims from “unempirical” ones. You think that “Windows sucks” is an empirical claim, while, say, “Madonna sucks” is not. What does this mean?
(1) It can’t mean that “Madonna sucks” is meaningless. We all understand the sentence perfectly well.
(2) It can’t mean that “Madonna sucks” fails to convey information about the world. Certainly it largely or entirely conveys information about the speaker’s preferences; but those preferences are themselves a part of the world. “I prefer not to listen to Madonna’s music” is an empirical claim, a worldly claim, one you can be right or wrong about, one with perfectly ordinary truth conditions; so certainly, if that is the meaning of “Madonna sucks,” the latter sentence must be empirical too.
(3) Perhaps the idea is that “Windows sucks” conveys information ‘straightforwardly,’ while “Madonna sucks” only conveys information by implicature — we learn things aplenty when you assert it, but we don’t learn about what you literally asserted. But all assertions have implicatures, even paradigmatically empirical ones. And all assertions convey at least as much information about the beliefs and values of the asserter as they do about the thing asserted.
(4) It can’t mean that “Madonna sucks” isn’t making a claim. Something really is being asserted… grammatically, at least.
Perhaps it means that “Madonna sucks” does not correspond to a proposition? Intuitively, “Bob is in pain” and “Is Bob in pain?” and “Be in pain, Bob!” share a certain propositional content, ⟨Bob is in pain⟩. The interjection “ouch!” and the word “linoleum” and my hairstyle, on the other hand, seem to lack propositional content.
But it’s hard to see here how we could demonstrate that “Madonna sucks” is nonpropositional — it certainly seems to be asserting some fact, and if we claim to be radically mistaken in this case, it seems to put us in danger of falling into a radical skepticism about the propositional content of all our assertions.
What is being asserted? Well, at a minimum, “sucks” is being predicated of an object, “Madonna.” There is some entity such that it is the individual Madonna, and this individual sucks. Perhaps “sucks” is like “is sinful” or “is a witch,” and there is no real-world property that corresponds to it; but in that case it doesn’t follow that ⟨Madonna sucks⟩ is not a proposition. It only follows that all propositions of the form ⟨x sucks⟩, where “sucks” is used in the Madonna way and not the Windows way, are false propositions. The lack of a metaphysical basis for some term does not in itself force us to adopt a revisionary stance toward the term’s semantics.
(5) It can’t mean that the judgment “Madonna sucks” wasn’t arrived at as a result of weighing empirical data. The Madonna hater is performing the syllogism ‘All musicians who create music that I find routinely agonizing are bad; Madonna creates such music; therefore Madonna is bad.’ This badness is predicated because of the individual’s experiences.
(6) Similarly, it can’t mean that “Madonna sucks” is an incorrigible belief. New data could convince me that Madonna doesn’t suck after all — that she no longer sucks (because her new CD is excellent), or that she never sucked in the first place (because I mistook someone else’s music for hers, or because my music-evaluating faculties were impaired when I first listened to her).
So much for psychological incorrigibility. But perhaps the belief is ‘unfalsifiable,’ in some deeper sense? It’s not clear to me how. And this deprives us of the main criterion for distinguishing “Windows sucks” from “Madonna sucks;” for in both cases the sophisticated ethicist could argue that his/her truth-conditions for “x sucks” are straightforwardly empirical.
So we agree, at a minimum, that moral rules aren’t just ‘social rules.’ They may be a special kind of social rule. To figure that out, first explain to me: What makes a rule ‘social’? Is any rule made up by anyone at all, that pertains to interactions between people, a ‘social rule’? Or is a social rule a rule that’s employed by a whole social group? Or is it a rule that’s accepted as legitimate and binding upon a social group, by some relevant authority or consensus?
One of these characteristics is that people take them super seriously, even to the point of believing that they exist outside their heads, and don’t believe that they’re “just” social rules.
Most people don’t think that even frivolous, non-super-serious rules live inside their skulls. Baseball players don’t think baseball is magic, but they also don’t think the rules of baseball are neuronal states. (Whose skulls would the rules get to reside in? Is there a single ruleset spread across lots of brains, or does each brain have its own unique set of baseball rules?)
As for altruism, I share your preferences. So we can isolate the meta-ethical question from the normative one.
What do you mean by ‘subjective valuation concept’? Rationality is a ‘subjective valuation concept,’ in several senses; its metric is relativized to, established by, and finds much or all of its content in individual mental states, and it is an evaluative term whose applicability standards are likewise stipulated by a mixture of common language usage and personal preferences. What makes ‘X is rational’ more objective than ‘X sucks’?
Suppose you’re living in WW2-era Germany, and you learn of a law against helping gypsies. You see a gypsy in need, and come to the conclusion that you’re morally obliged to help that gypsy; but you shirk your felt obligation, and decide to stay out of trouble, even though it doesn’t ‘feel right.’ You consider the obligation to help gypsies a moral rule, and don’t consider the law against helping gypsies a moral rule. Moreover, you don’t think it would be a moral rule even if you agreed with or endorsed it; you’d just be morally depraved as a result.
Is there anything counter-intuitive about the situation I’ve described? If not, then it seriously problematizes the idea that morality is just ‘social + important,’ or ‘social + praised if good, punished if bad.’ The law is more important to me, or I’d not have prioritized it over my apparent moral obligation. And it’s certainly more important to the Powers That Be. And the relation of praise/punishment to good/bad seems to be reversed here. Your heuristic gets the wrong results, if it’s meant in any way to resemble our ordinary concept of morality.
Their model of the world is identical, so what are they arguing about?
Is it wise to add this assumption in? It doesn’t seem required by the rest of your scenario, and it risks committing you to absurdity; surely if their models were 100% identical, they’d have totally identical beliefs and preferences and life-experiences, hence couldn’t disagree about the rules. It will at least take some doing to make their models identical.
Can you imagine them having a serious debate about what the fictional universe is “actually” like?
Yes, very easily. Fans of works of fiction do this all the time. (They also don’t generally conceptualize orcs and elves as brain processes inside their skulls, incidentally.)
I think it’s much more likely they would argue over what things should be like in order to make an interesting/cool universe than argue over object-level universe properties.
Maybe, but you’re assuming that the act of creation always feels like creation. In many cases, it doesn’t. The word ‘inspiration’ attests to the feeling of something outside yourself supplying you with the new ideas. Ancient mythologists probably felt this way about their creative act of inventing new stories about the gods; they weren’t all just bullshitting, some of them genuinely thought that the gods were communing with them via the process of invention. That’s an extreme case, but I think it’s on one end of a continuum of imaginative acts. Invention very frequently feels like discovery. (See, for instance, mathematics.)
I actually like your fictionalist model. I think it’s much more explanatory and general than trying to collapse a lot of disparate behaviors under ‘attitude claims;’ and it has the advantage that claims about fiction clearly aren’t empirical in some sense, whereas claims about attitude seem no less empirical than claims about muons or accordions.
Identical models don’t imply identical preferences or emotions. Our brains can differ a lot even if we predict the same stuff.
Yes, but the two will have identical maps of their own preferences, if I’m understanding your scenario. They might not in fact have the same preferences, but they’ll believe that they do. Brains and minds are parts of the world.
Hm, they sure do to me.
Based on what you’re going for, I suspect the right heuristic is not ‘does it convey information about an attitude?’, but rather one of these:
- Is its connotation more important and relevant than its denotation?
- Does it purely convey factual content by implicature rather than by explicit assertion?
- Does it have reasonably well-defined truth-conditions?
- Is it saturated, i.e., has its meaning been fully specified or considered, with no ‘gaps’?
If I say “I’m very angry with you,” that’s an empirical claim, just as much as any claim about planetary orbits or cichlid ecology. I can be mistaken about being angry; I can be mistaken about the cause for my anger; I can be mistaken about the nature of anger itself. And although I’m presumably trying to change someone’s behavior if I’ve told him I’m angry with him, that’s not an adequate criterion for ‘empiricalness,’ since we try to change people’s behavior with purely factual statements all the time.
I agree with your suggestion that in disagreements over matters of fact, relatively ‘impersonal’ claims are useful. Don’t restrict your language too much, though; rationalists win, and winning requires that you use rhetoric and honest emotional appeals. I think the idea that normative or attitudinal claims are bad is certainly unreasonable, at least as unreasonable as being squicked out by interrogatives, imperatives, or interjections because they aren’t truth-apt. Most human communication is not, and never has been, and never will be, truth-functional.
What do you mean by ‘shouldn’t be done’? Do you mean it’s imprudent for an individual to spend that much money on a heart transplant, even though she values her own life?
Or do you mean it’s immoral for an individual to spend that much money on herself, rather than on greater utility for others?
Or do you mean it’s imprudent or immoral for medical practitioners and researchers to invest so much time and effort into performing heart transplants and gradually improving the technology? Or do you mean it’s imprudent or immoral for the state to fund such efforts?
Or do you mean it’s imprudent or immoral for the state to permit individuals to purchase heart transplants?
I would agree that the main problem is a lack of clear truth conditions for “x sucks;” the fact that it’s a claim about subjective states, and that it relies on implicature, is immaterial. But this is a problem to some extent for nearly all natural-language terms, including “x is rational” in the colloquial sense. And the problem can be resolved by stipulating truth-conditions for “x sucks” just as easily as for “x is rational.” So I think we’d agree that we should focus on getting people to taboo and clarify all their words, not just on feigning ‘objectivity’ by avoiding making any appeals to preferences or other mental states. Preferences are real.
Prizing equal rights obviously isn’t in tension with prizing diverse human exercise of those rights. You haven’t cited a contradiction. However, we could use your argument to spin off a real tension:
Similarity (e.g., our common humanity, our common interests and heritage and concerns) is valuable. But dissimilarity (e.g., cultural and individual diversity) is also valuable. So ‘value’ seems to be trivial.
Response: What we really value is not ‘being the same’ or ‘being different’ in a vacuum. What we value is (a) being similar or different in particular respects, and (b) having a certain ratio of similarity to difference. The English language just isn’t sophisticated enough to allow for easy slogans of either of those forms. We can’t easily signal that we value diversity, but in specific areas and not in all areas; likewise for valuing some similarities, but not all. And we can’t easily signal that we value a certain mixture of sameness and differentness, because too much of one or the other would make life less worth living. They seem like platitudes, but they aren’t false, and they’re worth taking seriously if only because they stand in for so many specific attributes that we need to take very seriously. It’s just important to see past the surface structure of some virtues.
Do you think you actually have such tendencies? ‘Dissociative identity disorder’ is a rebranding of ‘multiple personality disorder,’ which seems to some extent to be a sociohistorically constructed ailment—i.e., a real disease, but one whose nature and prevalence are strongly dependent on our cultural assumptions and folk-psychological models. Keeping that in mind, or becoming a Buddhist, might help dissolve some of the anxieties that naturally attend to noticing the disunities in one’s personality or persona. I can also recommend the book ‘Rewriting the Soul,’ by Ian Hacking.
A fifth alternative: Lord, Liar, Lunatic, Legend, or Just Plain Wrong. It’s amazing that the simplest explanations—that someone might simply be mistaken, that they might have sanely and honestly misinterpreted the data—get so completely ignored and erased.
The question is whether it’s possible to simply be mistaken about having divine powers, without having an underlying mental disorder. And clearly the answer is ‘yes;’ and clearly this possibility has a higher prior probability than ‘Jesus is Lord.’ So neglecting the option is unconscionable, and is where the trilemma gets nearly all of its plausibility as an argument for Christianity.
Suppose a few really unlikely events happened, and caused everyone around you to think you were the messiah and/or divine. Would it be inconceivable, barring true insanity or deliberate deception, to come to think oneself the messiah and/or divine? Do you think that every psychic, every cult leader, is either (independently) insane or deliberately lying? It just ain’t so; self-deception is stronger than that.
The input is the claim ‘Race is a cultural convention.’ You output the interpretation: ‘None of the phenotypic variations associated with any racial schema are physically real; they are hallucinations or figments.’ Given how transparently ridiculous the assertion is, one must at least take a moment to pause and reconsider whether the anthropologists’ claim is really what you take it to be.
Perhaps what is being denied is not the existence of morphological variation between human populations, but rather the conceptualization of these differences under the traditional concept of Race, with its assumptions of discreteness and of other markers of cultural and bio-diversity strictly mapping onto a small set of physiognomic markers. Perhaps what is also being asserted is that the precise boundaries between races, and how large or small a ‘race’ gets to be, are culturally constructed and vary across different groups possessing ‘race’-like categories. Is it more likely that anthropologists are speaking somewhat loosely and infelicitously, or that they think the existence of darker and lighter skins in different parts of the world is a Grand Alien Conspiracy?
If you used to believe this yourself, then maybe you can explain to me what you mean(t) by ‘entirely a cultural artifact.’ Did you think that the people in question didn’t have different skin tones? That skin tone isn’t a genetic trait? That there was no correlation between a racial grouping and any phenotypic or genetic marker, like skin color? That genetic relatedness is confabulated in a grand game of make-believe?
“there’s no single characteristic which doesn’t fluctuate gradually across populations”—No, some traits have reached fixation in a population, or are totally absent. But I take your point. It’s still understandable that categories predating our modern, sophisticated notions of genetic variation would be controversial in their attempted modern reimaginings.
If you assign 0 to logical contradictions, you should assign 1 to the negations of logical contradictions. (Particularly since your confidence in bivalence and the power of negation is what allowed you to doubt the truth of the contradiction in the first place.) So it’s strange to say that you feel safer appealing to 0s than to 1s.
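The step from ‘contradictions get probability 0’ to ‘tautologies get probability 1’ is just the complement rule; a minimal sketch of the standard derivation, assuming ordinary Kolmogorov-style axioms:

```latex
% Complement rule: for any proposition A, P(A) + P(\neg A) = 1,
% since A and \neg A are exclusive and exhaustive.
% Let \bot be a logical contradiction, so \neg\bot is a tautology.
P(\bot) + P(\neg\bot) = 1
\quad\Longrightarrow\quad
P(\neg\bot) = 1 - P(\bot) = 1 - 0 = 1.
```

So within the standard axioms, one cannot coherently reserve 0 for contradictions while withholding 1 from their negations; the two assignments stand or fall together.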
For my part, I have a hard time convincing myself that there’s simply no (epistemic) chance that Graham Priest is right. On the other hand, assigning any value but 1 to the sentence “All bachelors are bachelors” just seems perverse. It seems as though I could only get that sentence wrong if I misunderstand it. But what am I assigning a probability to, if not the truth of the sentence as I understand it?
Another way of saying this is that I feel queasy assigning a nonzero probability to “Not all bachelors are bachelors” (i.e., ¬(p → p)), even though I think it probably makes some sense to entertain, as a vanishingly small possibility, “All bachelors are non-bachelors” (i.e., p → ¬p: all bachelors are contradictory objects).
That’s extremely strange and surprising, if true. Can you provide an example of this?
A permanent loss of magic is probably much more ethically justifiable than a temporary period of torture.