Why exactly is using/uttering an ignorance prior better than “I don’t know”? The two convey exactly the same amount of information (“speaker has no data”). It seems to me that the only difference is that the former conveys additional worthless information in the form of an estimate of a probability that bears no necessary relationship to reality.
Robin: would you say that the quantity of addictions—and addictions that make people genuinely, deeply unhappy—in the world is pretty good evidence that we in fact systematically tend to underestimate our self-control problems?
But (I call back) I already saved one child from the train tracks, and thus I am “unimaginably” far ahead on points. Whether I save the second child, or not, I will still be credited with an “unimaginably” good deed. Thus, I have no further motive to act. Doesn’t sound right, does it?
This isn’t a problem with the claim that a human life is of infinite value as such. It’s a problem with the claim that it’s morally appropriate to attach the concept of comparable value to human lives at all. It’s what happens when you start taking most utilitarians seriously. (For an overview of some of the creepy results you get when you start really applying utilitarianism, check out Derek Parfit’s Reasons and Persons.)
A Kantian would have an easy answer to your infinite moral points. That answer would go something like this: “there’s no such thing as moral points, and it’s not about how good you feel about yourself. There’s a duty to save the kid, so go do it.” And that position is based on the same intuition about the infinite value of persons as the one you quote, but instead of infinity as a mathematical concept, it’s about lexical priority: saving humans (or, generally, treating them as ends, etc.) is simply lexically ahead of all other values. And we can understand that as being what the Talmud really meant.
Eliezer: that’s a good point as far as it goes. But the answer many contemporary deontologists would give is that you can’t expect to be able to computationally cash out all decision problems, and particularly moral decision problems. (Who said morality was easy?) In hard cases, it seems to me that the most plausible principles of morality don’t provide cookie-cutter determinate answers. What fills in the void? Several things kick in under various versions of various theories. For example, some duties are understood as optional rather than necessary, which gives the agent enough room to make either decision (as long as s/he’s acting from moral motivations). Similarly, some decisions can be made by the agent’s consideration of their own character—what sort of a person am I? What are my values? Those sorts of things can often fill in the gaps in practical reason left by non-computational moral theories.
So I take it you don’t like Kierkegaard? Humph.
Seriously, though, I wonder to what extent it’s really possible to argue people out of religion. And I strongly suspect it’s close to zero.
Is the function of a post like this (and Dennett’s books on the subject, and everything Dawkins has done in the last N years, etc. etc.) less to persuade and more to—well—call it argument as attire? By hammering out yet another strong argument about the overwhelming dumbness of religion, you, and Dennett, and Dawkins (and sometimes I) self-identify as members of the atheist-intellectual-science-nerd tribe.
Eliezer, I think you’re mistaken on the facts—most theories take a lot of experimental anomaly before they get thrown out. Kuhn, for example, in The Structure of Scientific Revolutions (which I think is a much better work of history than philosophy, but anyway...), gives a marvelous description of “normal science” as just that—tinkering with the dominant paradigm, fitting new results into it bit by bit, etc.
I too see the dust specks as obvious, but for the simpler reason that I reject utilitarian sorts of comparisons like that. Torture is wicked, period. If one must go further, it seems like the suffering from torture is qualitatively worse than the suffering from any number of dust specks.
Robin: dare I suggest that one area of relevant expertise is normative philosophy for-@#%(^^$-sake?!
It’s just painful—really, really, painful—to see dozens of comments filled with blinkered nonsense like “the contradiction between intuition and philosophical conclusion” when the alleged “philosophical conclusion” hinges on some ridiculous simplistic Benthamite utilitarianism that nobody outside of certain economics departments and insular technocratic computer-geek blog communities actually accepts! My model for the torture case is swiftly becoming fifty years of reading the comments to this post.
The “obviousness” of the dust mote answer to people like Robin, Eliezer, and many commenters depends on the following three claims:
a) you can unproblematically aggregate pleasure and pain across time, space, and individuality,
b) all types of pleasures and pains are commensurable, such that for any pleasure/pain experiences i and j, a given quantity of i can be ranked as equal to, greater than, or less than some quantity of j (i.e., pleasures and pains exist on one dimension),
c) it is a moral fact that we ought to select the world with more pleasure and less pain.
But each of those three claims is hotly, hotly contested. And almost nobody who has ever thought about the questions seriously believes all three. I expect there are a few (has anyone posed the three beliefs in that form to Peter Singer?), but, man, if you’re a Bayesian and you update your beliefs about those three claims based on the general opinions of people with expertise in the relevant area, well, you ain’t accepting all three. No way, no how.
Constant, my reference to your quote wasn’t aimed at you or your opinions, but rather at the sort of view which declares that the silly calculation is some kind of accepted or coherent moral theory. Sorry if it came off the other way.
Nick, good question. Who says that we have consistent and complete preference orderings? Certainly we don’t have them across people (consider social choice theory). Even to say that we have them within individual people is contestable. There’s a really interesting literature in philosophy, for example, on the incommensurability of goods. (The best introduction I know of is the collection of essays in Ruth Chang, ed., Incommensurability, Incomparability, and Practical Reason. Cambridge: Harvard University Press, 1997.)
That being said, it might be possible to have complete and consistent preference orderings with qualitative differences between kinds of pain, such that any amount of torture is worse than any amount of dust-speck-in-eye. And there are even utilitarian theories that incorporate that sort of difference. (See chapter 2 of John Stuart Mill’s Utilitarianism, where he argues that intellectual pleasures are qualitatively superior to more base kinds. Many indeed interpret that chapter to suggest that any amount of an intellectual pleasure outweighs any amount of drinking, sex, chocolate, etc.) Which just goes to show that even utilitarians might not find the torture choice “obvious,” if they deny b) like Mill.
Eliezer—I think the issues we’re getting into now require discussion that’s too involved to handle in the comments. Thus, I’ve composed my own post on this question. Would you please be so kind as to approve it?
Recovering irrationalist: I think the hopefully-forthcoming-post-of-my-own will constitute one kind of answer to your comment. One other might be that one can, in fact, prefer huge dust harassment to a little torture. Yet a third might be that we can’t aggregate the pain of dust harassment across people, so that there’s some amount of single-person dust harassment that will be worse than some amount of torture, but if we spread that out, it’s not.
Recovering irrationalist: in your induction argument, my first stab would be to deny the last premise (transitivity of moral judgments). I’m not sure why moral judgments have to be transitive.
Next, I’d deny the second-to-last premise (for one thing, I don’t know what it means to be horribly tortured for the shortest period possible—part of the tortureness of torture is that it lasts a while).
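(To spell out the step I’m denying—my notation, not Recovering irrationalist’s:)

```latex
% Sketch of the transitivity step (illustrative notation only).
% Read $x \succ y$ as ``$x$ is morally worse than $y$.''
% The induction argument builds a chain of pairwise judgments
\[
  s_1 \succ s_2,\quad s_2 \succ s_3,\quad \dots,\quad s_{n-1} \succ s_n,
\]
% and then needs transitivity to compose them into
\[
  s_1 \succ s_n \quad \text{(the torture end is worse than the dust-speck end).}
\]
% Deny transitivity and every pairwise judgment can stand while the
% composed conclusion never follows.
```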
I feel like I ought to make my ritual attempt to fly the deontology flag on this site by reference to the possibility of attaching do/don’t do evaluations directly to actions without reference to any outcome-evaluations at all.
Yet… the end of this post might actually be the most interesting argument I’ve heard in a while for the existence and permanence of what Rawls calls “the fact of reasonable pluralism”—Eliezer offers us the useful notion that the interconnections between our values are so computationally messy that there is just no way to reconcile them all and come to agreement on actual social positions without artificially constraining the decision-space.
Oy. I just glanced through the last couple weeks of posts. Hence the lack of a loud sigh on this one before. So consider this the loud-sigh of the confirmedly anti-koan, the person who thinks that metaphor and other such non-expository modes of speech have aesthetic value only, and that if one cannot speak of an idea in clear language, well, one ought to keep silent about it. (I can see Wittgenstein glaring at me...)
Or: what’s the point of rationalist koan* exactly?
It also irks me like crazy to see people taking the Japanese word “koan” and sticking an s on the end to pluralize it. You don’t do that in Japanese.
That was a really good post.
However: I suspect people don’t really mean “is this a cult” when they say “is this a cult.” And they don’t mean “please give me reassurance of my own rationality” either.
Rather—and I’m introspecting here, so these intuitions might not generalize—it seems like “is this a cult” means “is this a really tricky system of self-supporting irrational beliefs?” Or at least that the question “is this a cult” could mean that, if we interpreted it charitably.
If that’s correct, it’s not a question about the behavior of the people involved, nor about the presence or absence of certain kinds of biases (directly) but about the way the beliefs interact. For example, one belief that a lot of cults encourage is the belief that outsiders who deny the belief are trying to persecute the cult. That belief obviously lends strength to attempts by humans to hold all the other beliefs, just as the other beliefs (e.g. that the beliefs were given by revelation) lend strength to the attempt to hold the persecution belief.
Just a random speculation I’d like to toss out.
Oh Eliezer, why’d you have to toss that parenthetical in about priors? The rest of the post is so wonderful. But the priors thing… hell, for my part, the objection isn’t to priors that aren’t imposed by some Authority, it’s to priors that are completely pulled out of one’s arse. Demanding something beyond the whim of some metaphorical marble bouncing about in one’s brain before one gets to make a probability statement is hardly the same as demanding capital-A Authority.
Ian, your argument fails not merely because premise 1 isn’t established apodictically. (Which is the flaw of inductive reasoning generally, but which, as Eliezer tries to point out to the religious, doesn’t mean we don’t have good reason to believe it.)
It also fails because we have counterexamples up the wazoo. Michael’s point about sentient creatures is one of them. But we can generate a lot of others just by diddling around the space in which we define “objects.” Balls bounce and roll, bowling balls just roll, spherical objects generally do all sorts of crazy things. So the “spherical things” case is a counterexample too, just so far as you define the class of objects in such a way that spherical things count as objects.
You get a one-to-one mapping of object to function only by defining the objects on the functions, by picking as your object a uni-function (or few-function) idea like “ball.” So your argument is actually circular in a sense.
Poke, that’s a really unhelpful way of thinking about the problem of induction. The problem of induction is a problem of logic in the first instance—a description of the fact that we do have absolute knowledge of the truth of deductive arguments (conditional on the premises being true) but we don’t have absolute knowledge of the truth of inductive arguments. And that’s just because the conclusion of a deductive argument is (in some sense) contained in the premises, whereas the conclusion of a generalization isn’t contained in the individual observations. What’s contained in the individual observations (putting on social scientist hat here) is a probability, given one’s underlying distribution, of finding data like what you found if the world is a certain way.
That’s a real distinction—it doesn’t come from somehow giving weight to imaginary possibilities, it reflects the difference between logical truth (which IS absolute) and empirical truth (which is not).
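(To pin that down in standard Bayesian notation—a sketch, nothing more:)

```latex
% Deduction: when the premises entail the conclusion $C$,
\[
  \Pr(C \mid \text{premises}) = 1.
\]
% Induction: the observations $D$ give you only a likelihood, which has
% to be combined with a prior over hypotheses $H$ via Bayes' rule, and
% the result is in general strictly less than 1:
\[
  \Pr(H \mid D) = \frac{\Pr(D \mid H)\,\Pr(H)}{\Pr(D)}.
\]
```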
We can go even stronger than mathematical truths. How about the following statement?
~(P & ~P)
I think it’s safe to say that if anything is true, that statement (the flipping law of non-contradiction) is true. And it’s the precondition for any other knowledge (for no other reason than if you deny it, you can prove anything). I mean, there are logics that permit contradictions, but then you’re in a space that’s completely alien to normal reasoning.
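(Both halves of that claim can be checked mechanically; here’s a minimal sketch in Lean 4, purely my own illustration:)

```lean
-- Minimal sketch in Lean 4 (illustrative formalization, nothing more).

-- The law of non-contradiction: P and not-P cannot both hold.
theorem non_contradiction (P : Prop) : ¬(P ∧ ¬P) :=
  fun h => h.2 h.1

-- Explosion: grant both P and ¬P, and any Q whatsoever follows.
theorem explosion (P Q : Prop) (hp : P) (hnp : ¬P) : Q :=
  absurd hp hnp
```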
So that’s lots stronger than 2+2=4. You can reason without 2+2=4. Maybe not very well, but you can do it.
So Eliezer, do you have a probability of 1 in the law of non-contradiction?
If you get past that one, I’ll offer you another.
“There is some entity [even if only a simulation] that is having this thought.” Surely you have a probability of 1 in that. Or you’re going to have to answer to Descartes’s upload, yo.
Steve: Wasn’t that the claim of the sophists? “We’ll teach you how to win arguments so you can prevail in politics.” The problem is that the skills for winning arguments aren’t necessarily the skills for rationality in general. Probably the easiest way to learn the skills for winning arguments is to go to law school and “learn to think like a lawyer.”