You’re welcome, of course. Pearl’s book on causality is a great place to start. I also recommend Causation, Prediction, and Search by Spirtes, Glymour, and Scheines. Depending on your technical level and your interests, you might find Woodward’s Making Things Happen a better entry point. After that, there are many excellent papers to explore, depending on your interests.
JonathanLivengood
Hullo Less Wrongers,
I am a philosopher working mostly on methodology and causal inference, though I also dabble in (new wave) experimental philosophy—not to be confused with the straight-up physics that went by that name from the days of Newton and Boyle until some time in the mid-nineteenth century. ;)
I just finished my PhD (in history and philosophy of science) and started as an assistant professor of philosophy at the University of Illinois at Urbana-Champaign on August 16th.
From time to time over the last two or three years, I’ve glanced at Less Wrong and found it engaging. I am a bit depressed at the pessimism often displayed with respect to contemporary philosophy, but part of that depression is the recognition that the critiques are pretty reasonable. Anyway, I thought I should officially sign on so that I can throw in my two cents and expose my thinking to severe—but, hopefully, courteous—testing.
Only 99%? That sounds low. ;)
I will submit (separately) three quotations from my favorite philosopher, C.S. Peirce:
Upon this first, and in one sense this sole, rule of reason, that in order to learn you must desire to learn, and in so desiring not be satisfied with what you already incline to think, there follows one corollary which itself deserves to be inscribed upon every wall of the city of philosophy: Do not block the way of inquiry.
-- C.S. Peirce
It is the man of science, eager to have his every opinion regenerated, his every idea rationalized, by drinking at the fountain of fact, and devoting all the energies of his life to the cult of truth, not as he understands it, but as he does not yet understand it, that ought properly to be called a philosopher.
-- C.S. Peirce
The elements of every concept enter into logical thought at the gate of perception and make their exit at the gate of purposive action; and whatever cannot show its passports at both those two gates is to be arrested as unauthorized by reason.
-- C.S. Peirce
I was hoping somebody could make a coherent and plausible sounding argument for their position.
I’m not sure I’m up to the challenge, but here goes anyway …
I think you are being ungenerous to the position Tooby and Cosmides mean to defend. As I read them (see especially Section 22 of their paper), they are trying to do two things. First, they want to open up the question of how exactly people reason about probabilities—i.e., what mechanisms are at work, not just what answers people give. Second, they want to argue that humans are slightly more rational than Kahneman and Tversky give them credit for being.
First point. Tooby and Cosmides do not actually commit to the position that humans use a probability calculus in their probabilistic reasoning. What they do argue is that Kahneman and Tversky were too quick to dismiss the possibility that humans do use a probability calculus—not just heuristics—in their probabilistic reasoning. If humans never gave the output demanded by Bayes’ theorem, then K&T would have to be right. But T&C show that in more ecologically valid cases, (most) humans do give the output demanded by Bayes. So, the question is re-opened as to what brain mechanism takes frequency inputs and gives frequency outputs in accordance with Bayes’ theorem. That mechanism might or might not instantiate a rule in a calculus.
Second point. If you are tempted (by K&T’s research) to say that humans are just dreadfully bad at statistical reasoning, then maybe you should hold off for a second. The question is a little bit under-specified. Do you mean “bad at statistical reasoning in general, in an abstract setting” or do you mean “bad at statistical reasoning in whatever form it might take”? If the former, then T&C are going to agree. If you frame a statistics problem with percentages, you get all kinds of errors. But if you mean the latter, then T&C are going to say that humans do pretty well on problems that have a particular form, and not surprisingly, that form is more ecologically valid.
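To make the contrast between the two framings concrete, here is a minimal sketch (the numbers are illustrative, in the style of the stock mammography example from this literature, not T&C’s own materials): the percentage framing and the natural-frequency framing describe the very same Bayesian problem, but the frequency version reduces to a simple ratio of counts.

```python
# Percentage framing: base rate 1%, hit rate 80%, false-positive rate 9.6%.
# These numbers are illustrative, not from Tooby & Cosmides.
p_disease = 0.01
p_pos_given_disease = 0.80
p_pos_given_healthy = 0.096

# Bayes' theorem: P(disease | positive)
posterior = (p_pos_given_disease * p_disease) / (
    p_pos_given_disease * p_disease
    + p_pos_given_healthy * (1 - p_disease)
)

# Natural-frequency framing of the same problem: out of 1000 people,
# 10 have the disease; 8 of those 10 test positive, and about 95 of
# the 990 healthy people also test positive.
sick_and_positive = 8
healthy_and_positive = 95
posterior_freq = sick_and_positive / (sick_and_positive + healthy_and_positive)

print(round(posterior, 3))       # 0.078
print(round(posterior_freq, 3))  # 0.078
```

The posterior is identical either way; what changes is that in the frequency version the answer is just “8 positives out of 103,” which is the sort of computation T&C argue people handle well.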
General rule of charity: If someone appears to be defending a claim that you think is obviously ridiculous, make sure they are actually defending what you think they are defending and not something else. Alternatively (or maybe additionally), look for the strongest way to state their claim, rather than the weakest way.
In your linked piece, you were talking about formal epistemology. Here you say “formal philosophy.” Is that a typo, or do you think that formal epistemology exhausts formal philosophy? (I would hope not the latter, since lots of formal work gets done in philosophy outside epistemology!)
Full disclosures below.*
I agree with much of Glymour’s manifesto, but I think the passage quoted would have been better left on the cutting-room floor. One reason is given in the critique you link: lots of philosophy gets grants and citations and employment in diverse areas around the academy and elsewhere. Not all of it gets noticed in science or furthers a scientific project, even broadly construed. For example, John Hawthorne just won a multi-million dollar grant to do work in epistemology of religion, and a couple of years ago, Alfred Mele won a multi-million dollar grant to do more work on free will. I doubt that Glymour thinks either of these projects has the virtues of the work of his CMU colleagues. But by the “grant-winning” standard, administrators should love this sort of philosophy. By a sales or readership standard, administrators ought to be encouraging more pop-culture-and-philosophy schlock.
Another reason is given by Glymour in the same manifesto:
A real use of philosophy departments is to provide shelter for such thinkers [who are, at least initially, outsiders to the science of the day, people who will take up questions that may have been made invisible to scientists because of disciplinary blinkers], and in the long run they may be the salvation of philosophy as an academic discipline.
So, a good use for philosophy departments is to shelter iconoclastic thinkers who are not going to be either understood or appreciated by contemporary scientists. How are such people going to be successful grant-winners? I can see how they might successfully publish within philosophy, given a certain let-every-flower-bloom attitude in philosophy. And I can see how some philosophers might end up convincing some scientists to take their work seriously enough to fund it … eventually. But surely, some of Glymour’s iconoclasts will be missed or ignored in the grant-giving process. Better, I think, to have some places for people to think whatever they want to think and be supported in that thinking so that they do not have to panic about meeting the basic necessities of life. If that means having to put up with literary criticism, then so be it.
Disclosures. I did my dissertation under Peter Spirtes, and I’ve taken many enjoyable classes with Clark Glymour. I think Clark is an excellent person, and he is one of my philosophical heroes, although I don’t think I do a very good job of emulating him.
Larger than logic? Hmm … maybe you’re thinking about “formal philosophy” in a way that I am unfamiliar with.
I would like to have some clarification on why you think personhood is what drives the immorality of killing. (At least, that was the impression I got from reading through the earlier thread.) More concretely, I would like to know what you think about the following sorts of consideration:
(1) The argument that what makes killing a being X prima facie seriously morally wrong is that X would have future experiences worth having were it not killed. Interestingly, this tracks in the opposite direction from your replacement calculation: younger beings are more valuable than older beings.
(2) The argument that destruction of any kind is prima facie morally wrong and that the wrongness tracks the complexity of the thing destroyed. One might have the view that the destruction of things like cats, computers, Rembrandt paintings, tables, and so forth requires some justification and that without justification, acts of destruction should be penalized, say by fines or imprisonment. I guess what I want here is some more precision about your “if done for some reason other than sadism” clause: what sorts of reasons, on which side does the law err if there is controversy about the goodness of the reasons, etc.
(3) I know that you are ducking giving an account of what makes something a person, but it would be very helpful if you at least sketched some of your thoughts. You have said a few times that you couldn’t come up with a definition of “person” that would make babies people, but that claim is a bit empty until we’ve seen some of your thinking.
Why not “optimality” and “optimalist”?
(I agree with jimrandomh’s criticisms of replacing “rational” with “optimal”: the replacement should not be done. But I have to confess to an initial, strongly positive reaction to the prospect of junking “rationalist,” since for me, as a philosopher, that word picks out Descartes, Spinoza, and Leibniz.)
I don’t know all of the ins and outs of the literature, but the basic problems here go back at least to Bentham and Mill, who had a dispute about kinds of pleasure and pain. Bentham took the view that all pains and pleasures were on the same footing. A human appreciating a work of art is no different from a pig appreciating a good roll in the mud. Mill took the view that pains and pleasures had more internal structure. Of course, for both Bentham and Mill, pain played a big part in the moral calculus. General concern about the moral standing of animals goes back a lot further: Descartes, for example, claimed that we have a moral certainty that animals have no souls—otherwise, we couldn’t eat them—but it’s not clear to me whether he connected this to pain.
More recently, the debate seems to be about the degree to which an analogical argument works that takes us from human pain to animal pain. See, for example, an older article by Singer (excerpts only) and a newer article by Allen et al. (pdf). But for most of these people, the issues are not theological.
If you can get to the conclusion that God exists regardless of the facts, then of course, you will be indifferent to the facts. That is, I think, the big danger in reasoning to a foregone conclusion.
Could you explain how you are calculating (or intuiting?) the relevant probabilities?
I feel like I am really missing something here. I don’t see how the modal argument is supposed to work. I have lots of evidence that I am conscious in this world. But how is that evidence supposed to help when I move to a different world—one in which I may or may not be a foobar?
At a first pass, I just don’t know how to parse the claims you are making. Are you saying, for example, that P(I am a foobar in this world) < P(A foobar is conscious in this world), or P(I am a foobar in some possible world) < P(A foobar is conscious in some possible world), or … ?
At a second pass, I’m not sure how to evaluate the probability of modal claims.
At a third pass, I’m worried that your argument equivocates on the interpretation of probability in your two assumptions. The first assumption—that P(I’m a foobar) > P(A foobar can be conscious)—seems to use a modal relative-frequency interpretation, on which the probability of an event is the frequency of possible worlds in which the event occurs. The second assumption—that P(I’m conscious) is nearly one—seems to use an evidentialist or maybe personalist view of probability. But I don’t think these two can be combined unless you have some principle by which evidence that I am conscious in this world is also evidence that I am conscious in nearly every possible world.
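To illustrate the worry, here is a toy possible-worlds model (the worlds and their property assignments are entirely made up): on the relative-frequency reading, a probability is just a fraction of worlds, and evidence gathered inside one world does nothing to change that fraction.

```python
# A hypothetical set of possible worlds, each tagged with whether
# I am a foobar there and whether I am conscious there.
worlds = [
    {"i_am_foobar": True,  "i_am_conscious": True},
    {"i_am_foobar": True,  "i_am_conscious": False},
    {"i_am_foobar": False, "i_am_conscious": True},
    {"i_am_foobar": False, "i_am_conscious": False},
    {"i_am_foobar": True,  "i_am_conscious": False},
]

def freq(pred):
    """Modal relative frequency: the fraction of worlds where pred holds."""
    return sum(pred(w) for w in worlds) / len(worlds)

# Relative-frequency reading: P(I'm a foobar) = 3/5.
p_foobar = freq(lambda w: w["i_am_foobar"])

# Evidential reading: in *this* world I observe that I am conscious,
# so my credence that I am conscious is ~1, whatever the other worlds say.
this_world = worlds[2]
p_conscious_evidential = 1.0 if this_world["i_am_conscious"] else 0.0

# But the cross-world frequency of consciousness is only 2/5 — the two
# readings assign different numbers to structurally similar claims.
p_conscious_freq = freq(lambda w: w["i_am_conscious"])

print(p_foobar, p_conscious_evidential, p_conscious_freq)  # 0.6 1.0 0.4
```

The mismatch between the evidential value (1.0) and the frequency value (0.4) is exactly the gap the argument would need a bridging principle to close.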
Could you try explaining in more detail?
I read it the same way, I think. You say initially that you pocket 4k and 6k is given to charity. Later you talk as if you are pocketing the 6k. My guess is that the mistake is in the original description, since you later say that five lives are saved at $800 per life.
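For what it’s worth, here is the arithmetic behind that guess (assuming the scenario’s total is the $10k implied by the 4k/6k split):

```python
# Figures from the thread: five lives saved at $800 per life.
lives_saved = 5
cost_per_life = 800          # dollars
total = 4_000 + 6_000        # the 4k/6k split implies a $10k total

to_charity = lives_saved * cost_per_life  # 5 * 800 = 4000
pocketed = total - to_charity             # leaving 6000 pocketed
print(to_charity, pocketed)  # 4000 6000
```

So the lives-saved figure fits $4k going to charity and $6k being pocketed, which is the reverse of the original description.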
Two things.
First, it is worth drawing the distinction between what it is permissible to do and what it is obligatory to do. One might plausibly think that in this case, it is permissible to push the button but not obligatory to do so.
Second, an interesting follow-up question, I think, is the point at which people tip. If you are like me, then you might balk at pressing the button for such small dollar amounts, but also, if you are like me, you probably have a dollar value at which you would flip to thinking that pressing the button is permissible. I actually think that at some value, pressing the button becomes obligatory. But what are those values? Can rational agents with the same evidence disagree about them?
If you get an answer of “permissible but not obligatory”, then you aren’t finished; you’ve only concluded that it isn’t overwhelmingly slanted in one direction, but you still need a decision.
Why? In terms of algorithms, this might just be a place where you want to flip a coin. Or do you think that admissible decision procedures should always give the same answer to the same question? (If so, I’d love to know why you think that.)
But also, depending on whether you think rational agents with the same evidence can disagree about the button, you might think that “permissible-not-obligatory” is a worthwhile social category, even if you don’t think it ever obtains for an individual. That is, you might want a set of laws that allow such acts but do not punish people if they choose not to perform such acts.
I agree with a lot of the content—or at least the spirit—of the post, but I worry that there is some selectivity that makes philosophy come off worse than it actually is. Just to take one example that I know something about: Pearl is praised (rightly) for excellent work on causation, but very similar work developed at the same time by philosophers at Carnegie Mellon University, especially Peter Spirtes, Clark Glymour, and Richard Scheines, isn’t even mentioned.
Lots of other philosophers could be added to the list of people making interesting, useful contributions to causation research: Christopher Hitchcock at Caltech, James Woodward at Pitt HPS, John Norton at Pitt HPS, Frederick Eberhardt at WashU, Luke Glynn at Konstanz, David Danks at CMU, Ned Hall at Harvard, Jonathan Schaffer at Rutgers, Nancy Cartwright at the LSE, and many others (maybe even including my own humble self).
I am not trying to defend philosophy on the whole. I agree that we have some disease in philosophy that ought to be cut away. But I don’t think that philosophy is in as bad a shape as the post suggests. More importantly, there is a lot of good, interesting, useful work being done in philosophy, if you know where to look for it.