Bias and Naturalism: a Challenge

There are two theses which I think many LWers find attractive, but which on the face of it are at odds. This challenge is to find a way to reconcile them. Bluntly and a bit inaccurately:

  1. You can’t trust your untutored native cognitive endowment to make rational (or moral) judgements.

  2. All knowledge - including knowledge of what's rational (or moral) - is scientific. To learn what's rational (or moral), our only option is to study our native cognitive endowments.

In more ponderous detail:

Point 1) It's taken for granted on LW - and I have no problem with this - that without effort to correct ourselves, humans systematically make irrational judgements. This was always obvious, but research of the last few decades, which this blog usefully advertises, exposes it quite starkly (Kahneman, Tversky, et al.). I think it's equally plausible that the moral judgements of an average unreflective person who has not benefited from any moral education will fall short of those of someone who has been morally educated (moral education makes us less cruel).

Point 2) Some LW contributors subscribe to naturalism. One way of understanding this idea is that all knowledge is scientific knowledge - I mean, knowledge of facts about the measurable, natural world. In particular, whatever there may be to know about what is rational or moral can be known only through empirical investigation - specifically, investigation of the functioning of Homo sapiens' cognitive apparatus, and possibly facts about human evolution and ethology.

The Problem: Point (1) tells us that studying what people actually think and do will not tell us what's rational or moral. Indeed, if we try to figure out, say, how to judge the probability of heads on a toss of a fair coin given, say, 5 prior tails, merely studying what untutored people are apt to judge will give us a bum steer. But point (2) seems to tell us that's all we are allowed. How do we augment mere cognitive science with other natural sciences, without inadvertently smuggling in our values in the process, so as to deduce from naturalistic inquiry what's rational or moral? (The point as it pertains specifically to morality is argued eloquently in this review (esp. section 3).)
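To make the coin example concrete, here is a minimal simulation sketch (Python; the trial count and the run of 5 tails are just illustrative choices). It shows what the untutored gambler's-fallacy judgement gets wrong: among fair-coin sequences that open with 5 tails, the next flip still lands heads about half the time.

```python
import random

def simulate(trials=1_000_000, run_of_tails=5):
    """Estimate P(heads on flip 6 | flips 1-5 were all tails) for a fair coin."""
    heads_after, qualifying = 0, 0
    for _ in range(trials):
        flips = [random.random() < 0.5 for _ in range(run_of_tails + 1)]  # True = heads
        if not any(flips[:run_of_tails]):          # first 5 flips were all tails
            qualifying += 1
            heads_after += flips[run_of_tails]     # did the 6th flip land heads?
    return heads_after / qualifying

print(simulate())  # roughly 0.5, not the "heads is due" answer intuition suggests
```

The point, of course, is that we know 0.5 is the right answer by appeal to a normative theory of probability, not by polling what people are apt to judge.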

Here's a fantasy to spell out the idea. Imagine that you had a highly accurate computer model of the human cognitive apparatus, and a sufficiently powerful computer to run a great number (thousands? millions?) of instances simultaneously - and interacting as humans would - and that you could run a history of many thousands of years of such interacting instances, related and constrained as humans are by the modelled equivalents of births and deaths and marriages and environments. How could such a model inform the question of what's rational or moral? Could one know that the system will reach some kind of equilibrium, say, and be justified in believing that this equilibrium state would represent rational and moral interactions? G.E. Moore famously argued that one cannot analyse being good purely in naturalistic terms, since it will always be coherent to ask of something possessing the natural properties whether it really is good.
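At a vastly smaller scale, here is a toy stand-in for that fantasy (every parameter below is an illustrative stipulation, not a claim about real cognition): agents repeatedly play a prisoner's dilemma and imitate better-scoring agents until behaviour settles.

```python
import random

# Payoffs for (row, column) players in a one-shot prisoner's dilemma.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def run(n_agents=100, generations=200, seed=0):
    rng = random.Random(seed)
    strategies = [rng.choice('CD') for _ in range(n_agents)]
    for _ in range(generations):
        scores = [0] * n_agents
        for i in range(n_agents):                # each agent plays its neighbour
            j = (i + 1) % n_agents
            a, b = PAYOFF[(strategies[i], strategies[j])]
            scores[i] += a
            scores[j] += b
        new = strategies[:]
        for i in range(n_agents):                # imitate a better-scoring agent
            j = rng.randrange(n_agents)
            if scores[j] > scores[i]:
                new[i] = strategies[j]
        strategies = new
    return strategies.count('C') / n_agents      # fraction cooperating at the end

print(run())  # the population settles into some equilibrium mix
```

Even if such a system reliably settles into an equilibrium, Moore's open question remains live: whatever mix of cooperation and defection the model converges on, it is still coherent to ask whether that state is good, and the model itself supplies no answer.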

And here's another way to formulate what I think is the same point. It's close to an 'analytic' truth that a belief is rational just in case you ought to hold it - 'p is rational' is just shorthand for 'you ought to believe p'. But what people in fact believe notoriously differs from what they ought to. So,

  1. How do you identify a properly scientific filter to pull out all and only the rational beliefs from all others? (A deliberately crude sketch of such a filter follows this list.) And,

  2. Assuming you could devise an adequate filter of this kind, how would you give a non-question-begging, properly scientific defence of the proposition that the beliefs it identifies are exactly those which one ought to believe?
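To see where the question-begging threatens, consider the shape any such filter must take in code (a deliberately naive sketch; every criterion and threshold below is my own stipulation, which is precisely the problem):

```python
# Candidate "filter": a belief passes if it clears two thresholds.
# Both criteria and both cutoffs are chosen by the programmer -
# i.e. the values are smuggled in, not read off from nature.

def is_rational(evidence_support, coherence):
    return evidence_support > 0.9 and coherence > 0.8

beliefs = [
    ("the coin is fair",              0.95, 0.90),
    ("heads is 'due' after 5 tails",  0.05, 0.40),
]
rational = [b for b, e, c in beliefs if is_rational(e, c)]
print(rational)  # ['the coin is fair']
```

Whatever scoring rules and thresholds the filter uses, they encode a standard of rationality chosen in advance; question (2) asks what scientific finding could justify that standard without presupposing it.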

(All this presupposes that beliefs are, in any case, naturalistically respectable entities - itself a doubtful assumption.)
