I was under the impression that the research into biases by people like Kahneman and Tversky generally found that eliminating them was incredibly hard, and that expertise, and even familiarity with the biases in question, generally didn't help at all. So this is not a particularly surprising result; what would be more interesting is if they had found anything that actually does reduce the effect of the biases.
Overcoming these biases is very easy if you have an explicit theory which you use for moral reasoning, where results can be proved or disproved. Then you will always give the same answer, regardless of the presentation of details your moral theory doesn’t care about.
Mathematicians aren’t biased by being told “I colored 200 of 600 balls black” vs. “I colored all but 400 of 600 balls black”, because the question “how to color the most balls” has a correct answer in the model used. This is true even if the model is unique to the mathematician answering the question: the most important thing is consistency.
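The consistency point can be made concrete with a toy sketch (the function names here are invented for illustration, not taken from any source): once both phrasings are parsed into the same explicit state, any fixed rule necessarily returns the same answer for both.

```python
# Toy sketch: two phrasings of the same coloring reduce to one state.

def state_from_colored(total, colored):
    """'I colored `colored` of `total` balls black.'"""
    return (colored, total - colored)      # (black, uncolored)

def state_from_uncolored(total, uncolored):
    """'I colored all but `uncolored` of `total` balls black.'"""
    return (total - uncolored, uncolored)  # (black, uncolored)

def prefer(a, b):
    # Any fixed rule will do; this one prefers the state with more
    # black balls.
    return a if a[0] >= b[0] else b

# Both phrasings denote the same state, so no fixed rule can
# distinguish them:
assert state_from_colored(600, 200) == state_from_uncolored(600, 400)
```

The debiasing work is done entirely by the parsing step: the rule itself never sees the wording, only the state.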
If a moral theory can’t prove the correctness of an answer to a very simple problem—a choice between just two alternatives, trading off something clearly morally significant (lives), without any complications (e.g. the different people who may die don’t have any distinguishing features)—then it probably doesn’t give clear answers to most other problems either, so what use is it?
If a moral theory can’t be proved correct in itself, what use is it? Given that theories are tested against intuition, and that no theory has been shown to be completely satisfactory, it makes sense to use intuition directly.
Moral theories predict feelings; mathematical theories predict other things. Moral philosophy assumes you already know genocide is wrong, and it tries to figure out how your subconscious generates this feeling: http://lesswrong.com/lw/m8y/dissolving_philosophy/
Are you saying that because people are affected by a bias, a moral theory that correctly predicts their feelings must be affected by the bias in the same way?
This would preclude (or falsify) many actual moral theories on the grounds that most people find them unintuitive or simply wrong. I think most moral philosophers aren’t looking for this kind of theory, because if they were, they would agree much more by now: it shouldn’t take thousands of years to empirically discover how average people feel about proposed moral problems!
No—the feelings are not a truth-seeking device so bias is not applicable: they are part of the terrain.
it shouldn’t take thousands of years to empirically discover how average people feel about proposed moral problems!
It is not as though they were working on it every day for thousands of years. In the Christian period, for example, it mattered more what God said about morals than how people felt about them. There are fairly big gaps: there is a classical era and a modern era, and the two add up to a few hundred years, with all sorts of gaps in between.
IMHO the core issue is that our moral feelings are inconsistent, and this is why we need philosophy. If someone murders someone in a fit of rage, he still feels that most murders committed by most people are wrong, and maybe he regrets his own later on, but in that moment he did not feel it. Even public opinion can have such wide mood swings that you cannot just reduce morality to a popularity contest—yet in essence it is one, but more of an abstract popularity contest. This is why IMHO philosophy is trying to algorithmize moral feelings.
So is philosophy trying to describe moral feelings, inconsistent and biased as they are? Or is it trying to propose explicit moral rules and convince people to follow them even when they go against their feelings? Or both?
If moral philosophers are affected by presentation bias, that means they aren’t reasoning according to explicit rules. Are they trying to predict the moral feelings of others? (Whose? The average person’s?)
If their meta-level reasoning, their actual job, hasn’t told them which rules to follow, or has told them not to follow rules, why should they follow rules?
By “rules” I meant what the parent comment referred to as trying to “algorithmize” moral feelings.
Moral philosophers are presumably trying to answer some class of questions. These may be “what is the morally right choice?” or “what moral choice do people actually make?” or some other thing. But whatever it is, they should be consistent. If a philosopher might give a different answer every time the same question is asked of them, then surely they can’t accomplish anything useful. And to be consistent, they must follow rules, i.e. have a deterministic decision process.
These rules may not be explicitly known to themselves, but if they are in fact consistent, other people could study the answers they give and deduce these rules. The problem presented by the OP is that they are in fact giving inconsistent answers; either that, or they all happen to disagree with one another in just the way that the presentation bias would predict in this case.
A possible objection is that the presentation is an input which is allowed to affect the (correct) response. But every problem statement has some irrelevant context. No one would argue that a moral problem might have different answers between 2 and 3 AM, or that the solution to a moral problem should depend on the accent of the interviewer. And to understand what the problem being posed actually is (i.e. to correctly pose the same problem to different people), we need to know what is and isn’t relevant.
In this case, the philosophers act as if the choice of phrasing “200 of 600 live” vs. “400 of 600 die” is relevant to the problem. If we accepted this conclusion, we might well ask ourselves what else is relevant. Maybe one shouldn’t be a consequentialist between 2 and 3 AM?
You haven’t shown that they are producing inconsistent theories in their published work. The result only shows that, like scientists, individual philosophers can’t live up to their own cognitive standards in certain situations.
This is true. But it is significant evidence that they are inconsistent in their work too, absent an objective standard by which their work can be judged.
It can be hard to find a formalization of the empirical systems, though. Especially since formalizing is going to be very complicated and muddy in a lot of cases. That’ll cover a lot of ‘… and therefore, the right answer emerges’. Not all, to be sure, but a fair amount.
I would assume that detecting the danger of framing bias, as in “200 of 600 people will be saved” vs. “400 of 600 people will die”, is elementary enough that it is something an aspiring moral philosopher ought to learn to recognize and avoid before she can be allowed to practice in the field. Otherwise all their research is very much suspect.
Realize what’s occurring here, though. It’s not that individual philosophers are being asked the question both ways and are answering differently in each case. That would be an egregious error that one would hope philosophical training would prevent. What’s actually happening is that when philosophers are presented with the “save” formulation (but not the “die” formulation) they react differently than when they are presented with the “die” formulation (but not the “save” formulation). This is an error, but also an extremely insidious one, and one that is hard to correct for. I mean, I’m perfectly aware of the error, and I know I wouldn’t give conflicting responses if presented with both options, but I am also reasonably confident that I would in fact make the error if presented with just one option. My responses in that case would quite probably be different from those in the counterfactual where I was only provided with the other option. In each case, if you subsequently presented me with the second framing, I would immediately recognize that I ought to give the same answer as I gave for the first framing, but what that answer is would, I anticipate, be affected by the initial framing.
I have no idea how to avoid that sort of error, beyond basing my answers on some artificially created algorithm rather than my moral judgment. I mean, I could, when presented with the “save” formulation, think to myself “What would I say in the ‘die’ formulation?” before coming up with a response, but that procedure is still susceptible to framing effects. The answer I come up with might not be the same as what I would have said if presented with the “die” formulation in the first place.
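The “ask myself the other formulation first” procedure could in principle be mechanized, which would at least remove the arrival-framing (a hedged sketch; all names are invented for illustration): the judgment step only ever sees one fixed canonical wording.

```python
# Hypothetical sketch: route every version of the problem through one
# canonical wording before any judgment is made.

def as_die_framing(total, saved=None, died=None):
    """Restate either formulation in the 'die' wording."""
    if died is None:
        died = total - saved
    return f"{died} of {total} people will die"

def debiased_answer(judgment_fn, total, saved=None, died=None):
    # judgment_fn (a human, or a model of one) only ever sees the
    # canonical wording, so its answer cannot depend on which framing
    # the question arrived in -- though, as noted above, the canonical
    # wording is itself still a framing.
    return judgment_fn(as_die_framing(total, saved=saved, died=died))

# Both arrival framings produce the identical prompt:
assert as_die_framing(600, saved=200) == as_die_framing(600, died=400)
```

This buys consistency but not neutrality: whatever bias the fixed canonical wording induces is applied uniformly to every question.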
I have no idea how to avoid that sort of error, beyond basing my answers on some artificially created algorithm rather than my moral judgment.
Do you think that this is what utilitarianism is, or ought to be?
I mean, I could, when presented with the “save” formulation, think to myself “What would I say in the ‘die’ formulation?” before coming up with a response, but that procedure is still susceptible to framing effects. The answer I come up with might not be the same as what I would have said if presented with the “die” formulation in the first place.
So, do you think that, absent a formal algorithm, when presented with a “save” formulation, a (properly trained) philosopher should immediately detect the framing effect, recast the problem in the “die” formulation (or some alternative framing-free formulation), all before even attempting to solve the problem, to avoid anchoring and other biases? If so, has this approach been advocated by a moral philosopher you know of?
Do you think that this is what utilitarianism is, or ought to be?
Utilitarianism does offer the possibility of a precise, algorithmic approach to morality, but we don’t have anything close to that as of now. People disagree about what “utility” is, how it should be measured, and how it should be aggregated. And of course, even if they did agree, actually performing the calculation in most realistic cases would require powers of prediction and computation well beyond our abilities.
The reason I used the phrase “artificially created”, though, is that I think any attempt at systematization, utilitarianism included, will end up doing considerable violence to our moral intuitions. Our moral sensibilities are the product of a pretty hodge-podge process of evolution and cultural assimilation, so I don’t think there’s any reason to expect them to be neatly systematizable. One response is that the benefits of having a system (such as bias mitigation) are strong enough to justify biting the bullet, but I’m not sure that’s the right way to think about morality, especially if you’re a moral realist. In science, it might often be worthwhile using a simplified model even though you know there is a cost in terms of accuracy. In moral reasoning, though, it seems weird to say “I know this model doesn’t always correctly distinguish right from wrong, but its simplicity and precision outweigh that cost”.
So, do you think that, absent a formal algorithm, when presented with a “save” formulation, a (properly trained) philosopher should immediately detect the framing effect, recast the problem in the “die” formulation (or some alternative framing-free formulation), all before even attempting to solve the problem, to avoid anchoring and other biases?
Something like this might be useful, but I’m not at all confident it would work. Sounds like another research project for the Harvard Moral Psychology Research Lab. I’m not aware of any moral philosopher proposing something along these lines, but I’m not extremely familiar with that literature. I do philosophy of science, not moral philosophy.
No. This is what theories of moral psychology do. Philosophical ethicists do not consider themselves to be in the same business.
detecting the danger of the framing bias is something an aspiring moral philosopher ought to learn to recognize and avoid before she can be allowed to practice in the field
Being able to detect a bias and actually being able to circumvent it are two different skills.
Thanks, that makes sense.