Yes, you probably think you care about believing truth – but isn’t it more plausible that you mainly care about thinking you like truth? Doesn’t that have a more plausible evolutionary origin than actually caring about far truth?
Imagine I told Robin Hanson I liked the way chocolate tastes. Do you think he’d reply: “Yes, you probably think you like the taste of chocolate – but isn’t it more plausible that you mainly care about eating calorically dense foods so you can store up fat for the winter? Doesn’t that have a more plausible evolutionary origin than actually caring about the taste of chocolate?” Of course not, because that would sound silly. It’s only for abstract intellectual desires that someone can get away with a statement like that.
If evolution “wants” you to eat calorically dense foods, it doesn’t make you actually want calories; it just makes you like the way the foods taste. And if evolution “wants” you to appear to care about truth to impress people, the most efficient way for it to accomplish that is to make you actually care about the truth. That way you don’t have to keep your lies straight. People don’t just think they care about the truth; they actually do.
I know that that’s Hanson’s quote, not yours, but the fact that you quote it indicates you agree with it to some extent.
This is like saying “if evolution wants a frog to appear poisonous, the most efficient way to accomplish that is to actually make it poisonous”. Evolution has a long history of faking signals when it can get away with it. If evolution “wants” you to signal that you care about the truth, it will do so by causing you to actually care about the truth if and only if causing you to actually care about the truth has a lower fitness cost than the array of other potential dishonest signals on offer.
Poisonousness doesn’t change appearance, though. Being poisonous and looking poisonous are separate evolutionary developments. Truth-seeking values, on the other hand, affect behavior just as much as an impulse to fake truth-seeking values would, and fake truth-seeking values are probably at least as difficult to implement, most likely more so, since they require the agent to model genuine truth-seekers.
For one thing, if people with genuine truth-seeking values are competing with people who merely fake them, the genuine truth-seekers have a good chance of finding out about and punishing the fakers. This means fake truth-seekery needs to be significantly more efficient or less risky than actual truth-seeking to be the expected result of a process that selects for the appearance of truth-seeking.
This is like saying “if evolution wants a frog to appear poisonous, the most efficient way to accomplish that is to actually make it poisonous”.
The only reason making some frogs look poisonous works is that there are already a lot of poisonous frogs around whose signal most definitely isn’t fake. Faking signals only works if there are a lot of reliable signals in the environment to be confused with. So there must, at the very least, be a large number of truth-seeking humans out there. And I think that a site like “Overcoming Bias” would self-select for the truth-seeking kind among its readership.
I don’t know if any studies have been done on truth-seeking, but this is definitely the case with morality. The majority of humans have consciences; they care about morality as an end in itself, at least to some extent. But some humans (called sociopaths) don’t care about morality at all; they’re just faking having a conscience. However, sociopaths make up at most about 1/25 of the population; their adaptation is only fit when there are a lot of moral humans around to deceive.
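To make the frequency-dependence point concrete, here is a toy replicator-dynamics sketch in Python. Everything in it (the payoff numbers, the rising detection probability) is invented purely for illustration, not drawn from any data; it just exhibits the qualitative claim above: fakers prosper while rare, but their advantage erodes as they become common, so the population settles at a small equilibrium fraction of mimics.

```python
# Toy replicator dynamics: honest "morals" vs. conscience-faking "mimics".
# All payoff numbers below are invented for illustration only; the point
# is the qualitative shape of the dynamics, not the specific values.

def detection_prob(p):
    """Chance a mimic gets caught, assumed to rise as mimics get common."""
    return min(1.0, 0.2 + p)

def fitnesses(p):
    """Return (moral, mimic) fitness at mimic frequency p."""
    base = 1.0
    d = detection_prob(p)
    # Morals gain from cooperating with other morals, lose when exploited.
    w_moral = base + (1 - p) * 0.4 - p * 0.1
    # Mimics gain by exploiting morals but pay a penalty when detected.
    w_mimic = base + (1 - p) * ((1 - d) * 0.8 - d * 1.0)
    return w_moral, w_mimic

p = 0.001  # mimics start rare
for _ in range(2000):
    w_moral, w_mimic = fitnesses(p)
    mean_w = p * w_mimic + (1 - p) * w_moral
    p = p * w_mimic / mean_w  # discrete replicator update

print(f"equilibrium mimic fraction: {p:.3f}")
# Mimics invade while rare (their fitness exceeds the morals') but their
# advantage shrinks as they spread, so p stabilizes at a few percent.
```

Under these made-up payoffs the mimic fraction settles around a couple of percent, which is the shape of the argument above: mimicry is only fit as a minority strategy parasitic on a majority of reliable signalers.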
I know that that’s Hanson’s quote, not yours, but the fact that you quote it indicates you agree with it to some extent.
A very fair assessment. (But I’ll tell you what I disagree with in Hanson’s view in a second.) I do think Hanson is correct that, insofar as evolutionary psychology has established anything, it has shown that evolution has provided for signaling and deception. If it enhances fitness for others to think you care about truth, evolution will tend to favor creating that impression to a greater extent than is warranted by the facts of your actual caring. See, for example, Robert Trivers’s recent book. Hanson maintains that the “far” mode (per construal-level theory—you can’t avoid taking it into consideration in evaluating Hanson’s position) evolved as a semi-separate mental compartment largely to accommodate signaling for purposes of status-seeking or ostentation. (The first part of my “Construal-level theory: Matching linguistic register to the case’s granularity” provides a succinct summary of construal-level theory without Hanson’s embellishments.)
I disagree with Hanson’s tendency to overlook the virtues of far thinking and overstate those of “near” thinking—his over-emphasis of signaling’s role in “far” thought. I also disagree with his neglect of the banefulness of moralism in general and religion in particular. Many of the virtues he paints religion as having reflect in-group solidarity and are bought at the expense of xenophobia and cultural extra-punitiveness. And, as Eliezer Yudkowsky points out, falsehood has broader consequences that escape Hanson’s micro-vision: forgoing truth produces a general loss of intellectual vitality.
But more than anything, I reject on ethical grounds Hanson’s tacit position that what evolution cares about is what we should care about. If the ideal of truth was created for show, that shouldn’t stop us from using it as a lever to get ourselves to care more about actual truth. (To me, a rationalist is above all one who values, even hyper-values, actual truth.) To put it bluntly, I don’t see where Hanson can have further intellectual credibility after signaling that he doesn’t seek truth in his far beliefs: those being the beliefs he posts about.
I am very happy to see your clarification; you write pretty much nothing I disagree with. I think the only place where we might disagree is the mechanism by which evolution accomplishes its goal of providing for signaling and deception. I believe that evolution usually gives us desires for things like truth and altruism which are, on some mental level, completely genuine. It then gives us problems like akrasia, laziness, and self-deception, which are not under full control of our conscious minds, and which thwart us from achieving our lofty goals when doing so might harm our inclusive genetic fitness.
Therefore, I think that people are entirely truthful in stating that they have high and lofty ideals like truth-seeking; they are just sabotaged by human weakness. I think someone who says “I am a truthseeker” is usually telling the truth, even if they spend more time playing Halo than they do reading non-fiction. To me, saying someone doesn’t care very much about truth-seeking because their behavior does not always seek the truth is like saying someone doesn’t care very much about happiness because they have clinical depression.
I cannot quite tell from your comments whether you hold the same views on this as I do or not, as you do not specify how natural selection causes people to signal and deceive.
Therefore, I think that people are entirely truthful in stating that they have high and lofty ideals like truth-seeking; they are just sabotaged by human weakness. I think someone who says “I am a truthseeker” is usually telling the truth, even if they spend more time playing Halo than they do reading non-fiction.
From a construal-level-theory standpoint, we should be talking about people who value truth at an abstract-construal level (from “far”) but whose concrete-construal-level inclinations (“near”) don’t much involve truth-seeking. Some people might be inclined to pursue truth both near and far but might be unable to do so effectively because of akrasia (which I think another line of research, ego-depletion theory, largely reduces to “decision fatigue”).
So, the first question is whether you think there’s a valid distinction to be made, such as I’ve drawn above. The second is, if you agree on the distinction, what could cause people to value truth from far but have little inclination to pursue it near. Consider the religious fundamentalist, who thinks he wants truth but tries to find it by studying the Bible. If this is an educated person, I think one can say this fundamentalist has only an abstract interest in truth. How he putatively pursues truth shows he’s really interested in something else.
The way evolution could produce signaling is by creating a far system serving signaling purposes. This isn’t an either-or question, in that even Hanson agrees the far system serves purposes besides signaling. But he apparently thinks the other purposes are so meager that the far system can be sacrificed to signaling with relative impunity. The exact extent to which the far system evolved for signaling purposes is a question I don’t know the answer to. But where Hanson becomes dangerous is in his contempt for the integrity of far thinking and his lack of interest in integrating it with near thinking, at least for the masses and even for himself.
A rationalist struggles to turn far thinking to rational purpose, regardless of its origins. Hanson is the paradox of an intellectual who thinks contemptuous far thoughts about far thinking.
It seems to follow from this line of reasoning that after evolving in a complex environment, I should expect to be constructed in such a way as to care about different things at different times in different contexts, and to consider what I care about at any given moment to be the thing I “really” care about, even if I can remember behaving in ways that are inconsistent with caring about it.
Which certainly seems consistent with my observations of myself.
It also seems to imply that statements like “I actually care about truth” are at best approximate averages, similar to “Americans like hamburgers.”
It seems to follow from this line of reasoning that after evolving in a complex environment, I should expect to be constructed in such a way as to care about different things at different times in different contexts, and to consider what I care about at any given moment to be the thing I “really” care about, even if I can remember behaving in ways that are inconsistent with caring about it.
Other possibilities:
Evolution could also make you simply care about lots of different things and have them change in salience as per your situation. This seems to fit well with the concept of complexity of value.
Evolution could give you stable preferences and then give you akrasia, so you screw them up if you end up in an environment where they are maladaptive.
Some combination of these.
Can you clarify how one might tell the difference between caring about different things at different times in different contexts, and caring about lots of different things that change in salience as per my situation? I agree with you that the latter is just as likely, but I also can’t imagine a way of telling the two apart, and I’m not entirely convinced that they aren’t just two labels for the same thing.
Similar things are true about having context-dependent akrasia vs. having how much I care about things change with context.
I think that the fact that people exhibit prudence is evidence for caring about many things that change in salience. For instance, if I’m driving home from work and I think “I need groceries, but I’m really tired and don’t want to go to the grocery store,” there’s a good chance I’ll make myself go anyway. That’s because I know that even though my tiredness is far more salient now, having food in my pantry will be salient in the future.
I suppose you could model prudence as caring about different things in different contexts, but you’d need to add that you nearly always care about ensuring a high future preference satisfaction state on top of whatever you’re caring about at the moment.
I’m not exactly sure I follow you here, but I certainly agree that we can care about more than one thing at a time (e.g., expectation of future food and expectation of future sleep) and weigh those competing preferences against one another.