When I said I’m skeptical that Pluralistic Moral Reductionism is on the right track, I meant that I’m skeptical that it is correct when it takes positions like:
But whatever our intended meaning of ‘ought’ is, the same reasoning applies. Either our intended meaning of ‘ought’ refers (eventually) to the world of math and physics (in which case the is-ought gap is bridged), or else it doesn’t (in which case it fails to refer).
and
It suggests that there is no One True Theory of Morality. (We use moral terms in a variety of ways, and some of those ways refer to different sets of natural facts.)
and that we should proceed to try to solve metaethics on the basis of assuming these are correct. (Perhaps if you believe that these positions are correct, then many metaethical debates are dissolved in your mind, but that’s not the case for me, and I think you’re probably being too confident if you do consider those debates to be “dissolved”.)
And thus, in the spirit of exploring multiple approaches simultaneously, I’m trying to make sure we all have a cursory understanding of the neuroscience of human values.
Ok, I certainly have no objection to that. Except this relatively minor nitpick: since most of the studies are based on non-human primates, and the post said little about what is unique to human values (e.g., the influence of culture and deliberative thinking), I think it would be more accurate to refer to it as the neuroscience of primate values.
I think it would be more accurate to refer to it as the neuroscience of primate values
Sure. We do know a fair bit about specifically human values, too, but I haven’t written much about that yet.
I’m curious to know why it is that you disagree with what I’ll call Claim 1:
But whatever our intended meaning of ‘ought’ is, the same reasoning applies. Either our intended meaning of ‘ought’ refers (eventually) to the world of math and physics (in which case the is-ought gap is bridged), or else it doesn’t (in which case it fails to refer).
...and what you think is wrong about Claim 2:
It suggests that there is no One True Theory of Morality. (We use moral terms in a variety of ways, and some of those ways refer to different sets of natural facts.)
Do you have the same objections as Vladimir?
My objections stem mainly from the feeling that when we use moral terms, we may be referring to a shared concept of “normativity”, which is also referred to in sentences like:
What is the correct decision theory?
What is the right prior?
What is the right way to handle logical uncertainty?
This may well not be the case, but it is a possibility that I’m not willing to rule out, at least until we better understand what “right” means in these sentences, and why it is not referring to the same thing as “right” in morality. (Of course there’s also the possibility that there are different kinds of normativity that are related in some ways but not identical.)
I’m curious to know why it is that you disagree with what I’ll call Claim 1:
But whatever our intended meaning of ‘ought’ is, the same reasoning applies. Either our intended meaning of ‘ought’ refers (eventually) to the world of math and physics (in which case the is-ought gap is bridged), or else it doesn’t (in which case it fails to refer).
I disagree with your approach of assuming linguistic reductionism. It seems to me that we ought to figure out the intended meanings of each possible word/phrase/sentence, and then conclude that reductionism is true if all language either refers to math and physics, or is clearly meaningless (and we can understand why we thought they had meaning). Assuming reductionism first and then searching for meaning of a word only within math and physics seems to be backwards.
Again, if we’re talking about “multiple approaches”, I have no objection if you think math and physics are the most promising places to look for the meaning of moral terms, but I do not view that as a “clear and stable platform”.
...and what you think is wrong about Claim 2:
It suggests that there is no One True Theory of Morality. (We use moral terms in a variety of ways, and some of those ways refer to different sets of natural facts.)
I guess one difference between us is that I don’t see anything particularly ‘wrong’ with using stipulative definitions as long as you’re aware that they don’t match the intended meaning (that we don’t have access to yet), whereas you like to characterize stipulative definitions as ‘wrong’ when they don’t match the intended meaning.
So when you say “no One True Theory of Morality” I guess you mean under the stipulative definitions of ‘morality’. But when people argue over realism vs anti-realism, they are not arguing over whether people sometimes stipulate different definitions for “morality”, but instead are disagreeing over the nature of the intended meaning of ‘morality’. When you stipulate “anti-realism” to mean “people sometimes stipulate different definitions for ‘morality’”, I think I am justified in calling that “wrong”, because you’ve transformed the question into something that has a clear answer but which nobody is particularly interested in. I don’t think you’ve succeeded in dissolving the question that people are really asking.
when we use moral terms, we may be referring to a shared concept of “normativity”… This may well not be the case, but it is a possibility that I’m not willing to rule out...
Agreed.
I disagree with your approach of assuming linguistic reductionism.
Well, but I don’t ‘assume linguistic reductionism’. What I say is that if the intended meaning of ‘ought’ refers to structures in math and physics, then linguistic reductionism about normative language is correct, and if it doesn’t, then normative language (using its intended meaning) fails to refer (assuming ontological reductionism is true).
But when people argue over realism vs anti-realism, they are not arguing over whether people sometimes stipulate different definitions for “morality”, but instead are disagreeing over the nature of the intended meaning of ‘morality’.
Philosophers usually are, but not always. One thing I’m trying to avoid here is the ‘sneaking in connotations’ business performed by, in my example, Bill Craig.
I don’t think you’ve succeeded in dissolving the question that people are really asking.
No, I haven’t, and I’ve tried to be clear about that. But perhaps I need to edit ‘Pluralistic Moral Reductionism’ with additional clarifications, if it still sounds like I think I’ve dissolved the question that people are really asking. What I’ve dissolved is some debates that I see some people engaged in.
Edit: Also, I should add that I’m fairly skeptical of the idea that humans share a concept of morality or normativity. I do intend to write something up on the psychology and neuroscience of mental representations and ‘intuitive concepts’ to explain why, but I’ve got several other projects stacked up with priority over that.
What would it mean to share a concept of morality or normativity, or more generally, any concept? If I think of gold as “atomic number 79” and my Aunt Joan thinks of it as “the shiny yellow heavy valuable stuff in certain pieces of jewelry”, do we fail to share a concept of gold? If such divergence counts as failure to share the concept, would failure to share concepts of morality be important to metaethics? (On this last question I’m thinking: not so much.)
Yeah, I’m not sure exactly what Wei Dai and Vladimir Nesov have in mind when they talk about a shared concept of ‘ought’ or of ‘right’. Will Sawin talks about humans having a cognitive module devoted to the processing of ‘ought’, which I also find implausible given the last 30 years of psychology and neuroscience. I think I have a different view (than Dai, Nesov, and Sawin) of what concepts are and how they are likely to work, but I’d have to put serious time into a post to explain this clearly, I think. For the moment, those who are interested in the subject should read the SEP articles on concepts and mental representation.
Oh, even better:
Mareschal, Quinn, & Lea, eds. (2010). The Making of Human Concepts.
Mahon & Caramazza (2009). Concepts and categories: A neuropsychological perspective.
Kourtzi & Connor (2011). Neural Representations for Object Perception, Structure, Category, and Adaptive Coding.
Nieder & Dehaene (2009). Representation of number in the brain.
I disagree with your approach of assuming linguistic reductionism. It seems to me that we ought to figure out the intended meanings of each possible word/phrase/sentence, and then conclude that reductionism is true if all language either refers to math and physics, or is clearly meaningless (and we can understand why we thought they had meaning).
I also have doubts about Luke’s linguistic approach, but not on account of reductionism. Reductionism is working well enough elsewhere that it should be the hypothesis of first resort here. In contrast to what you write, I doubt the relevance of intended meanings at all. I prefer an attempt to capture the referents, taking a page from successful scientific reductions.
When Mendel discovered his laws of inheritance, he spoke of heredity “factors”. Nearly a century later, Crick and Watson and other scientists made discoveries and hypotheses that amounted, roughly, to the claim that Mendel’s factors are sequences of nucleotides in DNA molecules. Nobody needed to re-read Mendel’s work or examine his cultural context to determine his “original intent”. Rather, they posited the equivalence and found that it made good sense of Mendel’s factors and the laws in which they were invoked.
Or take ball lightning. The phenomenon is rare and unpredictable, and it may be questionable whether there is really anything to answer to that name. To answer the question of what, if anything, ball lightning is, let’s not get various stipulative definitions from various people. Instead, let’s try some hypotheses on for size: let’s generate some buoyant plasma formations, or some obstructed aerodynamic vortices. Let’s see if the vast majority of reports of ball lightning can be explained by one or more of these phenomena. If so, we have discovered what ball lightning is, and linguistic stipulations are beside the point.
Of course, there’s no guarantee such an approach can work. But there’s no guarantee that stipulative definitions will get us anywhere, either. Stipulation tempts definers to pretend to greater access to their conceptual structures than they actually possess. If they resist that temptation, they will probably resist stipulation too, for lack of use.
Just came across this, might be relevant / of interest: Similarities Between Macaque and Human Brains