If moral realism is simply the view that some positive moral claims are true, without further metaphysical or conceptual commitments, then I can’t see how it could be at odds with the orthogonality thesis. In itself, that view doesn’t entail anything about the relation between intelligence levels and goals.
On the other hand, the conjunction of moral realism, motivational judgment internalism (i.e. the view that moral judgments necessarily motivate), and the assumption that a sufficiently intelligent agent would grasp at least some moral truths is at odds with the orthogonality thesis. Other combinations of views may yield similar results.
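The conflict between these three theses and orthogonality can be laid out as an explicit derivation (the premise labels P1–P3 and the symbols are my own, not standard):

```latex
\begin{align*}
&\text{P1 (realism): some moral claim } M \text{ is true.}\\
&\text{P2 (access): any sufficiently intelligent agent } a \text{ grasps (and so judges) } M.\\
&\text{P3 (internalism): if } a \text{ judges } M, \text{ then } a \text{ is motivated in accordance with } M.\\
&\text{Hence: above some intelligence threshold, every agent shares a motivation fixed by } M,\\
&\text{contradicting orthogonality (any level of intelligence is compatible with any final goals).}
\end{align*}
```

Dropping any one premise blocks the conclusion, which is why realism alone (P1) is compatible with orthogonality.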
This (paragraph two, sentence one) is the answer to the OP's question, and I'm sad to see that it has only 4 points after my upvote.
The sentence in question is either false, or true but uninteresting, depending on exactly what is meant by "necessarily motivate".
For the most obvious interpretation, it is uninteresting since even modus ponens doesn’t necessarily motivate.
By "modus ponens doesn't necessarily motivate," do you mean that someone could see that modus ponens applies yet not draw the inference? That seems correct, but I don't see how this makes metaethical motivational judgment internalism (MMJI) uninteresting. Are you saying that MMJI is obviously false, and so vallinder's point becomes uninteresting because nobody could possibly be so stupid as to come by this route to being at odds with the orthogonality thesis? It seems unlikely that's your point … everyone who's ever said "nobody could possibly be that stupid" has been wrong (and I'm out to prove it!) … so then I just don't get it.
Would you mind explaining what MMJI is?
SEP calls it

a form of judgment internalism, which holds that a necessary connection exists between sincere moral judgment and either justifying reasons or motives: necessarily, if an individual sincerely judges that she ought to φ, then she has a reason or motive to φ
I call it a metaethical thesis because its advocates usually consider it part of the meaning of ethical judgments.
I think "Do moral judgments necessarily motivate?" is the key question here. [pollid:576]
See Motivational internalism/externalism (you might get better-quality results if you asked specifically "is motivational internalism true?" and provided that link; it's basically the same as what you asked, but less open to interpretation).
My personal understanding is that motivational internalism is true in proportion to the level of systematization-preference of the agent. That is, for agents who spend a lot of time building and refining their internal meaning structures, motivational internalism is more true (for THEM, moral judgements tend to inherently motivate); in other cases, motivational externalism is true.
I have weak anecdotal evidence of this (and also of correlation of ‘moral judgements inherently compel me’ with low self worth—the ‘people who think they are bad work harder at being good’ dynamic.)
TL;DR: My impression is that motivational externalism is true by default (I answered "No" to your poll), and motivational internalism is something that individual agents may acquire as a result of elaborating and maintaining their internal meaning structures.
I would argue that acquiring a degree of motivational internalism is beneficial to humans, but it's probably unjustifiable to assume either that a) motivational internalism is beneficial to AIs, or b) if it is, that an AI will necessarily acquire it (rather than developing an alternative strategy, or nothing of the kind at all).
YES! I think this is exactly right: moral realism is not at odds with the orthogonality thesis; but the conjunction of moral realism with moral internalism is.
And it is this conjunction that many people seem to believe, although I cannot see why: I can't even imagine what it would mean for the world to be such that it is true. So I find it obvious that the conjunction is false. It's not quite so clear which of the conjuncts is false (perhaps both are), though.
This is more of a question about what qualifies as a moral judgment. It’s possible to make moral judgments (under one definition) from the outside about other systems of morality or other people’s utility functions, e.g. “According to Christianity, masturbation is a sin” doesn’t motivate you to stop masturbating unless you firmly believe in Christianity, and “According to Bob’s utility function, he should donate more to charity” needn’t motivate you to donate more to charity. On the other hand, it’s impossible to believe “According to my moral system, I should do X” and not think X is the right thing for you to do.
On the one hand, nothing is necessarily true about an arbitrary mind, because nothing is true about all minds, for the same reason that there are no universally compelling arguments.
On the other hand, this is just another disagreement about what words refer to: someone who says “moral judgments necessarily motivate” is just saying “a judgement that does not motivate, does not fit my definition of moral”. This is not a fact about the world or about morality, it’s a fact about the way that person uses the words “moral judgment”.
If there is indeed wide disagreement on the answer to this question—I write this before voting and haven’t seen the results yet—then that is yet another argument for tabooing the word “morality”.
Is this a poll about whether moral judgements necessarily motivate, or whether that’s the key question?
Would people (especially those who haven't read the philosophical background) say what they think this question means? I suspect giant misinterpretation.
This is a bit of a tangent. But to someone like myself who thinks that moral realism is not just wrong but logically impossible—rather like other confused notions such as free will—the assumption of moral realism might lead anywhere. Just as you can prove anything from a false premise, so a moral realist who tries to decompartmentalize that belief and update on it could end up holding other false beliefs.
ETA: this is wrong, and thanks to vallinder for the correction. You can prove anything from a contradiction, but not necessarily from a false premise. However, it’s still bad for you to believe strongly in false things.
You can prove everything from a contradiction, but you can’t prove everything from a false premise. I take it that you mean that we can derive a contradiction from the assumption of moral realism. That may be true (although I’d hesitate to call either moral realism or free will logically impossible), but I doubt many arguments from moral realism to other claims (e.g. the denial of the orthogonality thesis) rely on the derivation of a contradiction as an intermediate step.
Correction accepted, thanks. (Will edit original comment.)
I’m unsure about it now. I really did confuse contradictions and false beliefs.
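The distinction being corrected here (explosion from a contradiction, versus the limited consequences of a merely false premise) can be made formal; a minimal sketch in Lean 4, with an illustrative false premise of my own choosing:

```lean
-- Explosion (ex falso quodlibet): a contradiction proves any proposition Q.
example (P Q : Prop) (h : P ∧ ¬P) : Q :=
  absurd h.1 h.2

-- By contrast, a merely false premise yields only its actual consequences.
-- From the (false) premise that every natural number is even, we can
-- conclude that 3 is even, but not arbitrary unrelated claims.
example (h : ∀ n : Nat, n % 2 = 0) : 3 % 2 = 0 :=
  h 3
```

Nothing in the second example lets us derive an unrelated proposition from the false premise; only a genuine contradiction licenses that.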
“Free will” means something different to everyone who talks about it. Some versions I’ve seen are definitely logically incoherent. Others are logically possible and are merely very complex theories with zero evidence for them that are retrofitted to formalize traditional human beliefs.
“Moral realism” is weirder. It seems to claim that, in the world of all moral claims, some are true and some are false. But since there are no universally compelling arguments, we don’t know—we can’t know—if we ourselves are even capable of recognizing, or being convinced by, the true moral claims if we were to encounter them. So it postulates some additional property of moral facts (truth) which isn’t observable by anyone, and so does no predictive work. And it necessarily has nothing to do with the moral claims that we (or any other minds) actually do believe, and the reasons we believe in them.
Do you believe there are no universally compelling arguments in maths, etc?
Yes. With extremely high confidence, since it’s a logical argument, not an empirical fact.
There seems to be something wrong with the argument in this sentence. There are no universally compelling arguments in mathematics and science either, yet we are capable of recognizing truth claims in those fields.
That’s a good point and needs expanding on.
In science, we want to choose theories that are (among other things) predictive. Certainly, the preference for predicting the future—as opposed to being surprised by the future, or any number of other possible preferences—is arbitrary, in the sense that there exists minds that don’t endorse it. There is no universally compelling argument that will convince every possible mind to want to predict the future correctly. But given our desire to do so, our scientific theories necessarily follow.
Math is similar: there’s no UCA to use the axioms we do and not some others. But we choose our axioms to create mathematical structures that correspond to reality in some useful way (or to our thoughts, which are part of reality); and given our axioms, the rest of our mathematical theories follow.
In both cases, we choose and build our science and math due to our preexisting goals and the properties of our thought. It’s those goals that are really arbitrary in the sense of no UCA; but given those basic goals and properties, science and math can be derived.
Moral realism, on the other hand, claims (AFAICS) that there are objectively true morals out there, which one ought to follow. Whether they are compatible with one’s preconceived notions of morality, or goals, desires, beliefs, or anything else that is a property of the person holding moral beliefs, is irrelevant: they are true in and of themselves.
That means they should not be compared to “computability theory”. They should be compared to “the desire to correctly predict whether there can exist any physical machine that would solve this problem”. We can judge the objective truth of a scientific theory by how well it predicts things; but we can’t judge the objective truth of a purported moral-realistic statement, because the very definition of moral realism means its truth cannot be judged. It’s a kind of dualism, postulating an inherently undetectable property of “objective truth” to moral statements.