In itself, that view doesn’t entail anything about the relation between intelligence levels and goals.
This is a bit of a tangent. But to someone like me who thinks that moral realism is not just wrong but logically impossible—rather like other confused notions such as free will—the assumption of moral realism might lead anywhere. Just as you can prove anything from a false premise, so a moral realist who tries to decompartmentalize that belief and update on it could end up holding other false beliefs.
ETA: this is wrong, and thanks to vallinder for the correction. You can prove anything from a contradiction, but not necessarily from a false premise. However, it’s still bad for you to believe strongly in false things.
You can prove everything from a contradiction, but you can’t prove everything from a false premise. I take it that you mean that we can derive a contradiction from the assumption of moral realism. That may be true (although I’d hesitate to call either moral realism or free will logically impossible), but I doubt many arguments from moral realism to other claims (e.g. the denial of the orthogonality thesis) rely on the derivation of a contradiction as an intermediate step.
You can prove everything from a contradiction, but you can’t prove everything from a false premise.
Correction accepted, thanks. (Will edit original comment.)
I take it that you mean that we can derive a contradiction from the assumption of moral realism.
I’m unsure about it now. I really did confuse contradictions and false beliefs.
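The distinction being corrected here can be made concrete in a proof assistant. As a minimal sketch (Lean 4 syntax), ex falso quodlibet lets a contradiction prove any proposition, whereas a merely false premise licenses no such derivation:

```lean
-- Ex falso quodlibet: from a contradiction P ∧ ¬P, any proposition Q follows.
example (P Q : Prop) (h : P ∧ ¬P) : Q :=
  absurd h.left h.right

-- By contrast, from a bare premise `h : P` there is no derivation of an
-- unrelated `Q`, even if `P` happens to be false in the intended
-- interpretation: the falsity of a premise is a semantic fact, not
-- something the proof system can exploit.
```

That is, explosion requires a contradiction derivable inside the system; a false but consistent premise yields an unsound theory, not a trivial one.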
I’d hesitate to call either moral realism or free will logically impossible
“Free will” means something different to everyone who talks about it. Some versions I’ve seen are definitely logically incoherent. Others are logically possible, but they are very complex theories with zero evidence for them, retrofitted to formalize traditional human beliefs.
“Moral realism” is weirder. It seems to claim that, in the world of all moral claims, some are true and some are false. But since there are no universally compelling arguments, we don’t know—we can’t know—if we ourselves are even capable of recognizing, or being convinced by, the true moral claims if we were to encounter them. So it postulates some additional property of moral facts (truth) which isn’t observable by anyone, and so does no predictive work. And it necessarily has nothing to do with the moral claims that we (or any other minds) actually do believe, and the reasons we believe in them.
“Moral realism” is weirder. It seems to claim that, in the world of all moral claims, some are true and some are false. But since there are no universally compelling arguments, we don’t know—we can’t know—if we ourselves are even capable of recognizing, or being convinced by, the true moral claims if we were to encounter them.
Do you believe there are no universally compelling arguments in maths, etc?
Yes. With extremely high confidence, since it’s a logical argument, not an empirical fact.
But since there are no universally compelling arguments, we don’t know—we can’t know—if we ourselves are even capable of recognizing, or being convinced by, the true moral claims if we were to encounter them.
There seems to be something wrong with the argument in this sentence. There are no universally compelling arguments in mathematics and science either, yet we are capable of recognizing truth claims in those fields.
That’s a good point and needs expanding on.
In science, we want to choose theories that are (among other things) predictive. Certainly, the preference for predicting the future—as opposed to being surprised by the future, or any number of other possible preferences—is arbitrary, in the sense that there exist minds that don’t endorse it. There is no universally compelling argument that will convince every possible mind to want to predict the future correctly. But given our desire to do so, our scientific theories necessarily follow.
Math is similar: there’s no UCA for adopting the axioms we do rather than some others. But we choose our axioms to create mathematical structures that correspond to reality in some useful way (or to our thoughts, which are part of reality); and given our axioms, the rest of our mathematical theories follow.
In both cases, we choose and build our science and math due to our preexisting goals and the properties of our thought. It’s those goals that are really arbitrary in the sense of no UCA; but given those basic goals and properties, science and math can be derived.
Moral realism, on the other hand, claims (AFAICS) that there are objectively true morals out there, which one ought to follow. Whether they are compatible with one’s preconceived notions of morality, or goals, desires, beliefs, or anything else that is a property of the person holding moral beliefs, is irrelevant: they are true in and of themselves.
That means they should not be compared to “computability theory”. They should be compared to “the desire to correctly predict whether there can exist any physical machine that would solve this problem”. We can judge the objective truth of a scientific theory by how well it predicts things; but we can’t judge the objective truth of a purported moral-realistic statement, because the very definition of moral realism means its truth cannot be judged. It’s a kind of dualism, postulating an inherently undetectable property of “objective truth” to moral statements.