Doesn’t that still leave the problem of what the algorithm that produces moral judgements means by “moral”, “should”, etc?
To go back to the calculator analogy, suppose our calculator is sitting in a hailstorm and its buttons are being punched randomly as a result. It seems fair to say that the hailstorm doesn’t mean anything by “2”. If the algorithm that produces moral judgements is like the hailstorm, couldn’t we also say that moral judgements don’t really mean anything?
If I am located at the center of a blurring, buzzing confusion, do statements like “I see a red circle” have no meaning?
If you’re saying that what you mean by “ought” is what the part of you that uses moral judgements means by “ought”, then I don’t understand why you choose to identify with that part of you and not with the part of you that produces moral judgements. EDIT: Doing so makes it easier to “solve the problem of meta-ethics”, but you end up with a solution that doesn’t seem particularly interesting or useful. But maybe I’m wrong about that. Continued here.