because we can miscalculate an “ought” or anything else.
One way to miscalculate an “ought” is the same way we can miscalculate an “is”: through lack of information, erroneous knowledge, a mistaken understanding of how to weigh the evidence, and so on.
And also, because people aren’t perfectly self-aware, we can mistake mere habits or strongly-held preferences for the outputs of our moral algorithm, in the same way that a synaesthete might perceive the number 8 as colored blue even though no blue wavelength of light is striking the retina. But that sort of thing doesn’t seem like a very deep philosophical problem to me.
We can correct miscalculations where we have a conscious epistemic grasp of how the calculation should work. If morality is a neural black box, we have no such grasp. Such a black box cannot be used to plug the is-ought gap, because it gives us no way to distinguish correct calculations from miscalculations.
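To make that last point concrete, here is a minimal sketch in Python (the function names are made up for illustration; it’s an analogy, not an implementation): correcting a calculation requires an independent specification to check it against, and an opaque function supplies no such specification.

```python
import random

# A transparent calculation: we hold an explicit specification,
# so a miscalculation is detectable and correctable.
def spec_sum(xs):
    """The specification: what the calculation *should* return."""
    return sum(xs)

def buggy_sum(xs):
    """A miscalculation: silently drops the last element."""
    return sum(xs[:-1])

data = [1, 2, 3]
if buggy_sum(data) != spec_sum(data):
    print("miscalculation detected; correct it against the spec")

# An opaque "moral black box" (a hypothetical stand-in for the
# neural black box above): we observe its outputs, but hold no
# independent account of how the calculation should go.
def moral_black_box(situation: str) -> str:
    return random.choice(["permissible", "impermissible"])

verdict = moral_black_box("some described situation")
print(verdict)
# With no specification to check against, there is no way to
# classify this verdict as a correct calculation or a miscalculation.
```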