tl;dr—If you’re going to equate morality with taste, understand that when we measure either of the two, taking agents into account is a huge factor we can’t leave out.
I’ll be upfront about not having read Sam Harris’ book yet, though I did read the blog review to get a general idea. Nonetheless, I take issue with the following point:
Carroll’s point at the end about attempting to find the ‘objective truth’ about what is the best flavor of ice cream echoes my thoughts so far on The Moral Landscape.
I’ve found that an objective truth about the best flavor of ice cream can be found if one figures out which disguised query one is actually after. (Am I asking “If I had to guess, with no other information, what would random person z’s favorite flavor of ice cream be?” or am I asking something else?)
This attempt at making morality too subjective to measure by relating it to taste has always bothered me because people always ignore a main factor here: agents should be part of our computation. When I want to know what flavor of ice cream is best, I take into account people’s preferences. If I want to know what would be the most moral action, I need to take into account its effects on people (or on myself, should I be a virtue ethicist, or how it aligns with my rules, should I be a deontologist). Admittedly the latter is tougher than the former, but that doesn’t mean we have no hope of dealing with it objectively. It just means we have to do the best we can with what we’re given, which may mean a lot of individual subjectivity.
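A toy sketch of that computation (the survey data is hypothetical, purely to illustrate the “disguised query” point):

```python
from collections import Counter

# Hypothetical survey data; the point is only that "best flavor" becomes
# answerable once we fix which query over agents we actually mean.
preferences = ["chocolate", "vanilla", "chocolate", "strawberry",
               "chocolate", "vanilla", "pistachio"]

# Query 1: "With no other information, what is my best guess for a random
# person's favorite flavor?" -> the modal preference over all agents.
best_guess = Counter(preferences).most_common(1)[0][0]
print(best_guess)  # chocolate

# Query 2: "Which flavor does this particular agent prefer?" -> just ask
# that agent; a different disguised query, with a different objective answer.
```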
In his book Stumbling on Happiness, Daniel Gilbert writes about studying the subjective as objectively as possible when he decides on the three premises for understanding happiness:
1] Using imperfect tools sucks, but it’s better than no tools.
2] An honest, real-time insider view is going to be more accurate than our current best outside views.
3] Abuse the law of large numbers to get around the imperfections of 1] and 2] (i.e., measure often).
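A minimal sketch of that third premise (all numbers hypothetical): each individual report of a subjective quantity is noisy, but averaging many reports converges on the underlying mean.

```python
import random

random.seed(0)

TRUE_HAPPINESS = 6.2  # hypothetical "real" population mean on a 1-10 scale

def noisy_report():
    """One honest, real-time, but imperfect insider report."""
    return TRUE_HAPPINESS + random.gauss(0, 2.0)  # large individual error

for n in (10, 100, 10_000):
    estimate = sum(noisy_report() for _ in range(n)) / n
    print(f"n={n:>6}: estimated mean = {estimate:.2f}")
# As n grows, the estimate settles near 6.2 even though any single
# report is unreliable -- measure often, and the noise washes out.
```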
This attempt at making morality too subjective to measure by relating it to taste has always bothered me because people always ignore a main factor here: agents should be part of our computation.
I perhaps should have elaborated more, or thought through my objection to Harris more clearly, but in essence I believe the problem is not that of finding an objective morality given people’s preferences; it’s objectively determining what people’s preferences should be.
There is an objective best ice cream flavor given a certain person’s mind, but can we say some minds are objectively more correct on the matter of preferring the best ice cream flavor?
My attempt at a universal objective morality might take some maximization of value given our current preferences and then evolve it into the future, maximizing over some time window. Perhaps you need to extend that time window to the very end. This would lead to some form of cosmism—directing everything towards some very long term universal goal.
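To make that concrete, here is an illustrative sketch (the value streams and horizons are invented, not anything Harris proposes): the same maximization can favor different actions depending on how far the time window extends.

```python
# Hypothetical per-period value streams for two candidate actions.
actions = {
    "short_term_win":  [10, 1, 1, 1, 1],
    "long_term_build": [2, 4, 6, 8, 10],
}

def windowed_value(stream, horizon):
    """Total value of a stream over the first `horizon` periods."""
    return sum(stream[:horizon])

for horizon in (2, 5):
    best = max(actions, key=lambda a: windowed_value(actions[a], horizon))
    print(f"horizon={horizon}: best action = {best}")
# Widening the window flips the answer; the cosmist move is to push the
# horizon out toward "the very end" and optimize for that.
```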
This post was clearer than your original, and I think we agree more here than we did before, which may partially be an issue of communication styles/methods/etc.
I believe the problem is not that of finding an objective morality given people’s preferences; it’s objectively determining what people’s preferences should be.
This I agree with, though more out of the gut response of “I don’t trust people to determine other people’s values.” I wonder whether determining what preferences should be could be handled objectively, but I’m not sure I’d trust humans to do it.
There is an objective best ice cream flavor given a certain person’s mind, but can we say some minds are objectively more correct on the matter of preferring the best ice cream flavor?
My reflex response to this question was “No,” followed by “Wait, wouldn’t I weight human minds much more significantly than raccoon minds if I were figuring out human preferences?” I then thought it through and landed on: agents still matter. If I’m trying to model “best ice cream flavor to humans,” I give the rough category of human-minds more weight than other minds. Heck, I hardly have a reason to include non-human minds at all, and instrumentally they will likely be detrimental. So in that particular generalization we disagree, but I’m getting the feeling we agree here more than I had guessed.
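A toy version of that weighting (the categories, scores, and weights are my own assumptions, just to show the shape of the idea):

```python
# Hypothetical: when modeling "best flavor to humans", weight agent
# categories by how relevant their minds are to the query.
flavor_scores = {
    "human":   {"chocolate": 8, "garbage": 0},
    "raccoon": {"chocolate": 3, "garbage": 9},
}
weights = {"human": 1.0, "raccoon": 0.0}  # near-zero weight for non-human minds

def weighted_best(scores, weights):
    """Return the flavor with the highest weight-adjusted total score."""
    totals = {}
    for kind, prefs in scores.items():
        for flavor, score in prefs.items():
            totals[flavor] = totals.get(flavor, 0.0) + weights[kind] * score
    return max(totals, key=totals.get)

print(weighted_best(flavor_scores, weights))  # chocolate
```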
This I agree with, though more out of the gut response of “I don’t trust people to determine other people’s values.” I wonder whether determining what preferences should be could be handled objectively, but I’m not sure I’d trust humans to do it.
We already have to deal with this when we raise children. Western societies generally favor granting individuals great leeway in modifying their preferences and shaping the preferences of their children. We also place much less value on the children’s immediate preferences. But even this freedom is not absolute.