I just started reading it, and picked it up really because I needed something for the train in a hurry. In part I read the likes of Harris just to get a better understanding of what makes a popular book. As far as I’ve read into Harris’s thesis about objective morality, I see it as rather hopeless: it depends ultimately on the notion of a timeless, universal human brain architecture, which is mythical even today, the posthuman future aside.
Carroll’s point at the end about attempting to find the ‘objective truth’ about what is the best flavor of ice cream echoes my thoughts so far on the “Moral Landscape”.
The interesting part wasn’t his theory; it was the idea that the entire belief space currently held by religion is now up for grabs.
In regards to ata’s previous comment, I don’t agree at all.
Theism is not some single atomic belief. It is an entire region in belief space. You can pull out many of the sub-beliefs and reduce them to atomic binary questions which slice idea-space, such as:
Was this observable universe created by a superintelligence?
Those in the science camp used to be pretty sure the answer to that was no, but it turns out they may very well be wrong, and the theists may have guessed correctly all along (Simulation Argument).
Did superintelligences intervene in earth’s history? How do they view us from a moral/ethical standpoint? And so on . . .
These questions all have definitive answers, and with enough intelligence/knowledge/computation they are all probably answerable.
You can say “theism/God” were silly mistakes, but how do you rationalize that when we now know that true godlike entities are the likely evolutionary outcome of technological civilizations and common throughout the multiverse?
I don’t think we should reward correct guesses that were made for the wrong reasons (and are only correct by certain stretches of vocabulary). Talking about superintelligences is more precise and avoids vast planes of ambiguity and negative connotations, so why not just do that?
I don’t think it is any stretch of vocabulary to use the word ‘god’ to describe future superintelligences.
If the belief is correct, it can’t also be a silly mistake.
The entire idea that one must choose words carefully to avoid ‘vast planes of ambiguity and negative connotations’ is at the heart of the ‘theism as taboo’ problem.
The SA so far stands to show that the central belief of broad theism is basically correct. Let’s not split hairs on that; let’s just admit it. If that is true, however, then an entire set of associated and dependent beliefs may also be correct, and a massive probability update is in order.
Avoiding the ‘negative connotations’ suggests to me a flawed process of consciously or subconsciously distancing any mental interpretation of the Singularity and the SA that resembles theistic beliefs.
I suspect most people tend to do this because of belief inertia, the true difficulty of updating, and social signaling issues arising from being associated with a category of people who believe in the wrong versions of a right idea for insufficient reasons.
The SA so far stands to show that the central belief of broad theism is basically correct.
“The universe was created by an intelligence” is the central belief of deism, not theism. Whether or not the intelligence would interact with the universe, for what reasons, and to what ends, are open questions.
Also, at this point I’m more inclined to accept Tegmark’s mathematical universe description than the simulation argument.
wrong versions of a right idea
That seems oxymoronic to me.
There are superficial similarities between the simulation argument and theism, but, for example, the idea of worship/deference in the latter is a major element that the former lacks. The important question is: will using theistic terminology help with clarity and understanding for the simulation argument? The answer does not appear to be yes.
The SA so far stands to show that the central belief of broad theism is basically correct.
“The universe was created by an intelligence” is the central belief of deism, not theism. Whether or not the intelligence would interact with the universe, for what reasons, and to what ends, are open questions.
You’re right; I completely agree with the above in terms of the theism/deism distinction. The SA supports deism while allowing for theism but leaving it as an open question. My term “broad theism” was meant to include both theism and deism. Perhaps that category already has a name; I’m not quite sure.
Also, at this point I’m more inclined to accept Tegmark’s mathematical universe description than the simulation argument.
I find the SA has much stronger support: Tegmark requires the additional belief that other physical universes exist, for which we can never possibly find evidence for or against.
There are superficial similarities between the simulation argument and theism, but, for example, the idea of worship/deference in the latter is a major element that the former lacks.
Some fraction of simulations probably have creators who desire some form of worship/deference; the SA turns this into a question of frequency or probability. I of course expect that worship-desiring creators are highly unlikely. Regardless, worship is not a defining characteristic of theism.
The important question is: will using theistic terminology help with clarity and understanding for the simulation argument?
I see it as the other way around. The SA gives us a reasonable structure within which to (re)-evaluate theism.
I find the SA has much stronger support: Tegmark requires the additional belief that other physical universes exist, for which we can never possibly find evidence for or against.
How could we find evidence of the universe simulating our own, if we are in a simulation? They’re both logical arguments, not empirical ones.
Regardless, worship is not a defining characteristic of theism.
The SA gives us a reasonable structure within which to (re)-evaluate theism.
I really don’t see what is so desirable about theism that we ought to define it to line up near-perfectly with the simulation argument in order to use it and related terminology. Any rhetorical scaffolding for dealing with Creators that theists have built up over the centuries is dripping with the negative connotations I referenced earlier. What net advantage do we gain by using it?
How could we find evidence of the universe simulating our own, if we are in a simulation? They’re both logical arguments, not empirical ones.
If, say, in 2080 we have created a number of high-fidelity historical recreations of 2010, with billions of sentient virtual humans for whom the recreation is (from their perspective) nearly indistinguishable from our original 2010, then much of the uncertainty in the argument is eliminated.
(some uncertainty always remains, of course)
The other distinct possibility is that our simulation reaches some endpoint and possible re-integration, at which point it would be obvious.
tl;dr—If you’re going to equate morality with taste, understand that when we measure either of the two, bringing agents into the process is a huge factor we can’t leave out.
I’ll be upfront about having not read Sam Harris’ book yet, though I did read the blog review to get a general idea. Nonetheless, I take issue with the following point:
Carroll’s point at the end about attempting to find the ‘objective truth’ about what is the best flavor of ice cream echoes my thoughts so far on the “Moral Landscape”.
I’ve found that an objective truth about the best flavor of ice cream can be found if one figures out which disguised query one is after. (Am I looking for “If I had to guess, what would random person z’s favorite flavor of ice cream be, with no other information?”, or am I looking for something else?)
This attempt at making morality too subjective to measure by relating it to taste has always bothered me, because people always ignore a main factor here: agents should be part of our computation. When I want to know what flavor of ice cream is best, I take into account people’s preferences. If I want to know what would be the most moral action, I need to take into account its effects on people (or on myself, should I be a virtue ethicist, or how it aligns with my rules, should I be a deontologist). Admittedly the latter is tougher than the former, but that doesn’t mean we have no hope of dealing with it objectively. It just means we have to do the best we can with what we’re given, which may mean a lot of individual subjectivity.
In his book Stumbling on Happiness, Daniel Gilbert writes about studying the subjective as objectively as possible when he decides on the three premises for understanding happiness:
1] Using imperfect tools sucks, but it’s better than no tools.
2] An honest, real-time insider view is going to be more accurate than our current best outside views.
3] Abuse the law of large numbers to get around the imperfections of 1] and 2] (a.k.a. measure often)
This attempt at making morality too subjective to measure by relating it to taste has always bothered me because people always ignore a main factor here: agents should be part of our computation.
I perhaps should have elaborated more, or thought through my objection to Harris more clearly, but in essence I believe the problem is not that of finding an objective morality given people’s preferences; it’s objectively determining what people’s preferences should be.
There is an objective best ice cream flavor given a certain person’s mind, but can we say some minds are objectively more correct on the matter of preferring the best ice cream flavor?
My attempt at a universal objective morality might take some maximization of value given our current preferences and then evolve it into the future, maximizing over some time window. Perhaps you need to extend that time window to the very end. This would lead to some form of cosmism: directing everything towards some very long-term universal goal.
This post was clearer than your original, and I think we agree more here than we did before, which may partially be an issue of communication styles/methods/etc.
I believe the problem is not that of finding an objective morality given people’s preferences, it’s objectively determining what people’s preferences should be.
This I agree with, but it’s more for the gut response of “I don’t trust people to determine other people’s values.” I wonder if the latter could be handled objectively, but I’m not sure I’d trust humans to do it.
There is an objective best ice cream flavor given a certain person’s mind, but can we say some minds are objectively more correct on the matter of preferring the best ice cream flavor?
My reflex response to this question was “No,” followed by “Wait, wouldn’t I weight human minds much more significantly than raccoons’ if I were figuring out human preferences?” I then thought it through and latched onto “agents still matter”: if I’m trying to model “best ice cream flavor to humans,” I give the rough category of human-minds more weight than other minds. Heck, I hardly have a reason to include those other minds at all, and instrumentally they would likely be detrimental. So on that particular generalization we disagree, but I’m getting the feeling we agree here more than I had guessed.
This I agree with, but it’s more for the gut response of “I don’t trust people to determine other people’s values.” I wonder if the latter could be handled objectively, but I’m not sure I’d trust humans to do it.
We already have to deal with this when we raise children. Western societies generally favor granting individuals great leeway in modifying their preferences and shaping the preferences of their children. We also place much less value on the children’s immediate preferences. But even this freedom is not absolute.
You can say “theism/God” were silly mistakes, but how do you rationalize that when we now know that true godlike entities are the likely evolutionary outcome of technological civilizations and common throughout the multiverse?
I try not to rationalize.