“Gods are ontologically distinct from creatures, or they’re not worth the paper they’re written on.”—Damien Broderick
If you believe in a Matrix or in the Simulation Hypothesis, you believe in powerful aliens, not deities. Next!
There’s also no hint of worship, which everyone else on the planet thinks is a key part of the definition of a religion; if you believe that Cthulhu exists but not Jehovah, and you hate and fear Cthulhu and don’t engage in any Elder Rituals, you may be superstitious but you’re not yet religious.
This is mere distortion of both the common informal use and advanced formal definitions of the word “atheism”, which is not only unhelpful but such a common religious tactic that you should not be surprised to be downvoted.
A Simulator would be ontologically distinct from creatures like us, for any definition of “ontologically distinct” I can imagine wanting to use. The Simulation Hypothesis is a metaphysical hypothesis in the most literal sense: it’s a hypothesis about what our physical universe really is, beyond the wave function.
Yeah, Will’s theism in this post isn’t the theism of believers, priests, or academic theologians. With certain audiences confusion would likely result, so this language should be avoided with those audiences. But I think we’re somewhat more sophisticated than that, and if there are reasons to use theistic vocabulary then I don’t see why we shouldn’t. I’m assuming Will has these reasons, of course.
Keep in mind, the divine hasn’t always been supernatural. Greek gods were part of natural explanations of phenomena, Aristotle’s god was just there to provide a causal stopping place, Hobbes’s god was physical, etc. We don’t have to kowtow to the usage of present religious authorities. God has always been a flexible word; there is no particular reason to take modern science to be falsifying God rather than telling us what a god, if one exists, must be like.
I feel like we lose out on interesting discussions here where someone says something that pattern matches to something an evangelical apologist might say. It’s like we’re all of a sudden worried about losing a debate with a Christian instead of entertaining and discussing interesting ideas. We’re among friends here, we don’t need to worry about how we frame a discussion so much.
I wish this viewpoint were more common, but judging from the OP’s score, it is still in the minority.
I just picked up Sam Harris’s latest book, The Moral Landscape, which is all about the idea that it is high time science invaded religion’s turf and claimed objective morality as a subject of scientific inquiry.
Perhaps the time has also come for science to reclaim theism and the related set of questions and cosmologies. The future (or perhaps even the present) is rather clearly a place where there are super-powerful beings that create beings like us and generally have total control over their created realities. It’s time we discussed this rationally.
Sam Harris is misguided at best in the major conclusions he draws about objective morality. See this blog post by Sean Carroll, which links to his previous posts on the subject.
My views on “reclaiming” theism are summed up by ata’s previous comment:
I recall a while ago that there was a brief thread where someone was arguing that phlogiston theory was actually correct, as long as you interpret it as identical to the modern scientific model of fire. I react to things like this similarly: theism/God were silly mistakes, let’s move on and not get attached to old terminology. Rehabilitating the idea of “theism” to make it refer to things like the Simulation Hypothesis seems pointless; how does lumping those concepts together with Yahweh (as far as common usage is concerned) help us think about the more plausible ones?
Sam Harris is misguided at best in the major conclusions he draws about objective morality. See this blog post by Sean Carroll, which links to his previous posts on the subject.
Have you read Less Wrong’s metaethics sequence? It and The Moral Landscape reach pretty much the same conclusions, except about the true nature of terminal values, which is a major conclusion, but only one among many.
Sean Carroll, on the other hand, gets absolutely everything wrong.
Given that the full title of the book is “The Moral Landscape: How Science Can Determine Human Values,” I think that conclusion is the major one, and certainly the controversial one. “Science can help us judge things that involve facts” and similar ideas aren’t really news to anyone who understands science. Values aren’t a certain kind of fact.
I don’t see where Sean’s conclusions are functionally different from those in the metaethics sequence. They’re presented in a much less philosophically rigorous form, because Sean is a physicist, not a philosopher (and so am I). For example, this statement of Sean’s:
But there’s no reason why we can’t be judgmental and firm in our personal convictions, even if we are honest that those convictions don’t have the same status as objective laws of nature.
Given that the full title of the book is “The Moral Landscape: How Science Can Determine Human Values,” I think that conclusion is the major one, and certainly the controversial one. “Science can help us judge things that involve facts” and similar ideas aren’t really news to anyone who understands science. Values aren’t a certain kind of fact.
To be accurate, Harris should have inserted the word “Instrumental” before “Values” in his book’s title, and left out the paragraphs where he argues that the well-being of conscious minds is the basis of morality for reasons other than that the well-being of conscious minds is the basis of morality. There would still be at least two thirds of the book left, and there would still be a huge number of people who would find it controversial, and I’m not just talking about religious fundamentalists.
I don’t see where Sean’s conclusions are functionally different from those in the metaethics sequence. They’re presented in a much less philosophically rigorous form, because Sean is a physicist, not a philosopher (and so am I). For example, this statement of Sean’s:
[...]
and this one of Eliezer’s:
[...]
seem to express the same sentiment, to me.
The difference is huge. Eliezer and I do believe that our ‘convictions’ have the same status as objective laws of nature (although we assign lower probability to some of them, obviously).
There would still be at least two thirds of the book left, and there would still be a huge number of people who would find it controversial, and I’m not just talking about religious fundamentalists.
I wouldn’t limit “people who don’t understand science” to “religious fundamentalists,” so I don’t think we really disagree. A huge number of people find evolution to be controversial, too, but I wouldn’t give much credence to that “controversy” in a serious discussion.
The difference is huge. Eliezer (and I) do believe that our ‘convictions’ have the same status as objective laws of nature (although we assign lower probability to some of them, obviously).
The quantum numbers which an electron possesses are the same whether you’re a human or a Pebblesorter. There’s an objectively right answer, and therefore objectively wrong answers. Convictions/terminal values cannot be compared in that way.
If you identify rightness with this huge computational property, then moral judgments are subjunctively objective (like math), subjectively objective (like probability), and capable of being true (like counterfactuals).
but he later says
Finally I realized that there was no foundation but humanity—no evidence pointing to even a reasonable doubt that there was anything else—and indeed I shouldn’t even want to hope for anything else—and indeed would have no moral cause to follow the dictates of a light in the sky, even if I found one.
That’s what the difference is, to me. An electron would have its quantum numbers whether or not humanity existed to discover them. 2 + 2 = 4 is true whether or not humanity is around to think it. Terminal values are higher level, less fundamental in terms of nature, because humanity (or other intelligent life) has to exist in order for them to exist. We can find what’s morally right based on terminal values, but we can’t find terminal values that are objectively right in that they exist whether or not we do.
Careful. The quantum numbers are no more than a basis for describing an electron. I can describe a stick as spanning a distance 3 meters wide and 4 long, while a pebblesorter describes it as being 5 meters long and 0 wide, and we can both be right. The same thing can happen when describing a quantum object.
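The stick example can be checked numerically: rotating the coordinate frame changes the components of the description while leaving the invariant length fixed. A minimal sketch in Python (the lengths 3, 4, and 5 come from the comment above; the rotation angle is chosen to map one observer’s description onto the other’s):

```python
import math

# The pebblesorter's description of the stick: 5 meters long, 0 wide.
x, y = 5.0, 0.0

# Rotate by the angle between the two observers' axes.
theta = math.atan2(4, 3)
xr = x * math.cos(theta) - y * math.sin(theta)
yr = x * math.sin(theta) + y * math.cos(theta)

# The human's description of the same stick: 3 meters one way, 4 the other.
print(round(xr, 6), round(yr, 6))  # 3.0 4.0

# The frame-independent quantity (the length) is unchanged.
print(math.hypot(xr, yr))  # 5.0
```

Both descriptions are right about the invariant; they only disagree about the basis-dependent components.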
I wouldn’t limit “people who don’t understand science” to “religious fundamentalists,” so I don’t think we really disagree. A huge number of people find evolution to be controversial, too, but I wouldn’t give much credence to that “controversy” in a serious discussion.
Okay, let me make my claim stronger, then: a huge number of people who understand science would find the truncated version of TML described above controversial, namely the big fraction of people who usually call themselves moral nihilists or moral relativists.
The quantum numbers which define an electron are the same whether you’re a human or a Pebblesorter. There’s an objectively right answer, and therefore objectively wrong answers. Convictions/terminal values cannot be compared in that way.
I’m saying that there is an objectively right answer, that terminal values can be compared (in a way that is tautological in this case, but that is fundamentally the only way we can determine the truth of anything). See this comment.
Do you believe it is true that “For every natural number x, x = x”? Yes? Why do you believe that? Well, you believe it because for every natural number x, x = x. How do you compare this axiom to “For every natural number x, x != x”?
Anyway, at least one of us is misunderstanding the metaethics sequence, so this exchange is rather pointless unless we want to get into a really complex conversation about a sequence of posts that has to total at least 100,000 words, and I don’t want to. Sorry.
That terminal values are like axioms, not like theorems. That is, they’re the things without which you cannot actually ask the question, “Is this true?”
You can say or write the words “Is”, “this”, and “true” without having axioms related to that question somewhere in your mind, of course, but you can’t mean anything coherent by the sentence. Someone who asks, “Why terminal value A rather than terminal value B?” and expects (or gives) an answer other than “Because of terminal value A, obviously!”* is confused.
*That’s assuming that A really is a terminal value of the person’s moral system. It could be an instrumental value; people have been known to hold false beliefs about their own minds.
I just started reading it, and picked it really because I needed something for the train in a hurry. In part I read the likes of Harris just to get a better understanding of what makes a popular book. As far as I’ve read into Harris’s thesis about objective morality, I see it as rather hopeless: it depends ultimately on the notion of a timeless universal human brain architecture, which is mythical even today, posthuman future aside.
Carroll’s point at the end about attempting to find the ‘objective truth’ about what is the best flavor of ice cream echoes my thoughts so far on the “Moral Landscape”.
The interesting part wasn’t his theory, it was the idea that the entire belief space currently held by religion is now up for grabs.
In regards to ata’s previous comment, I don’t agree at all.
Theism is not some single atomic belief. It is an entire region in belief space. You can pull out many of the sub-beliefs and reduce them to atomic binary questions which slice idea-space, such as:
Was this observable universe created by a superintelligence?
Those in the science camp used to be pretty sure the answer to that was no, but it turns out they may very well be wrong, and the theists may have guessed correctly all along (Simulation Argument).
Did superintelligences intervene in earth’s history? How do they view us from a moral/ethical standpoint? And so on . . .
These questions all have definitive answers, and with enough intelligence/knowledge/computation they are all probably answerable.
You can say “theism/God” were silly mistakes, but how do you rationalize that when we now know that true godlike entities are the likely evolutionary outcome of technological civilizations and common throughout the multiverse?
I don’t think we should reward correct guesses that were made for the wrong reasons (and are only correct by certain stretches of vocabulary). Talking about superintelligences is more precise and avoids vast planes of ambiguity and negative connotations, so why not just do that?
I don’t think it is any stretch of vocabulary to use the word ‘god’ to describe future superintelligences.
If the belief is correct, it can’t also be a silly mistake.
The entire idea that one must choose words carefully to avoid ‘vast planes of ambiguity and negative connotations’ is at the heart of the ‘theism as taboo’ problem.
The SA so far stands to show that the central belief of broad theism is basically correct. Let’s not split hairs on that and just admit it. If that is true however then an entire set of associated and dependent beliefs may also be correct, and a massive probability update is in order.
Avoiding the ‘negative connotations’ to me suggests this flawed process of consciously or sub-consciously distancing any possible mental interpretation of the Singularity and the SA such that it is similar to theistic beliefs.
I suspect most people tend to do this because of belief inertia, the true difficulty of updating, and social signaling issues arising from being associated with a category of people who believe in the wrong versions of a right idea for insufficient reasons.
The SA so far stands to show that the central belief of broad theism is basically correct.
“The universe was created by an intelligence” is the central belief of deism, not theism. Whether or not the intelligence would interact with the universe, for what reasons, and to what ends, are open questions.
Also, at this point I’m more inclined to accept Tegmark’s mathematical universe description than the simulation argument.
wrong versions of a right idea
That seems oxymoronic to me.
There are superficial similarities between the simulation argument and theism, but, for example, the idea of worship/deference in the latter is a major element that the former lacks. The important question is: will using theistic terminology help with clarity and understanding for the simulation argument? The answer does not appear to be yes.
The SA so far stands to show that the central belief of broad theism is basically correct.
“The universe was created by an intelligence” is the central belief of deism, not theism. Whether or not the intelligence would interact with the universe, for what reasons, and to what ends, are open questions.
You’re right, I completely agree with the above in terms of the theism/deism distinction. The SA supports deism while allowing for theism but leaving it as an open question. My term “broad theism” meant to include theism & deism. Perhaps that category already has a term, not quite sure.
Also, at this point I’m more inclined to accept Tegmark’s mathematical universe description than the simulation argument.
I find the SA has much stronger support—Tegmark requires the additional belief that other physical universes exist, for which we could never possibly find evidence for or against.
There are superficial similarities between the simulation argument and theism, but, for example, the idea of worship/deference in the latter is a major element that the former lacks.
Some fraction of simulations probably have creators who desire some form of worship/deference; the SA turns this into a question of frequency or probability. I of course expect that worship-desiring creators are highly unlikely. Regardless, worship is not a defining characteristic of theism.
The important question is: will using theistic terminology help with clarity and understanding for the simulation argument?
I see it as the other way around. The SA gives us a reasonable structure within which to (re)-evaluate theism.
I find the SA has much stronger support—Tegmark requires the additional belief that other physical universes exist, for which we could never possibly find evidence for or against.
How could we find evidence of the universe simulating our own, if we are in a simulation? They’re both logical arguments, not empirical ones.
Regardless, worship is not a defining characteristic of theism.
The SA gives us a reasonable structure within which to (re)-evaluate theism.
I really don’t see what is so desirable about theism that we ought to define it to line up near-perfectly with the simulation argument in order to use it and related terminology. Any rhetorical scaffolding for dealing with Creators that theists have built up over the centuries is dripping with the negative connotations I referenced earlier. What net advantage do we gain by using it?
How could we find evidence of the universe simulating our own, if we are in a simulation? They’re both logical arguments, not empirical ones.
If, say, in 2080 we have created a number of high-fidelity historical recreations of 2010, with billions of sentient virtual humans, that are nearly indistinguishable (from their perspective) from our original 2010, then much of the uncertainty in the argument is eliminated.
(some uncertainty always remains, of course)
The other distinct possibility is that our simulation reaches some endpoint and possible re-integration, at which point it would be obvious.
tl;dr—If you’re going to equate morality with taste, understand that when we measure either of the two, taking agents into account is a crucial factor we can’t leave out.
I’ll be upfront about having not read Sam Harris’ book yet, though I did read the blog review to get a general idea. Nonetheless, I take issue with the following point:
Carroll’s point at the end about attempting to find the ‘objective truth’ about what is the best flavor of ice cream echoes my thoughts so far on the “Moral Landscape”.
I’ve found that an objective truth about the best flavor of ice cream can be found if one figures out which disguised query one is after. (Am I looking for “If I had to guess, what would random person z’s favorite flavor of ice cream be, with no other information?” or am I looking for something else?)
This attempt at making morality too subjective to measure by relating it to taste has always bothered me because people always ignore a main factor here: agents should be part of our computation. When I want to know what flavor of ice cream is best, I take into account people’s preferences. If I want to know what would be the most moral action, I need to take into account its effects on people (or on myself, should I be a virtue ethicist, or how it aligns with my rules, should I be a deontologist). Admittedly the latter is tougher than the former, but that doesn’t mean we have no hope of dealing with it objectively. It just means we have to do the best we can with what we’re given, which may mean a lot of individual subjectivity.
In his book Stumbling on Happiness, Daniel Gilbert writes about studying the subjective as objectively as possible when he decides on the three premises for understanding happiness:
1] Using imperfect tools sucks, but it’s better than no tools.
2] An honest, real-time insider view is going to be more accurate than our current best outside views.
3] Exploit the law of large numbers to get around the imperfections of 1] and 2] (a.k.a. measure often)
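Premise 3] is just the statistical fact that averaging many imperfect measurements beats down noise. A minimal sketch, assuming a hypothetical instrument that is unbiased but noisy (the true value and noise level here are made up for illustration):

```python
import random

random.seed(0)  # deterministic for illustration

TRUE_HAPPINESS = 7.0  # hypothetical quantity we want to measure

def noisy_report():
    # An imperfect tool: unbiased, but with a lot of scatter.
    return TRUE_HAPPINESS + random.gauss(0, 2.0)

# "Measure often": the mean of many noisy reports converges on the truth.
n = 100_000
estimate = sum(noisy_report() for _ in range(n)) / n
print(estimate)  # close to 7.0
```

A single reading can be off by several units, but the average of a hundred thousand readings has a standard error of only about 0.006.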
This attempt at making morality too subjective to measure by relating it to taste has always bothered me because people always ignore a main factor here: agents should be part of our computation.
I perhaps should have elaborated more, or thought through my objection to Harris more clearly, but in essence I believe the problem is not that of finding an objective morality given people’s preferences; it’s objectively determining what people’s preferences should be.
There is an objective best ice cream flavor given a certain person’s mind, but can we say some minds are objectively more correct on the matter of preferring the best ice cream flavor?
My attempt at a universal objective morality might take some maximization of value given our current preferences and then evolve it into the future, maximizing over some time window. Perhaps you need to extend that time window to the very end. This would lead to some form of cosmism—directing everything towards some very long term universal goal.
This post was clearer than your original, and I think we agree more here than we did before, which may partially be an issue of communication styles/methods/etc.
I believe the problem is not that of finding an objective morality given people’s preferences; it’s objectively determining what people’s preferences should be.
This I agree with, but it’s more for the gut response of “I don’t trust people to determine other people’s values.” I wonder if the latter could be handled objectively, but I’m not sure I’d trust humans to do it.
There is an objective best ice cream flavor given a certain person’s mind, but can we say some minds are objectively more correct on the matter of preferring the best ice cream flavor?
My reflex response to this question was “No,” followed by “Wait, wouldn’t I weight human minds much more significantly than raccoons’ if I were figuring out human preferences?” Which I then thought through and latched onto: agents still matter; if I’m trying to model “best ice cream flavor to humans,” I give the rough category of human minds more weight than other minds. Heck, I hardly have a reason to include such other minds at all, and instrumentally they will likely be detrimental. So in that particular generalization we disagree, but I’m getting the feeling we agree here more than I had guessed.
This I agree with, but it’s more for the gut response of “I don’t trust people to determine other people’s values.” I wonder if the latter could be handled objectively, but I’m not sure I’d trust humans to do it.
We already have to deal with this when we raise children. Western societies generally favor granting individuals great leeway in modifying their preferences and shaping the preferences of their children. We also place much less value on the children’s immediate preferences. But even this freedom is not absolute.
I wish this viewpoint were more common, but judging from the OP’s score, it is still in the minority.
Hard to say, my sense is those of us endorsing/sympathizing/tolerant of Will’s position were pretty persuasive in this thread. The OP’s score went up from where it was when I first read the post.
I just picked up Sam Harris’s latest book, The Moral Landscape, which is all about the idea that it is high time science invaded religion’s turf and claimed objective morality as a subject of scientific inquiry.
I’m in complete agreement with Dreaded_Anomaly on this. Harris is excellent on the neurobiology of religion, as an anti-apologist, and as a commentator on the status of atheism as a public force. But he is way out of his depth as a moral philosopher. Carroll’s reaction is pretty much dead on. Even by the standards of the ethical realists, Harris’s arguments just aren’t any good. As philosophy, they’d be unlikely to meet the standards for publication.
Now, once you accept certain controversial things about morality then much of what Harris says does follow. And from what I’ve seen Harris says some interesting things on that score. But it’s hard to get excited when the thesis the book got publicized with is so flawed.
You seem to be dictating that theist beliefs and simulationist beliefs should not be collected together into the same reference class. (The reason for this diktat seems to be that you disrespect the one and are intrigued by the other—but never mind that.)
However, this does not seem to address the point which I think the OP was making. Which seems to be that arguments for (against) theism and arguments for (against) simulationism should be collected together in the same reference class. That if we do so, we discover that many of the counter-arguments that we advance against theist apologetics are (objectively speaking) equally effective against simulationist speculation. Yet (subjectively speaking) we don’t feel they have the same force.
Contempt for those with whom you disagree is one of the most dangerous traps facing an aspiring rationalist. I think that it would be a very good idea if the OP were to produce that posting on charity-in-interpretation which he mentioned.
we discover that many of the counter-arguments that we advance against theist apologetics are (objectively speaking) equally effective against simulationist speculation
I’ve argued rather extensively against religion on this website. Name a single one of those arguments which is equally effective against simulationism.
I’ve argued rather extensively against religion on this website.
That was my impression as well, but when I went looking for those arguments, they were very difficult to find. Perhaps my Google-fu is weak. Help from LW readers is welcome.
I found plenty of places where you spoke disrespectfully about religion, and quite a few places where you cast theists as the villains in your negative examples of rationality (a few arguably straw-men, but mostly fair). But I was surprised that I found very few places where you were actually arguing against religion.
Name a single one of those arguments which is equally effective against simulationism.
Well, the only really clear-cut example of a posting-length argument against religion is based on the “argument from evil”. As such, it is clearly not equally effective against simulationism.
You did make a posting attempting to define the term “supernatural” in a way that struck me as a kind of special pleading tailored to exclude simulationism from the criticism that theism receives as a result of that definition.
This posting rejects the supernatural by defining it as ‘a belief in an explanatory entity which is fundamentally, ontologically mental’. And why is that definition so damning to the supernaturalist program? Well, as I understand it, it is because, by this definition, to believe in the supernatural is anti-reductionist, and a failure of reductionism is simply inconceivable.
I wonder why there is not such a visceral negative reaction to explanatory entities which are fundamentally, ontologically computational? Certainly it is not because we know of at least one reduction of computation. We also know of (or expect to someday know of) at least one reduction of mind.
But even though we can reduce computation, that doesn’t mean we have to reduce it. Respectable people have proposed to explain this universe as fundamentally a computational entity. Tegmark does something similar, speculating that the entire multiverse is essentially a Platonic mathematical structure. So, what justification exists to deprecate a cosmology based on a fundamental mental entity?
...
I only found one small item clearly supporting my claim. Eliezer, in a comment, makes this argument against creationists who invoke the Omphalos hypothesis:
Never mind usefulness, it seems to me that “Evolution by natural selection occurs” and “God made the world and everything in it, but did so in such a way as to make it look exactly as if evolution by natural selection occurred” are not the same hypothesis, that one of them is true and one of them is false, that it is simplicity that leads us to say which is which, and that we do, indeed, prefer the simpler of two theories that make the same predictions, rather than calling them the same theory.
I agree. But take a look at this famous paper by Bostrom. It cleverly sidesteps the objection that simulating an entire universe might be impossibly difficult by instead postulating a simulation of just enough physical detail so as to make it look exactly as if there were a real universe out there. “Are you living in a computer simulation?” “Are we living in a world which only looks like it evolved?” Eliezer chose to post a comment answering the latter question with a no. He has not, so far as I know, done the same with Bostrom’s simulationist speculation.
I’ll chime in that Eliezer provided me with the single, most personally powerful argument that I have against religion. (I’m not as convinced by razor and low-prior arguments, perhaps because I don’t understand them.)
The argument not only pummels religion, it identifies it: religion is the pattern matching that results when you feel around for the best (most satisfying) answer. To paraphrase Eliezer’s argument (if someone knows the post, I’ll link to it; there’s at least this): while you’re in the process of inventing things, there’s nothing preventing you from making your theory as grand as you want. Once you have your maybe-they’re-believing-this-because-that-would-be-a-cool-thing-to-believe lenses on, it all seems very transparent. Especially the vigorous head-nodding in the congregation.
I don’t have so much against pattern matching. I think it has its uses, and religion provides many of them (to feel connected and integrated and purposeful, etc.). But it’s an absurd means of epistemology. I think it’s amazing that religions go from ‘whoever made us must love us and want us to love the world’—which is a very natural pattern for humans to match—to this great detailed web of fabrication. In my opinion, the religions hang themselves with the details. We might speculate about what our creator would be like, but religions make up way too much stuff in way too much detail and then make it dogma. (I already knew the details were wrong, but I learned to recognize the made-up details as the symptom of lacking epistemology to begin with.)
Now that I recognize this pattern (the pattern of finding patterns that feel right, but which have no reason to be true) I see it other places too. It seems pattern matching will occur wherever there is a vacuum of the scientific method. Whenever we don’t know, we guess. I think it takes a lot of discipline to not feel compelled by guesses that resonate with your brain. (It seems it would help if your brain was wired a little differently so that the pattern didn’t resonate as well—but this is just a theory that sounds good.)
I also would like to see a link to that post, if anyone recognizes it.
I’ll agree that to (atheist) me, it certainly seems that one big support for religious belief is the natural human tendency toward wishful thinking. However, it doesn’t do much good to provide convincing arguments against religion as atheists picture it. You need convincing arguments against religion as its practitioners see it.
Once you have your maybe-they’re-believing-this-because-that-would-be-a-cool-thing-to-believe lenses on, it all seems very transparent.
Yeah, I know what you mean. Pity I can’t turn that around and use it against simulationism. :)
I found it: this is the post I meant. But it wasn’t written by Eliezer, sorry. (The comment I linked to in the grandparent resonates with this idea for me, and I might have seen more resonance in older posts.)
You need convincing arguments against religion as its practitioners see it.
I’m confused. I just want to understand religion, and the world in general, better. Are you interested in deconversion?
Pity I can’t turn that around and use it against simulationism. :)
Ha ha. Simulationism is of course a way cool idea. I think the compelling meme behind it though is that we’re being tricked or fooled by something playful. When you deviate from this pattern, the idea is less culturally compelling.
In particular, the word ‘simulation’ doesn’t convey much. If you just mean something that evolves according to rules, then our universe is apparently a simulation already anyway.
You need convincing arguments against religion as its practitioners see it.
I’m confused. I just want to understand religion, and the world in general, better. Are you interested in deconversion?
Whoops! Bad assumption on my part. Sorry. No, I am not particularly interested in turning theists into atheists either, though I am interested in rational persuasion techniques more generally.
“I think we can discern religion’s origins in superstition, which grew out of an overactive adoption of the intentional stance,” he says. “This is a mammalian feature that we share with, say, dogs. If your dog hears the thud of snow falling off the roof and jumps up and barks, the dog is in effect asking, ‘Who’s there?’ not, ‘What’s that?’ The dog is assuming there’s an agent causing the thud. It might be a dangerous agent. The assumption is that when something surprising, unexpected, puzzling happens, treat it as an agent until you learn otherwise. That’s the intentional stance. It’s instinctive.”
The intentional stance is appropriate for self-protection, Dennett explains, and “it’s on a hair trigger. You can’t afford to wait around. You want to have a lot of false positive, a lot of false alarms [...]”
He continues: “Now, the dog just goes back to sleep after a minute. But we, because we have language, we mull it over in our heads and pretty soon we’ve conjured up a hallucinated agent, say, a little forest god or a talking tree or an elf or something ghostly that made that noise. Generally, those are just harmless little quirks that we soon forget. But every now and then, one comes along that has a little bit more staying power. It’s sort of unforgettable. And so it grows. And we share it with a neighbor. And the neighbor says, ‘What do you mean, a talking tree? There’s no talking trees.’ And you say, ‘I could have sworn that tree was talking.’ Pretty soon, the whole village is talking about the talking tree.
Seeing patterns in noise and agency in patterns (especially fate) is probably a large factor in religious belief.
But what I was referring to by pattern matching was something different. Our cultural ideas about the world make lots of patterns, and there are natural ways to complete these patterns. When you hear the completion of these patterns, it can feel very correct, like something you already knew, or especially profound if it pulls together lots of memes.
For example, the Matrix is an idea that resonates with our culture. Everyone believes it on some level, or can relate to the world being like that. The movie was popular but the meme wasn’t the result of the movie—the meme was already there and the movie made it explicit and gave the idea a convenient handle. Human psychology plays a role. The Matrix as a concept has probably always been found in stories as a weak collective meme, but modern technology brought it more immediately and uniformly into our collective awareness.
I think religion is like that. A story that wrote itself from all the loose ends of what we already believe. Religious leaders are good at feeling and completing these collective patterns. Religion is probably in trouble because many of the memes are so anachronistic now. They survive to the extent that the ideas are based on psychology but the other stuff creates dissonance.
This isn’t something to reference (I’m sure there are zillions of books developing this) or a personal theory, it’s more or less a typical view about religion. It explains why there are so many religions differing in details (different things sounded good to different people) but with common threads. (Because the religions evolved together with overlapping cultures and reflect our common psychology.)
In lieu of an extended digression about how to adjust Solomonoff induction for making anthropic predictions, I’ll simply note that having God create the world 5,000 years ago but fake the details of evolution is more burdensome than having a simulator approximate all of physics to an indistinguishable level of detail. Why? Because “God” is more burdensome than “simulator”, God is antireductionist and “simulator” is not, and faking the details of evolution in particular in order to save a hypothesis invented by illiterate shepherds is a more complex specification in the theory than “the laws of physics in general are being approximated”.
To me it seems nakedly obvious that “God faked the details of evolution” is a far more outré and improbable theory than “our universe is a simulation and the simulation is approximate”. I should’ve been able to leave filling in the details as an exercise to the reader.
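The complexity penalty being invoked here can be caricatured numerically. This is a toy sketch, not Solomonoff induction proper: the bit counts below are made-up stand-ins, chosen only to show how “burdensome details” cash out under a description-length prior, where each hypothesis gets prior mass proportional to 2^−K and every extra bit of arbitrary specification halves it.

```python
# Toy description-length prior (illustrative only; the bit counts
# are hypothetical, not measured program lengths).

def prior(description_bits: int) -> float:
    """Unnormalized complexity prior: 2^-K."""
    return 2.0 ** -description_bits

# Hypothetical encodings of the competing hypotheses:
physics_only  = 400        # "the laws of physics, run exactly"
approx_sim    = 400 + 10   # "...approximated by a simulator"
faked_fossils = 400 + 200  # "...plus a deity faking evolution's details"

# The approximation hypothesis adds few bits; the fake-evolution
# hypothesis must specify which details are faked and why.
assert prior(approx_sim) > prior(faked_fossils)

# Ratio between the two 'extra machinery' hypotheses:
ratio = prior(approx_sim) / prior(faked_fossils)  # 2**190
```

Nothing hinges on the particular numbers; the point is only that the penalty compounds exponentially in the number of arbitrary bits a hypothesis tacks on.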
This just means you have a very narrow (Abrahamic) conception of God that not even most Christians have. (At least, most Christians I talk to have super-fuzzy-abstract ideas about Him, and most Jews think of God as ineffable and not personal these days AFAIK.) Otherwise your distinction makes little sense. (This may very well be an argument against ever using the word ‘God’ without additional modifiers (liberal Christian, fundamentalist Christian, Orthodox Jewish, deistic, alien, et cetera), but it’s not an argument that what people sometimes mean by ‘God’ is a wrong idea. Saying ‘simulator’ is just appealing to an audience interested in a different literary genre. Turing equivalence, man!)
Of note is that the less memetically viral religions tend to be saner (because missionary religions mostly appealed to the lowest common denominator of epistemic satisfiability). Buddhism as Buddha taught it is just flat out correct about nearly everything (even if you disagree with his perhaps-not-Good but also not-Superhappy goal of eliminating imperfection/suffering/off-kilteredness). Many Hindu and Jain philosophers were good rationalists (in the sense that Epicurus was a good rationalist), for instance. To a first and third and fifth approximation, every smart person was right about everything they were trying to be right about. Alas, humans are not automatically predisposed to want to be right about the super far mode considerations modern rationalists think to be important.
For many people the word “God” appears to just describe one’s highest conception of good, the north pole of morality. Such as: “God is Love” in Christianity.
From that perspective, I guess God is Rationality for many people here.
This conception lets you do a lot of fun associations. Since morality seems pretty tied up with good epistemology (preferences and beliefs are both types of knowledge, after all), and since knowledge is power (see Eliezer’s posts on engines of cognition), then you would expect this conception of God to not only be the most moral (omnibenevolent) but the most knowledgeable (omniscient) and powerful (omnipotent). Because God embodies correctness He is thus convergent for minds approximating Bayesianism (like math) and has a universally very short description length (omnipresent), and is accessible from many different computations (arguably personal).
To me it seems nakedly obvious that “God faked the details of evolution” is a far more outré and improbable theory than “our universe is a simulation and the simulation is approximate”. I should’ve been able to leave filling in the details as an exercise to the reader.
Trusting ones ‘gut’ impressions of the “nakedly obvious” like that and ‘leaving the details as an exercise’ is a perfectly reasonable thing to do when you have a well-tuned engine of rationality in your possession and you just need to get some intellectual work done.
But my impression of the thrust of the OP was that he was suggesting a bit of time-consuming calibration work so as to improve the tuning of our engines. Looking at our heuristics and biases with a bit of skepticism. Isn’t that what this community is all about?
But enough of this navel gazing! I also would like to see that digression on Solomonoff induction in an anthropic situation.
I found plenty of places where you spoke disrespectfully about religion, and quite a few places where you cast theists as the villains in your negative examples of rationality (a few arguably straw-men, but mostly fair). But I was surprised that I found very few places where you were actually arguing against religion.
Thx. But I don’t read that as arguing against religion. Instead it seems to be an argument against one feature of modern religion—its claim to unfalsifiability (since it deals with Non-Overlapping Magisteria, ‘NOMA’ using the common acronym). Eliezer thinks this is pretty wimpy. He seems to have more respect for old-time religion, like those priests of Baal who stuck their necks out, so to speak, and submitted their claims to empirical testing.
Can this attitude of critical rationalism be redeployed against simulationist claims? Or at least against the claims of those modern simulationists who keep their simulations unfalsifiable and don’t permit interaction between levels of reality? Against people like Bostrom who stipulate that the simulations that they multiply (without necessity) should all be indistinguishable from the real thing—at least to any simulated observer? I will leave that question to the reader. But I don’t think that it qualifies as a posting in which Eliezer argues against religion in toto. He is only arguing against one feature of modern apologetics.
The other part of the argument in that post is that existing religions are not only falsifiable, but have already been falsified by empirical evidence.
It cleverly sidesteps the objection that simulating an entire universe might be impossibly difficult by instead postulating a simulation of just enough physical detail so as to make it look exactly as if there were a real universe out there.
A “Truman Show”-style simulation. Less burdensome on the details—but their main application seems likely to be entertainment. How entertaining are you?
I’ll have to review your arguments to provide a really well informed response. Please allow me roughly 24 hours. But in the meantime, I know I have seen arguments invoking Occam’s razor and “locating the hypothesis” here. I was under the impression that some of those were yours. As I understand those arguments, they apply equally well to theism and simulationism. That is, they don’t completely rule out those hypotheses, but they do suggest that they deserve vanishingly low priors.
Occam’s razor weighs heavily against theism and simulism—for very similar reasons.
Probably a bit more heavily against theism, though. That has a bunch of additional razor-violating nonsense associated with it. It does not seem too unreasonable to claim that the razor weighs more heavily against theism.
arguments invoking Occam’s razor [...] don’t completely rule out those hypotheses, but they do suggest that they deserve vanishingly low priors
“Decoherence is Simple” seems relevant here. It’s about the many-worlds interpretation, but the application to simulation arguments should be fairly straightforward.
I’m afraid I don’t see the application to simulation arguments. You will have to spell it out.
I fully agree with EY that Occam is not a valid argument against MWI. For that matter, I don’t even see it as a valid argument against the Tegmark Ultimate Ensemble. But I do see it as a valid argument against either a Creator (unneeded entity) or a Simulator (also an unneeded entity). The argument against our being part of a simulation is weakened only if we already know that simulations of universes as rich as ours are actually taking place. But we don’t know that. We don’t even know that it is physically and logically possible.
Nevertheless, your mention of MWI and simulation in the same posting brings to mind a question that has always bugged me. Are simulations understood to cover all Everett branches of the simulated world? And if they are understood to cover all branches, is that broad coverage achieved within a single (narrow) Everett branch of the universe doing the simulating?
I’m afraid I don’t see the application to simulation arguments. You will have to spell it out.
My thought was that the post linked in the grandparent argues that we should prefer logically simpler theories but not penalize theories just because they posit unobservable entities, and that some simple theories predict the existence of a simulator.
We don’t even know that [simulations rich enough to explain our experiences are] physically and logically possible.
Yes, the possibility of simulations is taken as a premise of the simulation argument; if you doubt it, then it makes sense to doubt the simulation argument as well.
some simple theories predict the existence of a simulator.
Perhaps we are using the word “simple” in different ways. Bostrom’s assumption is the existence of an entity who wishes to simulate human minds in a way that convinces them that they exist in a giant expanding universe rather than a simulation. How is that “simple”? And, more to the point raised by the OP, how is it simpler than the notion of a Creator who created the universe so as to have some company “in His image and likeness”?
Bostrom is saying that if advanced civilizations have access to enormous amounts of computing power and for some reason want to simulate less-advanced civilizations, then we should expect that we’re in one of the simulations rather than basement-level reality, because the simulations are more numerous. The simulator isn’t an arbitrarily tacked-on detail; rather, it follows from other assumptions about future technologies and anthropic reasoning. These other assumptions might be denied: perhaps simulations are impossible, or maybe anthropic reasoning doesn’t work that way—but they seem more plausible and less gerrymandered than traditional theism.
I’m not convinced of it for a few reasons, but I’d consider it located at least.
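The counting step in that Bostrom-style argument can be made explicit with a minimal sketch. The observer counts here are entirely hypothetical; the point is only that, given a self-sampling assumption over observers of your apparent type, the probability of being simulated is just the fraction of such observers who live in simulations.

```python
# Minimal sketch of the anthropic counting step (hypothetical numbers).

def p_simulated(n_basement: int, sims_per_civ: int) -> float:
    """Fraction of observers who are simulated, assuming each
    basement-level civilization runs `sims_per_civ` ancestor
    simulations with observer counts comparable to its own."""
    n_sim = n_basement * sims_per_civ
    return n_sim / (n_sim + n_basement)

# No simulations at all: certainly basement-level.
assert p_simulated(10, 0) == 0.0
# One simulation per civilization: a coin flip.
assert p_simulated(10, 1) == 0.5
# Many simulations per civilization: near-certainty of being simulated.
assert abs(p_simulated(10, 999) - 0.999) < 1e-12
```

This is why the argument's force depends entirely on the premises that such simulations are feasible and actually get run in large numbers; deny either and the fraction collapses toward zero.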
I would express my opinion of that argument using less litotes. But as to locating the hypotheses, I suppose I agree.
Which leads me to ask, have you read the catechism? Like most Catholic schoolchildren, I was encouraged to memorize much of it in elementary school, though I have since forgotten almost all of it. It also locates one hypothesis, a hypothesis considerably more popular than Bostrom’s.
Perhaps I missed the point of your recommendation. That article by Eliezer seems to argue against the existence of a benevolent God who allows evil and death but does not balance this by endowing humans with immortal souls. Since at least 95% of those who worship Jehovah (to say nothing of Hindus) understand the Deity quite differently, I don’t really see the relevance.
But while I am speaking to you, I’m curious as to whether (in my grandfather comment) I correctly captured the point of your OP?
we discover that many of the counter-arguments that we advance against theist apologetics are (objectively speaking) equally effective against simulationist speculation.
From what I’ve seen, the primary argument for simulationism is anthropic: if simulating a whole universe is possible, then some entity would do it a lot, so there are probably a lot more simulations out there than “basement realities”, so we’re probably in a simulation. What effect MWI has on this, and what other arguments are out there, I don’t know.
Typical atheist arguments focus on it not being necessary for god to exist to explain what we see, and this coupled with a low prior makes theism unjustified—basically the “argument from no good evidence in favor”. This is fine, because the burden of proof is on the theists. But if you find the anthropic argument for the simulation hypothesis good, then that’s one more good argument than theism has.
If creating a whole universe is possible, then some entity would do it a lot, so there are probably a lot more creations out there than “basement realities”, so we’re probably in a creation.
This is fine, because the burden of proof is on the theists. But if you find the anthropic argument for the simulation hypothesis good, then that’s one more good argument than theism has.
Luckily for the preservation of my atheism, I don’t find the ‘anthropic argument’ for the simulation good. And I put the scare quotes there because I don’t think this is what is usually known as an anthropic argument.
This is mere distortion of both the common informal use and advanced formal definitions of the word “atheism”, which is not only unhelpful but such a common religious tactic that you should not be surprised to be downvoted.
What I think of as the informal definition of atheism is something like “the state of not believing in God or gods”. I believe in gods and God, and I take this into account in my human approximation of a decision theory. I’m not yet sure what their intentions are, and I’m not inclined to worship them yet, but by my standards I’m definitely not an atheist. What is your definition of atheism such that it is meaningfully different from ‘not religious’? Why are we throwing a good word like ‘theism’ into the heap of wrong ideas? It’s like throwing out ‘singularity’ because most people pattern match it to Kurzweil, despite the smartest people having perfectly legitimate beliefs about it.
It doesn’t really matter, I just think that it’s sad that so many rationalists consider themselves atheists when by reasonable definition it seems they definitely are not, even if atheism has more correct connotations than the alternatives (though I call myself a Buddhist, which makes the problem way easier). Perhaps I am not seeing the better definition?
It’s like throwing out ‘singularity’ because most people pattern match it to Kurzweil
Possibly a bad example, since a number of people here advocate that. I remember a comment somewhere that people at SIAI were considering renaming it for related reasons.
Possibly a bad example, since a number of people here advocate that. I remember a comment somewhere that SIAI was considering a name change for related reasons.
Here’s the one I remembered (there may have been a couple of other mentions):
Hollerith, if by that you’re referring to the mutant alternate versions of the “Singularity” that have taken over public mindshare, then we can be glad that despite the millions of dollars being poured into them by certain parties, the public has been reluctant to uptake. Still, the Singularity Institute may have to change its name at some point—we just haven’t come up with a really good alternative.
(I agree with this, but do not have a better name to propose.)
I think they’re going to drop the ‘for Artificial Intelligence’ part, but I think they’re keeping the ‘Singularity’ part, since they’re interested in other things besides seed AI that are traditionally ‘Singularitarian’. (Side note: I’m not sure if I should use ‘we’ or ‘they’. I think ‘they’. Nobody at SIAI wants to speak for SIAI, since SIAI is very heterogenous. And anyway I’m just a Visiting Fellow.) The social engineering aspects of the problem are complicated. Accuracy, or memorability? Rationalists should win, after all...
If you believe in a Matrix or in the Simulation Hypothesis, you believe in powerful aliens, not deities. Next!
…
This is mere distortion of both the common informal use and advanced formal definitions of the word “atheism”, which is not only unhelpful but such a common religious tactic that you should not be surprised to be downvoted.
It bothers me when an easily researched, factually incorrect statement is upvoted so many times. There are many different definitions of atheism, but one good one might be:
The book does not define personal or transcendent, but it is unlikely that either would exclude “god is an extradimensional being who created us using a simulation” as a theistic argument. For example, one likely definition of transcendent is:
transcendent: the realm of thought which lies beyond the boundary of possible knowledge, because it consists of objects which cannot be presented to us in intuition, i.e., objects which we can never experience with our senses (sometimes called noumena). The closest we can get to gaining knowledge of the transcendent realm is to think about it by means of ideas. (The opposite of ‘transcendent’ is ‘immanent’.)
[http://staffweb.hkbu.edu.hk/ppp/ksp1/KSPglos.html]
Beings living outside the simulation would definitely qualify as transcendent since we have no way of experiencing their universe. To be clear, I am not saying this is the only possible definition of atheism. I am only saying that it is one reasonable definition of atheism, and to claim that it is not a definition, as Eliezer’s post has done, is factually incorrect.
“Gods are ontologically distinct from creatures, or they’re not worth the paper they’re written on.”—Damien Broderick
Most upper ontologies allow no such ontological distinction. E.g. my default ontology is algorithmic information theory, which allows for tons of things that look like gods.
I agree with the rest of your comment, though. I don’t know what ‘worship’ means yet (is it just having lots of positive affect towards something?), but it makes for a good distinction between religion and not-quite-religion.
Time for me to reread A Human’s Guide to Words, I suppose. But in my head and with Visiting Fellows folk I think I will continue to use an ontological language stolen from theism.
Primarily because I get a lot of glee out of meta-contrarianism and talking in a way that would make stereotypical aspiring rationalists think I was crazy. Secondarily because the language is culturally rich. Tertiarily because I figure out what smart people actually mean when they talk about faith, chakras, souls, et cetera, and it’s fun to rediscover those concepts and find their naturalistic basis. Quaternarily it allows me to practice charity in interpretation and steel-manning of bad arguments. Zerothly (I forgot the most important reason!) it is easier to speak in such a way, which makes it easier to see implications and decompartmentalize knowledge. Senarily it is more aesthetic than rationalistic jargon.
I agree, though I was describing the case where I can do both simultaneously (when I’m talking to people who either don’t mind or join in on the fun). This post was more an example of just not realizing that the use of the word ‘theism’ would have such negative and distracting connotations.
Tertiarily because I figure out what smart people actually mean when they talk about faith, chakras, souls, et cetera, and it’s fun to rediscover those concepts and find their naturalistic basis.
Except I think it’s safe to say this sort of thing typically isn’t what they mean, merely what they perhaps might mean if they were thinking more clearly. And it’s not at all clear how you could find analogs to the more concrete religious ideas (e.g. chakras or the holy trinity).
Quaternarily it allows me to practice charity in interpretation and steel-manning of bad arguments.
If the person would violently disagree that this is in fact what they intended to say, I’m not sure it can be called “charity of interpretation” anymore. And while I agree steel-manning of bad arguments is important, to do it to such an extent seems to be essentially allowing your attention to be hijacked by anyone with a hypothesis to privilege.
That advice makes sense for general audiences. Your average Christian might read a version of the Simulation argument written with theistic language as an endorsement of their beliefs. But I really doubt posters here would.
Frank Tipler actually produced a simulation argument as an endorsement of Christian belief. Along with some interesting cosmology making it possible for this universe to simulate itself! (It’s easy when the accessible quantity of computronium tends to infinity as the age of the universe approaches its limit.) In Tipler’s theory, God may not exist yet, but a kind of Singularity will create Him.
Of course, the average Christian has not yet heard of Tipler, nor would said Christian accept the endorsement. But it is out there.
One issue I’ve never understood about Tipler is how he got from theism to Christianity using the Omega Point argument. It seems very similar to the SMBC cartoon Eliezer already linked to. Tipler’s argument is a plausibility argument for maybe, something, sort of like a deity if you squint at it. Somehow that then gives rise to Christianity, theology and all.
It’s worth pointing out that we now know that the universe’s expansion is accelerating, which would rule out the omega point even if it were plausible before.
IIRC, Tipler had that covered. A universe of infinite duration allows us to use eons of future time to simulate a single second of time in the current era. Something like Hilbert’s hotel with infinitely many rooms.
But please don’t ask me to actually defend Tipler’s mumbo-jumbo.
I don’t think it can be defended any more. I picked it up a few weeks ago, read a few chapters, and thought, do I want to read any more given that he requires the universe to be closed? Dark energy would seem to forbid a Big Crunch and render even the early parts of his model moot.
Sweet! Wikipedia’s image for Physical Cosmology, including your Dark Energy link, is the cosmic microwave background map from the WMAP mission. That was the first mission I worked on at NASA. My job, as junior-underling attitude control engineer, was to come up with some way to salvage the medium-cost, medium-risk mission if a certain part failed, and to help babysit the spacecraft during the least fun midnight-to-noon shift. Still, it feels good to have been a tiny part of something that has made a difference in how we understand our universe.
Disclaimer: My unofficial opinions, not NASA’s. Blah, blah, blah.
If you assume a Tegmark multiverse — that all definable entities actually exist — then it seems to follow that:
All malicious deprivation — some mind recognizing another mind’s definable possible pleasure, and taking steps to deny that mind’s pleasure — implies the actual existence of the pleasure it is intended to deprive;
All benevolent relief — some mind recognizing another mind’s definable possible suffering, and taking steps to alleviate that suffering — implies the actual existence of the suffering it is intended to relieve.
It does not follow from the fact that I am motivated to prevent certain kinds of suffering/pleasure, that said suffering/pleasure is “definable” in the sense I think you mean it here. That is, my brain is sufficiently screwy that it’s possible for me to want to prevent something that isn’t actually logically possible in the first place.
Since religions are human inventions, I would guess that any comprehensive simulation program already produces all conceivable religions.
But I’m guessing that you meant to talk about the simulation of all conceivable gods. That is another matter entirely. Even with unlimited computronium, you can only simulate possible gods—gods not entailing any logical contradictions. There may not be any such gods.
This doesn’t affect Tipler’s argument though. Tipler does not postulate God as simulated. Tipler postulates God as the simulator.
I’m not sure. I only read the first book—“Physics of Immortality”. But I would suppose that he doesn’t actually try to prove the truth of Christianity—he might be satisfied to simply make Christian doctrine seem less weird and impossible.
There’s a buttload of thinking that’s been done in this language in earlier times, and if we use the language, that suggests we can reuse the thinking, which is pretty exciting if true. But mostly I don’t think it is.
(For any discredited theory along the lines of gods or astrology, you want to focus on its advocates from the past more than from the present, because the past is when the world’s best minds were unironically into these things.)
There’s a buttload of thinking that’s been done in this language in earlier times, and if we use the language, that suggests we can reuse the thinking, which is pretty exciting if true.
There’s also the opportunity for a kind of metatheology, which might lead to some really interesting insights into humans and how they relate to the world.
There’s a buttload of thinking that’s been done in this language in earlier times, and if we use the language, that suggests we can reuse the thinking, which is pretty exciting if true. But mostly I don’t think it is.
Tangentially, it’s important to note that most followers of a philosophy/religion are going to be stupid compared to their founders, so we should probably just look at what founders had to say. (Christ more than His disciples, Buddha more than Zen practitioners, Freud and Jung more than their followers, et cetera.) Many people who are now considered brilliant/inspiring had something legitimately interesting to say. History is a decent filter for intellectual quality.
That said, everything you’d ever need to know is covered by a combination of Terence McKenna and Gautama Buddha. ;)
Tangentially, it’s important to note that most followers of a philosophy/religion are going to be stupid compared to their founders, so we should probably just look at what founders had to say.
This doesn’t follow. The founder of a religion is likely to be more intelligent or at least more insightful than an average follower, but a religion of any size is going to have so many followers that a few of them are almost guaranteed to be more insightful than the founder was; founding a religion is a rare event that doesn’t have any obvious correlation with intelligence.
I’d also be willing to bet that founding a successful religion selects for a somewhat different skill set than elucidating the same religion would.
The founder of a religion is likely to be more intelligent or at least more insightful than an average follower, but a religion of any size is going to have so many followers that a few of them are almost guaranteed to be more insightful than the founder was; founding a religion is a rare event that doesn’t have any obvious correlation with intelligence.
You’re mostly right; upvoted. I suppose I was thinking primarily of Buddhism, which was pretty damn exceptional in this regard. Buddha was ridiculously prodigious. There are many Christians with better ideas about Christianity than Christ, and the same is probably true of Zoroaster and Mohammed, though I’m not aware of them. Actually, if anyone has links to interesting writing from smart non-Sufi Muslims, I’d be interested.
I’d also be willing to bet that founding a successful religion selects for a somewhat different skill set than elucidating the same religion would.
This kind of depends on criteria for success. If number of adherents is what matters then I agree, if correctness is what matters then it’s probably a very similar skill set. Look at what postmodernists would probably call Eliezer’s Singularity subreligion, for instance.
There are many Christians with better ideas about Christianity than Christ, and the same is probably true of Zoroaster and Mohammed, though I’m not aware of them.
There’s a serious problem with this in Christianity in that you have to figure out what the founder actually said in the first place, which is very much an open problem (and perhaps in Buddhism as well, but I am less familiar with it at the moment).
For example, just this century with the rediscovery of the Gospel of Thomas you get a whole new set of information which is... challenging to integrate, to say the least, and also very interesting.
About half of the sayings are different (usually earlier, better) versions of stuff already in the synoptics, but there are some new gems—check out 22:
When you make the two into one, and when you make the inner like the outer and the outer like the inner, and the upper like the lower, and when you make male and female into a single one, so that the male will not be male nor the female be female, when you make eyes in place of an eye, a hand in place of a hand, a foot in place of a foot, an image in place of an image, then you will enter [the kingdom]
Or 108:
Whoever drinks from my mouth will become like me; I myself shall become that person, and the hidden things will be revealed to him
Those are certainly things that weren’t in the bible before that people would have put a lot of work into interpreting if they had been, but “gems” is not the word I’d use.
Time for me to reread A Human’s Guide to Words, I suppose. But in my head and with Visiting Fellows folk I think I will continue to use an ontological language stolen from theism.
Just be careful of true believers that may condemn you for heresy for using the other tribe’s jargon! ;)
‘Worship’ or ‘Elder Rituals’ could not be reasonably construed as a relevant reply to your thread.
Eliezer is trying to define theism to mean religion, I think, so that atheism is still a defensible state of belief. I guess I’m okay with this, but it makes me sad to lose what I saw as a perfectly good word.
I know one isn’t supposed to use web comics to argue a point, but I’ve always found SMBC to be the exception to that rule. Maybe not always to get the point across so much as to lighten the mood.
When I want to discuss something, I use a relevant SMBC comic to get people to locate the thing I am talking about. I say “decision theory ethics” and people glaze over. I link this and they get it immediately.
Not relevant: when people want to use god-particles, etc., to justify belief in God, I use this. It is significantly more effective than any argument I’ve employed.
Yes. Next. I think this post demonstrates the need for downvotes to be a greater-than-1.0 multiple of upvotes. What argument is there otherwise other than the status quo?
What argument is there otherwise other than the status quo?
To the extent that positive karma is a reward for the poster and an indication of what people desire to see (both very true), we should not expect a distribution about the mean of zero. If the average comment is desirable and deserving of reward, then the average comment will be upvoted.
I didn’t say anything about centering on zero, and agree that would be incorrect. However, modification to the current method is likely challenging and no one’s actually going to do any novel karma engineering here so it was a silly comment for me to make.
“Gods are ontologically distinct from creatures, or they’re not worth the paper they’re written on.”—Damien Broderick
If you believe in a Matrix or in the Simulation Hypothesis, you believe in powerful aliens, not deities. Next!
There’s also no hint of worship which everyone else on the planet thinks is a key part of the definition of a religion; if you believe that Cthulhu exists but not Jehovah, and you hate and fear Cthulhu and don’t engage in any Elder Rituals, you may be superstitious but you’re not yet religious.
This is mere distortion of both the common informal use and advanced formal definitions of the word “atheism”, which is not only unhelpful but such a common religious tactic that you should not be surprised to be downvoted.
Also http://www.smbc-comics.com/index.php?db=comics&id=1817
A Simulator would be ontologically distinct from creatures like us—for any definition of ontologically distinct I can imagine wanting to use. The Simulation Hypothesis is a metaphysical hypothesis in the most literal sense: it’s a hypothesis about what our physical universe really is, beyond the wave function.
Yeah, Will’s theism in this post isn’t the theism of believers, priests or academic theologians. And with certain audiences confusion would likely result, and so this language should be avoided with those audiences. But I think we’re somewhat more sophisticated than that, and if there are reasons to use theistic vocabulary then I don’t see why we shouldn’t. I’m assuming Will has these reasons, of course.
Keep in mind, the divine hasn’t always been supernatural. Greek gods were part of natural explanations of phenomena, Aristotle’s god was just there to provide a causal stopping place, Hobbes’s god was physical, etc. We don’t have to kowtow to the usage of present religious authorities. God has always been a flexible word; there is no particular reason to take modern science to be falsifying God instead of telling us what a god, if one exists, must be like.
I feel like we lose out on interesting discussions here where someone says something that pattern matches to something an evangelical apologist might say. It’s like we’re all of a sudden worried about losing a debate with a Christian instead of entertaining and discussing interesting ideas. We’re among friends here, we don’t need to worry about how we frame a discussion so much.
I wish this viewpoint were more common, but judging from the OP’s score, it is still in the minority.
I just picked up Sam Harris’s latest book—the Moral Landscape, which is all about the idea that it is high time science invaded religion’s turf and claimed objective morality as a scientific inquiry.
Perhaps the time has also come for science to reclaim theism and the related set of questions and cosmologies. The future (or perhaps even the present) is rather clearly a place where there are super-powerful beings that create beings like us and generally have total control over their created realities. It’s time we discussed this rationally.
Sam Harris is misguided at best in the major conclusions he draws about objective morality. See this blog post by Sean Carroll, which links to his previous posts on the subject.
My views on “reclaiming” theism are summed up by ata’s previous comment:
Have you read Less Wrong’s metaethics sequence? It and The Moral Landscape reach pretty much the same conclusions, except about the true nature of terminal values, which is a major conclusion, but only one among many.
Sean Carroll, on the other hand, gets absolutely everything wrong.
Given that the full title of the book is “The Moral Landscape: How Science Can Determine Human Values,” I think that conclusion is the major one, and certainly the controversial one. “Science can help us judge things that involve facts” and similar ideas aren’t really news to anyone who understands science. Values aren’t a certain kind of fact.
I don’t see where Sean’s conclusions are functionally different from those in the metaethics sequence. They’re presented in a much less philosophically rigorous form, because Sean is a physicist, not a philosopher (and so am I). For example, this statement of Sean’s:
and this one of Eliezer’s:
seem to express the same sentiment, to me.
If you really object to Sean’s writing, take a look at Russell Blackford’s review of the book. (He is a philosopher, and a transhumanist one at that.)
To be accurate, Harris should have inserted the word “Instrumental” before “Values” in his book’s title, and left out the paragraphs where he argues that the well-being of conscious minds is the basis of morality for reasons other than that the well-being of conscious minds is the basis of morality. There would still be at least two-thirds of the book left, and there would still be a huge number of people who would find it controversial, and I’m not just talking about religious fundamentalists.
The difference is huge. Eliezer and I do believe that our ‘convictions’ have the same status as objective laws of nature (although we assign lower probability to some of them, obviously).
I wouldn’t limit “people who don’t understand science” to “religious fundamentalists,” so I don’t think we really disagree. A huge number of people find evolution to be controversial, too, but I wouldn’t give much credence to that “controversy” in a serious discussion.
The quantum numbers which an electron possesses are the same whether you’re a human or a Pebblesorter. There’s an objectively right answer, and therefore objectively wrong answers. Convictions/terminal values cannot be compared in that way.
I understand what Eliezer means when he says:
but he later says
That’s what the difference is, to me. An electron would have its quantum numbers whether or not humanity existed to discover them. 2 + 2 = 4 is true whether or not humanity is around to think it. Terminal values are higher level, less fundamental in terms of nature, because humanity (or other intelligent life) has to exist in order for them to exist. We can find what’s morally right based on terminal values, but we can’t find terminal values that are objectively right in that they exist whether or not we do.
Careful. The quantum numbers are no more than a basis for describing an electron. I can describe a stick as spanning a distance 3 meters wide and 4 long, while a pebblesorter describes it as being 5 meters long and 0 wide, and we can both be right. The same thing can happen when describing a quantum object.
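The stick point above can be checked numerically. This is a minimal sketch (the two observers and their axes are the hypothetical ones from the comment): each frame assigns different width/length components, but the frame-invariant length agrees.

```python
# Two observers describe the same stick in rotated coordinate frames.
# Components differ; the Euclidean length is invariant under rotation.
import math

def invariant_length(width, length):
    # sqrt(width^2 + length^2), unchanged by rotating the axes
    return math.hypot(width, length)

# Observer A: 3 m wide, 4 m long; observer B (rotated axes): 0 m wide, 5 m long.
a = invariant_length(3, 4)
b = invariant_length(0, 5)
print(a, b)  # → 5.0 5.0
```

Which decomposition you get depends on your basis; what both observers must agree on is the invariant.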
Yes, I should have been more careful with my language. Thanks for pointing it out. Edited.
Okay, let me make my claim stronger then: a huge number of people who understand science would find the truncated version of TML described above controversial: a big fraction of the people who usually call themselves moral nihilists or moral relativists.
I’m saying that there is an objectively right answer, that terminal values can be compared (in a way that is tautological in this case, but that is fundamentally the only way we can determine the truth of anything). See this comment.
Do you believe it is true that “For every natural number x, x = x”? Yes? Why do you believe that? Well, you believe it because for every natural number x, x = x. How do you compare this axiom to “For every natural number x, x != x”?
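As a sketch of how self-justifying the axiom is, here is the comparison in Lean 4 (a hypothetical formalization, included only to show that reflexivity both proves the first “axiom” and refutes its rival):

```lean
-- `x = x` holds for every natural number, by reflexivity alone.
theorem every_nat_eq_self : ∀ x : Nat, x = x :=
  fun _ => rfl

-- The rival "axiom" `x ≠ x` is refuted by the very same principle:
-- instantiate it at 0 and feed it the reflexivity proof.
theorem not_every_nat_ne_self : ¬ ∀ x : Nat, x ≠ x :=
  fun h => h 0 rfl
```

The “comparison” never steps outside the machinery that reflexivity itself provides, which is the point being made about terminal values.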
Anyway, at least one of us is misunderstanding the metaethics sequence, so this exchange is rather pointless unless we want to get into a really complex conversation about a sequence of posts that has to total at least 100,000 words, and I don’t want to. Sorry.
In quick approximation, what was this conclusion?
That terminal values are like axioms, not like theorems. That is, they’re the things without which you cannot actually ask the question, “Is this true?”
You can say or write the words “Is”, “this”, and “true” without having axioms related to that question somewhere in your mind, of course, but you can’t mean anything coherent by the sentence. Someone who asks, “Why terminal value A rather than terminal value B?” and expects (or gives) an answer other than “Because of terminal value A, obviously!”* is confused.
*That’s assuming that A really is a terminal value of the person’s moral system. It could be an instrumental value; people have been known to hold false beliefs about their own minds.
I just started reading it and picked it really because I needed something for the train in a hurry. In part I read the likes of Harris just to get a better understanding of what makes a popular book. As far as I’ve read into Harris’s thesis about objective morality, I see it as rather hopeless, depending ultimately on the notion of a timeless universal human brain architecture, which is mythical even today, posthuman future aside.
Carroll’s point at the end about attempting to find the ‘objective truth’ about what is the best flavor of ice cream echoes my thoughts so far on the “Moral Landscape”.
The interesting part wasn’t his theory, it was the idea that the entire belief space currently held by religion is now up for grabs.
In regards to ata’s previous comment, I don’t agree at all.
Theism is not some single atomic belief. It is an entire region in belief space. You can pull out many of the sub-beliefs and reduce them to atomic binary questions which slice idea-space, such as:
Was this observable universe created by a superintelligence?
Those in the science camp used to be pretty sure the answer to that was no, but it turns out they may very well be wrong, and the theists may have guessed correctly all along (Simulation Argument).
Did superintelligences intervene in earth’s history? How do they view us from a moral/ethical standpoint? And so on . . .
These questions all have definitive answers, and with enough intelligence/knowledge/computation they are all probably answerable.
You can say “theism/God” were silly mistakes, but how do you rationalize that when we now know that true godlike entities are the likely evolutionary outcome of technological civilizations and common throughout the multiverse?
I try not to rationalize.
I don’t think we should reward correct guesses that were made for the wrong reasons (and are only correct by certain stretches of vocabulary). Talking about superintelligences is more precise and avoids vast planes of ambiguity and negative connotations, so why not just do that?
I don’t think it is any stretch of vocabulary to use the word ‘god’ to describe future superintelligences.
If the belief is correct, it can’t also be a silly mistake.
The entire idea that one must choose words carefully to avoid ‘vast planes of ambiguity and negative connotations’ is at the heart of the ‘theism as taboo’ problem.
The SA so far stands to show that the central belief of broad theism is basically correct. Let’s not split hairs on that and just admit it. If that is true however then an entire set of associated and dependent beliefs may also be correct, and a massive probability update is in order.
Avoiding the ‘negative connotations’ to me suggests this flawed process of consciously or sub-consciously distancing any possible mental interpretation of the Singularity and the SA such that it is similar to theistic beliefs.
I suspect most people tend to do this because of belief inertia, the true difficulty of updating, and social signaling issues arising from being associated with a category of people who believe in the wrong versions of a right idea for insufficient reasons.
“The universe was created by an intelligence” is the central belief of deism, not theism. Whether or not the intelligence would interact with the universe, for what reasons, and to what ends, are open questions.
Also, at this point I’m more inclined to accept Tegmark’s mathematical universe description than the simulation argument.
That seems oxymoronic to me.
There are superficial similarities between the simulation argument and theism, but, for example, the idea of worship/deference in the latter is a major element that the former lacks. The important question is: will using theistic terminology help with clarity and understanding for the simulation argument? The answer does not appear to be yes.
You’re right, I completely agree with the above in terms of the theism/deism distinction. The SA supports deism while allowing for theism but leaving it as an open question. My term “broad theism” meant to include theism & deism. Perhaps that category already has a term, not quite sure.
I find the SA has much stronger support—Tegmark requires the additional belief that other physical universes exist, for which we can never possibly find evidence for or against.
Some fraction of simulations probably have creators who desire some form of worship/deference, the SA turns this into a question of frequency or probability. I of course expect that worship-desiring creators are highly unlikely. Regardless, worship is not a defining characteristic of theism.
I see it as the other way around. The SA gives us a reasonable structure within which to (re)-evaluate theism.
How could we find evidence of the universe simulating our own, if we are in a simulation? They’re both logical arguments, not empirical ones.
I really don’t see what is so desirable about theism that we ought to define it to line up near-perfectly with the simulation argument in order to use it and related terminology. Any rhetorical scaffolding for dealing with Creators that theists have built up over the centuries is dripping with the negative connotations I referenced earlier. What net advantage do we gain by using it?
If, say, in 2080 we have created a number of high-fidelity historical recreations of 2010, with billions of sentient virtual humans, that are nearly indistinguishable (from their perspective) from our original 2010, then much of the uncertainty in the argument is eliminated.
(some uncertainty always remains, of course)
The other distinct possibility is that our simulation reaches some endpoint and possible re-integration, at which point it would be obvious.
tl;dr—If you’re going to equate morality with taste, understand that when we measure either of the two, including agents in the process is a huge factor we can’t leave out.
I’ll be upfront about having not read Sam Harris’ book yet, though I did read the blog review to get a general idea. Nonetheless, I take issue with the following point:
I’ve found that an objective truth about the best flavor of ice cream can be found if one figures out which disguised query they’re after. (Am I looking for “If I had to guess, what would random person z’s favorite flavor of ice cream be, with no other information?” or am I looking for something else).
This attempt at making morality too subjective to measure by relating it to taste has always bothered me because people always ignore a main factor here: agents should be part of our computation. When I want to know what flavor of ice cream is best, I take into account people’s preferences. If I want to know what would be the most moral action, I need to take into account its effects on people (or myself, should I be a virtue ethicist, or how it aligns with my rules, should I be a deontologist). Admittedly the latter is tougher than the former, but that doesn’t mean we have no hope of dealing with it objectively. It just means we have to do the best we can with what we’re given, which may mean a lot of individual subjectivity.
In his book Stumbling on Happiness, Daniel Gilbert writes about studying the subjective as objectively as possible when he decides on the three premises for understanding happiness: 1] Using imperfect tools sucks, but it’s better than no tools. 2] An honest, real-time insider view is going to be more accurate than our current best outside views. 3] Abuse the law of large numbers to get around the imperfections of 1] and 2] (a.k.a. measure often).
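Premise 3] is just the law of large numbers at work. A minimal sketch, with made-up numbers (the “true” happiness level and the noise bound are assumptions for illustration, not anything from Gilbert):

```python
# "Measure often": many imperfect real-time self-reports, averaged,
# converge on the underlying value (law of large numbers).
# TRUE_VALUE and NOISE are hypothetical illustration parameters.
import random

TRUE_VALUE = 7.0   # assumed underlying happiness level
NOISE = 2.0        # each self-report errs by up to this much

def noisy_report(rng):
    # one honest but imperfect real-time self-report
    return TRUE_VALUE + rng.uniform(-NOISE, NOISE)

rng = random.Random(0)
estimate = sum(noisy_report(rng) for _ in range(100_000)) / 100_000
print(round(estimate, 1))  # hugs 7.0 despite every single report being noisy
```

No individual measurement is trustworthy, but the average of enough of them is, which is the whole case for treating subjective reports as data.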
I perhaps should have elaborated more, or thought through my objection to Harris more clearly, but in essence I believe the problem is not that of finding an objective morality given people’s preferences; it’s objectively determining what people’s preferences should be.
There is an objective best ice cream flavor given a certain person’s mind, but can we say some minds are objectively more correct on the matter of preferring the best ice cream flavor?
My attempt at a universal objective morality might take some maximization of value given our current preferences and then evolve it into the future, maximizing over some time window. Perhaps you need to extend that time window to the very end. This would lead to some form of cosmism—directing everything towards some very long term universal goal.
This post was clearer than your original, and I think we agree more here than we did before, which may partially be an issue of communication styles/methods/etc.
This I agree with, but it’s more for the gut response of “I don’t trust people to determine other people’s values.” I wonder if the latter could be handled objectively, but I’m not sure I’d trust humans to do it.
My reflex response to this question was “No,” followed by “Wait, wouldn’t I weight human minds much more significantly than raccoons if I was figuring out human preferences?” Which I then thought through and latched on to “Agents still matter; if I’m trying to model ‘best ice cream flavor to humans’, I give the rough category of ‘human-minds’ more weight than other minds. Heck, I hardly have a reason to include such minds, and instrumentally they will likely be detrimental.” So in that particular generalization, we disagree, but I’m getting the feeling we agree here more than I had guessed.
We already have to deal with this when we raise children. Western societies generally favor granting individuals great leeway in modifying their preferences and shaping the preferences of their children. We also place much less value on the children’s immediate preferences. But even this freedom is not absolute.
Hard to say, my sense is those of us endorsing/sympathizing/tolerant of Will’s position were pretty persuasive in this thread. The OP’s score went up from where it was when I first read the post.
I’m in complete agreement with Dreaded_Anomaly on this. Harris is excellent on the neurobiology of religion, as an anti-apologist and as a commentator on the status of atheism as a public force. But he is way out of his depths as a moral philosopher. Carroll’s reaction is pretty much dead on. Even by the standards of the ethical realists Harris’s arguments just aren’t any good. As philosophy, they’d be unlikely to meet the standards for publication.
Now, once you accept certain controversial things about morality then much of what Harris says does follow. And from what I’ve seen Harris says some interesting things on that score. But it’s hard to get excited when the thesis the book got publicized with is so flawed.
You seem to be dictating that theist beliefs and simulationist beliefs should not be collected together into the same reference class. (The reason for this diktat seems to be that you disrespect the one and are intrigued by the other—but never mind that.)
However, this does not seem to address the point which I think the OP was making. Which seems to be that arguments for (against) theism and arguments for (against) simulationism should be collected together in the same reference class. That if we do so, we discover that many of the counter-arguments that we advance against theist apologetics are (objectively speaking) equally effective against simulationist speculation. Yet (subjectively speaking) we don’t feel they have the same force.
Contempt for those with whom you disagree is one of the most dangerous traps facing an aspiring rationalist. I think that it would be a very good idea if the OP were to produce that posting on charity-in-interpretation which he mentioned.
Next!
I’ve argued rather extensively against religion on this website. Name a single one of those arguments which is equally effective against simulationism.
That was my impression as well, but when I went looking for those arguments, they were very difficult to find. Perhaps my Google-fu is weak. Help from LW readers is welcome.
I found plenty of places where you spoke disrespectfully about religion, and quite a few places where you cast theists as the villains in your negative examples of rationality (a few arguably straw-men, but mostly fair). But I was surprised that I found very few places where you were actually arguing against religion.
Well, the only really clear-cut example of a posting-length argument against religion is based on the “argument from evil”. As such, it is clearly not equally effective against simulationism.
You did make a posting attempting to define the term “supernatural” in a way that struck me as a kind of special pleading tailored to exclude simulationism from the criticism that theism receives as a result of that definition.
This posting rejects the supernatural by defining it as ‘a belief in an explanatory entity which is fundamentally, ontologically mental’. And why is that definition so damning to the supernaturalist program? Well, as I understand it, it is because, by this definition, to believe in the supernatural is anti-reductionist, and a failure of reductionism is simply inconceivable.
I wonder why there is not such a visceral negative reaction to explanatory entities which are fundamentally, ontologically computational? Certainly it is not because we know of at least one reduction of computation. We also know of (or expect to someday know of) at least one reduction of mind.
But even though we can reduce computation, that doesn’t mean we have to reduce it. Respectable people have proposed to explain this universe as fundamentally a computational entity. Tegmark does something similar, speculating that the entire multiverse is essentially a Platonic mathematical structure. So, what justification exists to deprecate a cosmology based on a fundamental mental entity?
...
I only found one small item clearly supporting my claim. Eliezer, in a comment, makes this argument against creationists who invoke the Omphalos hypothesis:
I agree. But take a look at this famous paper by Bostrom. It cleverly sidesteps the objection that simulating an entire universe might be impossibly difficult by instead postulating a simulation of just enough physical detail so as to make it look exactly as if there were a real universe out there. “Are you living in a computer simulation?” “Are we living in a world which only looks like it evolved?” Eliezer chose to post a comment answering the latter question with a no. He has not, so far as I know, done the same with Bostrom’s simulationist speculation.
I’ll chime in that Eliezer provided me with the single, most personally powerful argument that I have against religion. (I’m not as convinced by razor and low-prior arguments, perhaps because I don’t understand them.)
The argument not only pummels religion, it identifies it: religion is the pattern matching that results when you feel around for the best (most satisfying) answer. To paraphrase Eliezer’s argument (if someone knows the post, I’ll link to it; there’s at least this): while you’re in the process of inventing things, there’s nothing preventing you from making your theory as grand as you want. Once you have your maybe-they’re-believing-this-because-that-would-be-a-cool-thing-to-believe lenses on, it all seems very transparent. Especially the vigorous head-nodding in the congregation.
I don’t have so much against pattern matching. I think it has its uses, and religion provides many of them (to feel connected and integrated and purposeful, etc.). But it’s an absurd means of epistemology. I think it’s amazing that religions go from ‘whoever made us must love us and want us to love the world’—which is a very natural pattern for humans to match—to this great detailed web of fabrication. In my opinion, the religions hang themselves with the details. We might speculate about what our creator would be like, but religions make up way too much stuff in way too much detail and then make it dogma. (I already knew the details were wrong, but I learned to recognize the made-up details as the symptom of lacking epistemology to begin with.)
Now that I recognize this pattern (the pattern of finding patterns that feel right, but which have no reason to be true) I see it other places too. It seems pattern matching will occur wherever there is a vacuum of the scientific method. Whenever we don’t know, we guess. I think it takes a lot of discipline to not feel compelled by guesses that resonate with your brain. (It seems it would help if your brain was wired a little differently so that the pattern didn’t resonate as well—but this is just a theory that sounds good.)
I also would like to see a link to that post, if anyone recognizes it.
I’ll agree that to (atheist) me, it certainly seems that one big support for religious belief is the natural human tendency toward wishful thinking. However, it doesn’t do much good to provide convincing arguments against religion as atheists picture it. You need convincing arguments against religion as its practitioners see it.
Yeah, I know what you mean. Pity I can’t turn that around and use it against simulationism. :)
I found it: this is the post I meant. But it wasn’t written by Eliezer, sorry. (The comment I linked to in the grandparent that was resonates with this idea for me, and I might have seen more resonance in older posts.)
I’m confused. I just want to understand religion, and the world in general, better. Are you interested in deconversion?
Ha ha. Simulationism is of course a way cool idea. I think the compelling meme behind it though is that we’re being tricked or fooled by something playful. When you deviate from this pattern, the idea is less culturally compelling.
In particular, the word ‘simulation’ doesn’t convey much. If you just mean something that evolves according to rules, then our universe is apparently a simulation already anyway.
Thx. That is a good posting, as was the posting to which it responded.
Whoops! Bad assumption on my part. Sorry. No, I am not particularly interested in turning theists into atheists either, though I am interested in rational persuasion techniques more generally.
Dennett tells a similar “agentification” story:
I think that is usually called Patternicity these days. See:
Seeing patterns in noise and agency in patterns (especially fate) is probably a large factor in religious belief.
But what I was referring to by pattern matching was something different. Our cultural ideas about the world make lots of patterns, and there are natural ways to complete these patterns. When you hear the completion of these patterns, it can feel very correct, like something you already knew, or especially profound if it pulls together lots of memes.
For example, the Matrix is an idea that resonates with our culture. Everyone believes it on some level, or can relate to the world being like that. The movie was popular but the meme wasn’t the result of the movie—the meme was already there and the movie made it explicit and gave the idea a convenient handle. Human psychology plays a role. The Matrix as a concept has probably always been found in stories as a weak collective meme, but modern technology brought it more immediately and uniformly into our collective awareness.
I think religion is like that. A story that wrote itself from all the loose ends of what we already believe. Religious leaders are good at feeling and completing these collective patterns. Religion is probably in trouble because many of the memes are so anachronistic now. They survive to the extent that the ideas are based on psychology but the other stuff creates dissonance.
This isn’t something to reference (I’m sure there are zillions of books developing this) or a personal theory, it’s more or less a typical view about religion. It explains why there are so many religions differing in details (different things sounded good to different people) but with common threads. (Because the religions evolved together with overlapping cultures and reflect our common psychology.)
In lieu of an extended digression about how to adjust Solomonoff induction for making anthropic predictions, I’ll simply note that having God create the world 5,000 years ago but fake the details of evolution is more burdensome than having a simulator approximate all of physics to an indistinguishable level of detail. Why? Because “God” is more burdensome than “simulator”, God is antireductionist and “simulator” is not, and faking the details of evolution in particular in order to save a hypothesis invented by illiterate shepherds is a more complex specification in the theory than “the laws of physics in general are being approximated”.
To me it seems nakedly obvious that “God faked the details of evolution” is a far more outré and improbable theory than “our universe is a simulation and the simulation is approximate”. I should’ve been able to leave filling in the details as an exercise to the reader.
Extended digression about how to adjust Solomonoff induction for making anthropic predictions plz
This just means you have a very narrow (Abrahamic) conception of God that not even most Christians have. (At least, most Christians I talk to have super-fuzzy-abstract ideas about Him, and most Jews think of God as ineffable and not personal these days AFAIK.) Otherwise your distinction makes little sense. (This may very well be an argument against ever using the word ‘God’ without additional modifiers (liberal Christian, fundamentalist Christian, Orthodox Jewish, deistic, alien, et cetera), but it’s not an argument that what people sometimes mean by ‘God’ is a wrong idea. Saying ‘simulator’ is just appealing to an audience interested in a different literary genre. Turing equivalence, man!)
Of note is that the less memetically viral religions tend to be saner (because missionary religions mostly appealed to the lowest common denominator of epistemic satisfiability). Buddhism as Buddha taught it is just flat out correct about nearly everything (even if you disagree with his perhaps-not-Good but also not-Superhappy goal of eliminating imperfection/suffering/off-kilteredness). Many Hindu and Jain philosophers were good rationalists (in the sense that Epicurus was a good rationalist), for instance. To a first and third and fifth approximation, every smart person was right about everything they were trying to be right about. Alas, humans are not automatically predisposed to want to be right about the super far mode considerations modern rationalists think to be important.
For many people the word “God” appears to just describe one’s highest conception of good, the north pole of morality. Such as: “God is Love” in Christianity.
From that perspective, I guess God is Rationality for many people here.
People might say that, but they don’t actually believe it. They’re just trying to obfuscate the fact that they believe something insane.
This conception lets you do a lot of fun associations. Since morality seems pretty tied up with good epistemology (preferences and beliefs are both types of knowledge, after all), and since knowledge is power (see Eliezer’s posts on engines of cognition), then you would expect this conception of God to not only be the most moral (omnibenevolent) but the most knowledgeable (omniscient) and powerful (omnipotent). Because God embodies correctness He is thus convergent for minds approximating Bayesianism (like math) and has a universally very short description length (omnipresent), and is accessible from many different computations (arguably personal).
Delicious delicious metacontrarianism...
It’s like Scholastic mad-libs!
Preferences are entangled with beliefs, certainly, but I don’t see why I would consider them to be knowledge.
What is your operational definition of knowledge?
Trusting one’s ‘gut’ impressions of the “nakedly obvious” like that and ‘leaving the details as an exercise’ is a perfectly reasonable thing to do when you have a well-tuned engine of rationality in your possession and you just need to get some intellectual work done.
But my impression of the thrust of the OP was that he was suggesting a bit of time-consuming calibration work so as to improve the tuning of our engines. Looking at our heuristics and biases with a bit of skepticism. Isn’t that what this community is all about?
But enough of this navel gazing! I also would like to see that digression on Solomonoff induction in an anthropic situation.
Seconding Kevin’s request. Seeing a sentence like that with no followup is very frustrating.
The post you are looking for is Religion’s Claim to be Non-Disprovable
Thx. But I don’t read that as arguing against religion. Instead it seems to be an argument against one feature of modern religion—its claim to unfalsifiability (since it deals with Non-Overlapping Magisteria, ‘NOMA’ using the common acronym). Eliezer thinks this is pretty wimpy. He seems to have more respect for old-time religion, like those priests of Baal who stuck their necks out, so to speak, and submitted their claims to empirical testing.
Can this attitude of critical rationalism be redeployed against simulationist claims? Or at least against the claims of those modern simulationists who keep their simulations unfalsifiable and don’t permit interaction between levels of reality? Against people like Bostrom who stipulate that the simulations that they multiply (without necessity) should all be indistinguishable from the real thing—at least to any simulated observer? I will leave that question to the reader. But I don’t think that it qualifies as a posting in which Eliezer argues against religion in toto. He is only arguing against one feature of modern apologetics.
The other part of the argument in that post is that existing religions are not only falsifiable, but have already been falsified by empirical evidence.
A “Truman Show”-style simulation. Less burdensome on the details—but their main application seems likely to be entertainment. How entertaining are you?
I’ll have to review your arguments to provide a really well informed response. Please allow me roughly 24 hours. But in the meantime, I know I have seen arguments invoking Occam’s razor and “locating the hypothesis” here. I was under the impression that some of those were yours. As I understand those arguments, they apply equally well to theism and simulationism. That is, they don’t completely rule out those hypotheses, but they do suggest that they deserve vanishingly low priors.
Occam’s razor weighs heavily against theism and simulism—for very similar reasons.
Probably a bit more heavily against theism, though. That has a bunch of additional razor-violating nonsense associated with it. It does not seem too unreasonable to claim that the razor weighs more heavily against theism.
“Decoherence is Simple” seems relevant here. It’s about the many-worlds interpretation, but the application to simulation arguments should be fairly straightforward.
I’m afraid I don’t see the application to simulation arguments. You will have to spell it out.
I fully agree with EY that Occam is not a valid argument against MWI. For that matter, I don’t even see it as a valid argument against the Tegmark Ultimate Ensemble. But I do see it as a valid argument against either a Creator (unneeded entity) or a Simulator (also an unneeded entity). The argument against our being part of a simulation is weakened only if we already know that simulations of universes as rich as ours are actually taking place. But we don’t know that. We don’t even know that it is physically and logically possible.
Nevertheless, your mention of MWI and simulation in the same posting brings to mind a question that has always bugged me. Are simulations understood to cover all Everett branches of the simulated world? And if they are understood to cover all branches, is that broad coverage achieved within a single (narrow) Everett branch of the universe doing the simulating?
My thought was that the post linked in the grandparent argues that we should prefer logically simpler theories but not penalize theories just because they posit unobservable entities, and that some simple theories predict the existence of a simulator.
Yes, the possibility of simulations is taken as a premise of the simulation argument; if you doubt it, then it makes sense to doubt the simulation argument as well.
Perhaps we are using the word “simple” in different ways. Bostrom’s assumption is the existence of an entity who wishes to simulate human minds in a way that convinces them that they exist in a giant expanding universe rather than a simulation. How is that “simple”? And, more to the point raised by the OP, how is it simpler than the notion of a Creator who created the universe so as to have some company “in His image and likeness”.
Bostrom is saying that if advanced civilizations have access to enormous amounts of computing power and for some reason want to simulate less-advanced civilizations, then we should expect that we’re in one of the simulations rather than basement-level reality, because the simulations are more numerous. The simulator isn’t an arbitrarily tacked-on detail; rather, it follows from other assumptions about future technologies and anthropic reasoning. These other assumptions might be denied: perhaps simulations are impossible, or maybe anthropic reasoning doesn’t work that way—but they seem more plausible and less gerrymandered than traditional theism.
Have you read the paper? I’m not convinced of it for a few reasons, but I’d consider it located at least.
Yes, I had read Bostrom’s paper.
I would express my opinion of that argument using less litotes. But as to locating the hypotheses, I suppose I agree.
Which leads me to ask, have you read the catechism? Like most Catholic schoolchildren, I was encouraged to memorize much of it in elementary school, though I have since forgotten almost all of it. It also locates one hypothesis, a hypothesis considerably more popular than Bostrom’s.
My new word of the day. It’s not a bad one!
(Somewhat related: for those that haven’t seen it, Eliezer’s Beyond the Reach of God is an excellent article.)
Perhaps I missed the point of your recommendation. That article by Eliezer seems to argue against the existence of a benevolent God who allows evil and death but does not balance this by endowing humans with immortal souls. Since at least 95% of those who worship Jehovah (to say nothing of Hindus) understand the Deity quite differently, I don’t really see the relevance.
But while I am speaking to you, I’m curious as to whether (in my grandfather comment) I correctly captured the point of your OP?
From what I’ve seen, the primary argument for simulationism is anthropic: if simulating a whole universe is possible, then some entity would do it a lot, so there are probably a lot more simulations out there than “basement realities”, so we’re probably in a simulation. What effect MWI has on this, and what other arguments are out there, I don’t know.
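The anthropic step described above can be sketched as a simple counting exercise. This is a toy illustration under a self-sampling assumption (you are equally likely to be any observer, simulated or not); the observer counts are hypothetical placeholders, not anything Bostrom's paper commits to.

```python
def probability_simulated(num_simulated: int, num_basement: int) -> float:
    """Chance of being simulated if all observers are weighted equally."""
    return num_simulated / (num_simulated + num_basement)

# If, hypothetically, each basement-level civilization runs a thousand
# indistinguishable ancestor simulations, almost all observers are simulated:
p = probability_simulated(num_simulated=1000, num_basement=1)
```

All the argumentative weight sits in the premises feeding the counts—whether such simulations are possible and whether anyone would run them—not in the arithmetic itself.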
Typical atheist arguments focus on it not being necessary for god to exist to explain what we see, and this coupled with a low prior makes theism unjustified—basically the “argument from no good evidence in favor”. This is fine, because the burden of proof is on the theists. But if you find the anthropic argument for the simulation hypothesis good, then that’s one more good argument than theism has.
If creating a whole universe is possible, then some entity would do it a lot, so there are probably a lot more creations out there than “basement realities”, so we’re probably in a creation.
Luckily for the preservation of my atheism, I don’t find the ‘anthropic argument’ for the simulation good. And I put the scare quotes there, because I don’t think this is what is usually known as an anthropic argument.
“Powerful aliens” has connotations that may be even more inaccurate; it makes me think of Klingon warlords or something.
What I think of as the informal definition of atheism is something like “the state of not believing in God or gods”. I believe in gods and God, and I take this into account in my human approximation of a decision theory. I’m not yet sure what their intentions are, and I’m not inclined to worship them yet, but by my standards I’m definitely not an atheist. What is your definition of atheism such that it is meaningfully different from ‘not religious’? Why are we throwing a good word like ‘theism’ into the heap of wrong ideas? It’s like throwing out ‘singularity’ because most people pattern match it to Kurzweil, despite the smartest people having perfectly legitimate beliefs about it.
It doesn’t really matter, I just think that it’s sad that so many rationalists consider themselves atheists when by any reasonable definition it seems they definitely are not, even if atheism has more correct connotations than the alternatives (though I call myself a Buddhist, which makes the problem way easier). Perhaps I am not seeing the better definition?
Possibly a bad example, since a number of people here advocate that. I remember a comment somewhere that people at SIAI were considering renaming it for related reasons.
Here’s the one I remembered (there may have been a couple of other mentions):
(I agree with this, but do not have a better name to propose.)
I think they’re going to drop the ‘for Artificial Intelligence’ part, but I think they’re keeping the ‘Singularity’ part, since they’re interested in other things besides seed AI that are traditionally ‘Singularitarian’. (Side note: I’m not sure if I should use ‘we’ or ‘they’. I think ‘they’. Nobody at SIAI wants to speak for SIAI, since SIAI is very heterogeneous. And anyway I’m just a Visiting Fellow.) The social engineering aspects of the problem are complicated. Accuracy, or memorability? Rationalists should win, after all...
You could go with “it” and sidestep the problem.
Thanks!
It bothers me when an easily researched, factually incorrect statement is upvoted so many times. There are many different definitions of atheism, but one good one might be:
The book does not define personal or transcendent, but it is unlikely that either would exclude “god is an extradimensional being who created us using a simulation” as a theistic argument. For example, one likely definition of transcendent is:
Beings living outside the simulation would definitely qualify as transcendent since we have no way of experiencing their universe. To be clear, I am not saying this is the only possible definition of atheism. I am only saying that it is one reasonable definition of atheism, and to claim that it is not a definition, as Eliezer’s post has done, is factually incorrect.
Most upper ontologies allow no such ontological distinction. E.g. my default ontology is algorithmic information theory, which allows for tons of things that look like gods.
I agree with the rest of your comment, though. I don’t know what ‘worship’ means yet (is it just having lots of positive affect towards something?), but it makes for a good distinction between religion and not-quite-religion.
Time for me to reread A Human’s Guide to Words, I suppose. But in my head and with Visiting Fellows folk I think I will continue to use an ontological language stolen from theism.
I’m curious to know why you prefer this language. I kind of like it too, but can’t really put a finger on why.
Primarily because I get a lot of glee out of meta-contrarianism and talking in a way that would make stereotypical aspiring rationalists think I was crazy. Secondarily because the language is culturally rich. Tertiarily because I figure out what smart people actually mean when they talk about faith, chakras, souls, et cetera, and it’s fun to rediscover those concepts and find their naturalistic basis. Quaternarily it allows me to practice charity in interpretation and steel-manning of bad arguments. Zerothly (I forgot the most important reason!) it is easier to speak in such a way, which makes it easier to see implications and decompartmentalize knowledge. Senarily it is more aesthetic than rationalistic jargon.
I agree that verbal masturbation is fun, but it’s not helpful when you’re trying to actually communicate with people. Consider purchasing contrarian glee and communication separately.
That’s a good point, but where do you recommend getting contrarian glee separate from communication?
Cached thoughts: Crackpot Theory (48 readers)? Closet Survey, The Strangest Thing An AI Could Tell You, The Irrationality Game? Omegle?
I wish crackpot theories were considered a legitimate form of art. They’re like fantasy worldbuilding but better.
Here, of course.
I agree, though I was describing the case where I can do both simultaneously (when I’m talking to people who either don’t mind or join in on the fun). This post was more an example of just not realizing that the use of the word ‘theism’ would have such negative and distracting connotations.
Except I think it’s safe to say this sort of thing typically isn’t what they mean, merely what they perhaps might mean if they were thinking more clearly. And it’s not at all clear how you could find analogs to the more concrete religious ideas (e.g. chakras or the holy trinity).
If the person would violently disagree that this is in fact what they intended to say, I’m not sure it can be called “charity of interpretation” anymore. And while I agree steel-manning of bad arguments is important, to do it to such an extent seems to be essentially allowing your attention to be hijacked by anyone with a hypothesis to privilege.
I think Ben from TakeOnIt put it well:
There’s definitely something deeply appealing about theistic language. That’s what makes it so dangerous.
That advice makes sense for general audiences. Your average Christian might read a version of the Simulation argument written with theistic language as an endorsement of their beliefs. But I really doubt posters here would.
Frank Tipler actually produced a simulation argument as an endorsement of Christian belief. Along with some interesting cosmology making it possible for this universe to simulate itself! (It’s easy when the accessible quantity of computronium tends to infinity as the age of the universe approaches its limit.) In Tipler’s theory, God may not exist yet, but a kind of Singularity will create Him.
Of course, the average Christian has not yet heard of Tipler, nor would said Christian accept the endorsement. But it is out there.
One issue I’ve never understood about Tipler is how he got from theism to Christianity using the Omega Point argument. It seems very similar to the SMBC cartoon Eliezer already linked to. Tipler’s argument is a plausibility argument for maybe, something, sort of like a deity if you squint at it. Somehow that then gives rise to Christianity with the theology along with it.
It’s worth pointing out that we now know that the universe’s expansion is accelerating, which would rule out the omega point even if it were plausible before.
IIRC, Tipler had that covered. A universe of infinite duration allows us to use eons of future time to simulate a single second of time in the current era. Something like the hotel with infinitely many rooms.
But please don’t ask me to actually defend Tipler’s mumbo-jumbo.
I don’t think it can be defended any more. I picked it up a few weeks ago, read a few chapters, and thought, do I want to read any more given that he requires the universe to be closed? Dark energy would seem to forbid a Big Crunch and render even the early parts of his model moot.
Sweet! Wikipedia’s image for Physical Cosmology, including your Dark Energy link, is the cosmic microwave background map from the WMAP mission. That was the first mission I worked with NASA. My job, as junior-underling attitude control engineer, was to come up with some way to salvage the medium cost, medium-risk mission if a certain part failed, and to help babysit the spacecraft during the least fun midnight-to-noon shift. Still, it feels good to have been a tiny part of something that has made a difference in how we understand our universe.
Disclaimer: My unofficial opinions, not NASA’s. Blah, blah, blah.
I think you duplicated my post.
So I did. Context in Recent Comments unfortunately only reaches so far.
How does he get from there to Christianity in particular?
If you are assuming infinite computronium you may as well go ahead and assume simulation of all of the conceivable religions!
I suppose that leaves you in a position of Pascal’s Gang Mugging.
That’s basically Hindu theology in a nutshell. Or more accurately, Pascal’s Gang Maybe Mugging Maybe Hugging.
If you assume a Tegmark multiverse — that all definable entities actually exist — then it seems to follow that:
All malicious deprivation — some mind recognizing another mind’s definable possible pleasure, and taking steps to deny that mind’s pleasure — implies the actual existence of the pleasure it is intended to deprive;
All benevolent relief — some mind recognizing another mind’s definable possible suffering, and taking steps to alleviate that suffering — implies the actual existence of the suffering it is intended to relieve.
It does not follow from the fact that I am motivated to prevent certain kinds of suffering/pleasure, that said suffering/pleasure is “definable” in the sense I think you mean it here. That is, my brain is sufficiently screwy that it’s possible for me to want to prevent something that isn’t actually logically possible in the first place.
Since religions are human inventions, I would guess that any comprehensive simulation program already produces all conceivable religions.
But I’m guessing that you meant to talk about the simulation of all conceivable gods. That is another matter entirely. Even with unlimited computronium, you can only simulate possible gods—gods not entailing any logical contradictions. There may not be any such gods.
This doesn’t affect Tipler’s argument though. Tipler does not postulate God as simulated. Tipler postulates God as the simulator.
I’m not sure. I only read the first book—“Physics of Immortality”. But I would suppose that he doesn’t actually try to prove the truth of Christianity—he might be satisfied to simply make Christian doctrine seem less weird and impossible.
Here’s a direct comparison of the two that I made.
There’s a buttload of thinking that’s been done in this language in earlier times, and if we use the language, that suggests we can reuse the thinking, which is pretty exciting if true. But mostly I don’t think it is.
(For any discredited theory along the lines of gods or astrology, you want to focus on its advocates from the past more than from the present, because the past is when the world’s best minds were unironically into these things.)
There’s also the opportunity for a kind of metatheology, which might lead to some really interesting insights into humans and how they relate to the world.
Tangentially, it’s important to note that most followers of a philosophy/religion are going to be stupid compared to their founders, so we should probably just look at what founders had to say. (Christ more than His disciples, Buddha more than Zen practitioners, Freud and Jung more than their followers, et cetera.) Many people who are now considered brilliant/inspiring had something legitimately interesting to say. History is a decent filter for intellectual quality.
That said, everything you’d ever need to know is covered by a combination of Terence McKenna and Gautama Buddha. ;)
This doesn’t follow. The founder of a religion is likely to be more intelligent or at least more insightful than an average follower, but a religion of any size is going to have so many followers that a few of them are almost guaranteed to be more insightful than the founder was; founding a religion is a rare event that doesn’t have any obvious correlation with intelligence.
I’d also be willing to bet that founding a successful religion selects for a somewhat different skill set than elucidating the same religion would.
You’re mostly right; upvoted. I suppose I was thinking primarily of Buddhism, which was pretty damn exceptional in this regard. Buddha was ridiculously prodigious. There are many Christians with better ideas about Christianity than Christ, and the same is probably true of Zoroaster and Mohammed, though I’m not aware of them. Actually, if anyone has links to interesting writing from smart non-Sufi Muslims, I’d be interested.
This kind of depends on criteria for success. If number of adherents is what matters then I agree, if correctness is what matters then it’s probably a very similar skill set. Look at what postmodernists would probably call Eliezer’s Singularity subreligion, for instance.
There’s a serious problem with this in Christianity in that you have to figure out what the founder actually said in the first place, which is very much an open problem concerning Christianity (and perhaps Buddhism as well, but I am less familiar with it at the moment).
For example, just this century with the rediscovery of the Gospel of Thomas you get a whole new set of information which is challenging to integrate, to say the least, and also very interesting.
About half of the sayings are different (usually earlier, better) versions of stuff already in the synoptics, but there are some new gems—check out 22:
Or 108:
Those are certainly things that weren’t in the bible before that people would have put a lot of work into interpreting if they had been, but “gems” is not the word I’d use.
Point taken. I was thinking of number of adherents.
Also I should note that by ‘intelligence’ I mostly meant ‘predisposition to say insightful or truthful things’, which is rather different from g.
Just be careful of true believers that may condemn you for heresy for using the other tribe’s jargon! ;)
‘Worship’ or ‘Elder Rituals’ could not be reasonably construed as a relevant reply to your thread.
Eliezer is trying to define theism to mean religion, I think, so that atheism is still a defensible state of belief. I guess I’m okay with this, but it makes me sad to lose what I saw as a perfectly good word.
Strongly agree. Better to avoid synonyms when possible. ‘Simulationism’ is ugly and doesn’t seem sufficiently general in the way ‘theism’ does.
I know one isn’t supposed to use web comics to argue a point, but I’ve always found SMBC is the exception to that rule. Maybe not always to get the point across so much as to lighten the mood.
When I want to discuss something, I use a relevant SMBC comic to get people to locate the thing I am talking about. I say decision theory ethics, people glaze over. I link this and they get it immediately.
Not relevant: when people want to use god-particles, etc, to justify belief in God, I use this. It is significantly more effective than any argument I’ve employed.
Yes. Next. I think this post demonstrates the need for downvotes to be a greater than 1.0 multiple of upvotes. What argument is there otherwise other than the status quo?
To the extent that positive karma is a reward for the poster and an indication of what people desire to see (both very true), we should not expect a distribution about the mean of zero. If the average comment is desirable and deserving of reward, then the average comment will be upvoted.
I didn’t say anything about centering on zero, and agree that would be incorrect. However, modification to the current method is likely challenging and no one’s actually going to do any novel karma engineering here so it was a silly comment for me to make.
[Deleted: Gods “run an intrinsically infinitary inference system”.] ETA: agreed, silly.
is summarily rejected. What does ‘intrinsically infinitary’ even mean?
For example, outside the domain of Goedel’s theorems.