If I take the outside view and account for the fact that thirty-something percent of people, including a lot of really smart people, believe in Christianity, and that at least personally I have radically changed my worldview a whole bunch of times, then it seems like I should assign at least a 5% or so probability to Christianity being true. How, therefore, does Pascal’s Wager not apply to me? Even if we make it simpler by taking away the infinite utilities and merely treating Heaven as ten thousand years or so of the same level of happiness as the happiest day in my life, and treating Hell as ten thousand years or so of the same level of unhappiness as the unhappiest day in my life, the argument seems like it should still apply.
My admittedly very cynical point of view is to assume that, to a first-order approximation, most people don’t have beliefs in the sense that LW uses the word. People just say words, mostly words that they’ve heard people they like say. You should be careful not to ascribe too much meaning to the words most people say.
In general, I think it’s a mistake to view other people through an epistemic filter. View them through an instrumental filter instead: don’t ask “what do these people believe?” but “what do these people do?” The first question might lead you to conclude that religious people are dumb. The second question might lead you to explore the various instrumental ways in which religious communities are winning relative to atheist communities, e.g. strong communal support networks, a large cached database of convenient heuristics for dealing with life situations, etc.
I believe there are plenty of statistics showing that suddenly acquiring a large sum of money doesn’t, in the long term, make you a) richer; b) happier. Of course, from everyone I say this to, I hear the reply “I would know how to make myself happy”, but obviously this can’t be true for everyone. In this case, I prefer to believe I’m the average guy...
I think the current consensus is that in fact having more money does make you happier.[1] As for richer, I can look at how I’ve lived in the past and observe that I’ve been pretty effective at being frugal and not spending money just because I have it. Of course it’s possible that a sudden cash infusion 10 years ago would have broken all those good habits, but I don’t see any obvious reason to believe it.
[1] See e.g. this (though I’d be a bit cautious given where it’s being reported) and the underlying research.
As I said, this is the standard answer I get, albeit a little more sophisticated than average. Unless you’re already rich and still have good saving habits, I see a very obvious reason why you would have broken those habits: you suddenly don’t need to save anymore. All the motivational structures you have in place to save suddenly lose meaning. Anyway, I don’t trust myself that much in the long run.
I am not aware of any valid inference from “I hear this often” to “this is wrong” :-).
Unless you’re already rich and still have good saving habits, I see a very obvious reason why you would have broken those habits: you suddenly don’t need to save anymore.
I suppose it depends on what you mean by “rich” and “need”. I don’t feel much like giving out the details of my personal finances here just to satisfy Some Guy On The Internet that I don’t fit his stereotypes, so I’ll just say that my family’s spending habits haven’t changed much (and our saving has accordingly increased in line with income) over ~ 20 years in which our income has increased substantially and our wealth (not unusually, since wealth can be zero or negative) has increased by orders of magnitude. On the other hand, I’m not retired just yet and trying to retire at this point would be uncomfortable, though probably not impossible.
So, sure, it’s possible that a more sudden and larger change might have screwed me up in various ways. But, I repeat, I see no actual evidence for that, and enough evidence that my spending habits are atypical of the population that general factoids of the form “suddenly acquiring a lot of money doesn’t make you richer in the long run” aren’t obviously applicable. (Remark: among those who suddenly acquire a lot of money, I suspect that frequent lottery players are heavily overrepresented. So it’s not even the general population that’s relevant here, but a population skewed towards financial incompetence.)
don’t feel much like giving out the details of my personal finances here just to satisfy Some Guy On The Internet that I don’t fit his stereotypes
I agree that you shouldn’t, I’ll just say that indeed you do fit the stereotype.
So, sure, it’s possible that a more sudden and larger change might have screwed me up in various ways. But, I repeat, I see no actual evidence for that
I think I’ve traced the source of disagreement; let me know if you agree with this analysis. It’s a neat exercise in tracking priors. You think that your saving ratio is constant as a function of the derivative of your income, while I think that there are breakdown thresholds at large values of the derivative. The disagreement then is about the probability of a breakdown threshold. I, using the outside view, say “according to these statistics, normal people have a (say) 0.8 probability of a breakdown, so you have the same probability”; you, using the inside view, say “using my model of my mind, I say that the extension of the linear model into the far region is still reliable”. The disagreement then transfers to “how well can one know one’s own mind or motivational structure”, that is, “if I say something about my mind, what is the probability that it is true?” I don’t know your opinion on this, but I guess it’s high, correct? In my case, it’s low (NB: it’s low for myself). From this follow all the opinions that we have expressed!
Remark: among those who suddenly acquire a lot of money, I suspect that frequent lottery players are heavily overrepresented. So it’s not even the general population that’s relevant here, but a population skewed towards financial incompetence.
Well, famous-then-forgotten celebrities (in any field: sports, music, movies, etc.) fit the category, but I don’t know how much influence that has. Anyway, I have the feeling that financial competence is a rare thing to have in the general population, so even if the prior is skewed towards incompetence, that is not much of an effect.
I’ll just say that indeed you do fit the stereotype.
Just for information: Are you deliberately trying to be unpleasant?
You think that your saving ratio is constant as a function of the derivative of your income
First of all, a terminological question: when you say “the derivative of your income” do you actually mean “your income within a short period”? -- i.e., the derivative w.r.t. time of “total income so far” or something of the kind? It sounds as if you do, and I’ll assume that’s what you mean in what follows.
So, anyway, I’m not quite sure whether you’re trying to describe my opinions about (1) the population at large and/or (2) me in particular. My opinion about #1 is that most people spend almost all of their income; maybe their savings:income ratio is approximately constant, or maybe it’s nearer the truth to say that their savings in absolute terms are constant, or maybe something else. But the relevant point (I think) is that most people are, roughly, in the habit of spending until they start running out of money. My opinion about #2 (for which I have pretty good evidence) is that, at least within the range of income I’ve experienced to date, my spending is approximately constant in absolute terms and doesn’t go up much with increasing income or increasing wealth. In particular, I have strong evidence that (1) many people basically execute the algorithm “while I have money: spend some” and (2) I don’t.
(I should maybe add that I don’t think this indicates any particular virtue or brilliance on my part, though of course it’s possible that my undoubted virtue and brilliance are factors. It’s more that most of the things I like to do are fairly cheap, and that I’m strongly motivated to reduce the risk of Bad Things in the future like running out of money.)
I think that there are breakdown thresholds at large values of the derivative
Always possible (for people in general, for people-like-me, for me-in-particular). Though, at the risk of repeating myself, I think the failure of sudden influxes of money to make people richer in the long term is probably more a matter of executing that “spend until you run out” algorithm. Do you know whether any of the research on this stuff resolves that question?
I, using the outside view, [...]; you, using the inside view, [...]
I try to use both, and so far as I can tell I’m using both here. I’m not just looking at my model of the insides of my mind and saying “I can see I wouldn’t do anything so stupid” (I certainly don’t trust my introspection that much); so far as I can tell, I would make the same predictions about anyone else with a financial history resembling mine.
Now, for sure, I could be badly wrong. I might be fooling myself when I say I’m judging my likely behaviour in this hypothetical situation on the basis of my (somewhat visible externally) track record, rather than my introspective confidence in my mental processes. I might be wrong about how much evidence that track record is. I might be wrong in my model of why so many people end up in money trouble even if they suddenly acquire a pile of money; maybe it’s a matter of those “breakpoints” rather than of a habit of spending until one runs out. Etc. So I’m certainly not saying I know that me-10-years-ago would have been helped rather than harmed by a sudden windfall. Only that, so far as I can tell, I most likely would have been.
even if the prior is skewed towards incompetence, that is not much of an effect.
I suggest that people who play the lottery a lot are probably, on balance, exceptionally incompetent, and that those people are probably overrepresented among windfall recipients.
I had a quick look for more information about the effects of suddenly getting money.
This article on Yahoo!!!!! Finance describes a study showing that lottery winners are more likely to end up bankrupt if they win more. That seems to fit with my theory that big lottery wins are correlated with buying a lot of lottery tickets, hence with incompetence. It quotes another study saying that people spend more money on lottery tickets if they’re invited to do it in small increments (which is maybe very weak evidence against the “breakpoint” theory, which has the size-of-delta → tendency-to-spend relationship going the other way—except that the quantities involved here are tiny). And it speculates (without citing any sort of evidence) that what’s going on with bankrupt lottery winners is that they keep spending until they run out, which is also my model.
This paper (PDF) finds that people in Germany are more likely to become entrepreneurs if they have made “windfall gains” (inheritance, donations, lottery winnings, payments from things like life insurance), suggesting that at least some people do more productive things with windfalls than just spend them all.
This paper [EDITED to add: ungated version] looks at an unusual lottery in 1832, and according to this blog post finds that on balance winners did better, with those who were already better off improving and those who were worse off being largely unaffected.
[EDITED to add more information about the 1832 lottery now that I can actually read the paper:] Some extracts from the paper: “Participation was nearly universal” (so, maybe, no selection-for-incompetence effect); “The prize in this lottery was a claim on a parcel of land” (so, different from lotteries with monetary prizes); “lottery losers look similar to lottery winners in a series of placebo checks” (so, again, maybe no selection for incompetence); “the poorest third of lottery winners were essentially as poor as the poorest third of lottery losers” (so the wins didn’t help the poorest, but don’t seem to have harmed them either).
Just for information: Are you deliberately trying to be unpleasant?
No, even though I speculated that the sentence you’re answering could have been interpreted that way. Just to be clear, the stereotype here is “people who, when told that the general population usually ends up bankrupt after a big lottery win, say ‘I won’t, I know how to save’”. Now I ask you: do you think you don’t fit the stereotype?
Anyway, now I have a clearer picture of your model: you think that there are no threshold phenomena whatsoever, not only for you, but for the general population. You believe that people execute the same algorithm regardless of the amount of money it is applied to. So your point is not
“I (probably) don’t have breakdown threshold”
but
“I (probably) don’t execute a bad financial algorithm”
That clarifies some more things.
Besides, I’m a little sad that you didn’t answer the main point, which was “How well do you think you know the inside mechanism of your mind?”
That seems to fit with my theory that big lottery wins are correlated with buying a lot of lottery tickets, hence with incompetence.
That would be bad Bayesian probability. The correct way to treat it is “That seems to fit with my theory better than your theory”. Do you think it does? Or do you think it supports my theory equally well?
I’m asking because at the moment I’m behind my firm’s firewall and cannot access those links; if you care to discuss it further, I could comment this evening.
I’ll just add that I have the impression you’re taking things a little too personally. I don’t know why you care to such a degree, but pinpointing the exact source of disagreement seems like a very good exercise in Bayesian rationality; we could even promote it to a proper discussion post.
If that’s your definition of “the stereotype” then I approximately fit (though I wouldn’t quite paraphrase my response as “I know how to save”; it’s a matter of preferences as much as of knowledge, and about spending as much as about saving).
The stereotype I was suggesting I may not fit is that of “people who, in fact, if they got a sudden windfall would blow it all and end up no better off”.
now I have a clearer picture of your model
Except that you are (not, I think, for the first time) assuming my model is simpler than it actually is. I don’t claim that there are “no threshold phenomena whatsoever”. I think it’s possible that there are some. I don’t know just what (if any) there are, for me or for others. (My model is probabilistic.) I have not, looking back at my own financial behaviour, observed any dramatic threshold phenomena; it is of course possible that there are some but I don’t see good grounds for thinking there are.
the main point, which was “How well do you think you know the inside mechanism of your mind”
As I already said, my predictions about the behaviour of hypothetical-me are not based on thinking I know the inside mechanism of my mind well, so I’m not sure why that should be the main point. I did, however, say ‴I’m not just looking at my model of the insides of my mind and saying “I can see I wouldn’t do anything so stupid” (I certainly don’t trust my introspection that much)‴. I’m sorry if I made you sad, but I don’t quite understand how I did.
That would be bad Bayesian probability.
No. It would be bad Bayesian probability if I’d said “That seems to fit with my theory; therefore yours is wrong”. I did not say that. I wasn’t trying to make this a fight between your theory and mine; I was trying to assess my theory. I’m not even sure what your theory about lottery tickets, as such, is. I think the fact that people with larger lottery wins ended up bankrupt more often than people with smaller ones is probably about equally good evidence for “lottery winners tend to be heavy lottery players, who tend to be particularly incompetent” as for “there’s a threshold effect whereby gains above a certain size cause particularly stupid behaviour”.
you’re taking things a little bit too personally [...] I don’t know why you care to such a degree
Well, from where I’m sitting it looks as if we’ve had multiple iterations of the following pattern:
you tell me, with what seems to me like excessive confidence, that I probably have Bad Characteristic X because most people have Bad Characteristic X
I give you what seems to me to be evidence that I don’t
you reiterate your opinion that I probably have Bad Characteristic X because most people do.
The particular values of X we’ve had include
financial incompetence;
predicting one’s own behaviour by naive introspection and neglecting the outside view;
overconfidence in one’s predictions.
In each case, for the avoidance of doubt, I agree that most people have Bad Characteristic X, and I agree that in the absence of other information it’s reasonable to guess that any given person probably has it too. However, it seems to me that
telling someone they probably have X is kinda rude, though possibly justified (not always; consider, e.g., the case where X is “not knowing anything about cognitive biases” and all you know about the person you’re talking to is that they’re a longstanding LW contributor)
continuing to do so when they’ve given you what they consider to be contrary evidence, and not offering a good counterargument, is very rude and probably severely unjustified.
So you’ve made a number of personal claims about me, albeit probabilistic ones; they have all been negative ones; they have all (as it seems to me) been under-supported by evidence; when I have offered contrary evidence you have largely ignored it.
It also seems to me that on each occasion when you’ve attempted to describe my own position, what you have actually described is a simplified version which happens to be clearly inferior to my actual position as I’ve tried to state it. For instance:
I say: I see … enough evidence that my spending habits are atypical of the population. You say: you, using the inside view, say “using my model of my mind, I say that the extension of the linear model in the far region is still reliable”.
I say, in response to your hypothesis about breakpoints: Always possible (for people in general, for people-like-me, for me-in-particular). and: I might be wrong in my model …; maybe it’s a matter of those “breakpoints” rather than of a habit of spending until one runs out. You say: you think that there are no threshold phenomena whatsoever, not only for you, but for the general population.
So. From where I’m sitting, it looks as if you have made a bunch of negative claims about my thinking, not updated in any way in the face of disagreement (and, where appropriate, contrary evidence), and repeatedly offered purported summaries of my opinions that don’t match what I’ve actually said and are clearly stupider than what I’ve actually said.
Now, of course the negative claims began with statistical negative claims about the population at large, and I agree with those claims. But the starting point was my statement that “I would do X” and you chose to continue applying those negative claims to me personally.
I would much prefer a less personalized discussion. I do not enjoy defending myself; it feels too much like boasting.
[EDITED to fix a formatting screwup.]
[EDITED to add: Hi, downvoter! Would you care to tell me what you didn’t like so that I can, if appropriate, do less of it in future? Thanks.]
In the form of religious stories or perhaps advice from a religious leader. I should’ve been more specific than “life situations”: my guess is that religious people acquire from their religion ways of dealing with, for example, grief and that atheists may not have cached any such procedures, so they have to figure out how to deal with things like grief.
Finding an appropriate cached procedure for grief for atheists may not be a bad idea. Right after a family death, say, is a bad time to have to work out how you should react to a loved one being suddenly gone forever.
Of course that is not necessarily winning, insofar as it promotes failing to take responsibility for working out solutions that are well fitted to your particular situation (and the associated failure mode where, if you can’t find a cached entry at all, you just revert to form and either act helpless or act out). The best I’m willing to regard that as is ‘maintaining the status quo’ (as with having a lifejacket vs. being able to swim).
I would regard it as unambiguously winning if they had a good database AND succeeded at getting people to take responsibility for developing real problem solving skills. (I think the database would have to be much smaller in this case—consider something like the GROW Blue Book as an example of such a reasonably-sized database, but note that GROW groups are much smaller (max 15 people) than church congregations)
Although, isn’t the question not about difficulty, but about whether you really believe you should have, and deserve to have, a good life? I mean, if the responsibility is yours, then it’s yours, no matter whether it’s the responsibility to move a wheelbarrow full of pebbles or to move every stone in the Pyramids. And your life can’t really genuinely improve until you accept that responsibility, no matter what hell you have to go through to become such a person, and no matter how comfortable/‘workable’ your current situation may seem.
(of course, there’s a separate argument to be made here, that ‘people don’t really believe they should have, or deserve to have a good life’. And I would agree that 99% or more don’t. But I think believing in people’s need and ability to take responsibility for their life, is part of believing that they can HAVE a good life, or that they are even worthwhile at all.)
In case this seems like it’s wandered off topic, the general problem of religion I’m trying to point at is ‘disabling help’: having solutions and support too readily/abundantly available discourages people from owning their own life and their own problems, and from developing skills that are necessary to a good life. They probably won’t become great at thinking, but they could become better if, and only if, circumstances pressed them to.
If I take the outside view and account for the fact that thirty-something percent of people, including a lot of really smart people, believe in Christianity,
Yes, but there are highly probable alternate explanations (other than the truth of Christianity) for their belief in Christianity, so the fact of their belief is very weak evidence for Christianity. If an alarm goes off whenever there’s an earthquake, but also whenever a car drives by outside, then the alarm going off is very weak (practically negligible) evidence for an earthquake. More technically, when you are trying to evaluate the extent to which E is good evidence for H (and consequently, how much you should update your belief in H based on E), you want to look not at the likelihood Pr(E|H), but at the likelihood ratio Pr(E|H)/Pr(E|~H). And the likelihood ratio in this case, I submit, is not much more than 1, which means that updating on the evidence shouldn’t move your prior odds all that much.
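The odds-form update described above can be made concrete with a minimal sketch in Python. The alarm probabilities and the prior below are made-up illustrative numbers, not estimates of anything in this thread:

```python
def posterior_odds(prior_odds, p_e_given_h, p_e_given_not_h):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    likelihood_ratio = p_e_given_h / p_e_given_not_h
    return prior_odds * likelihood_ratio

# Earthquake alarm: the alarm rings for almost every earthquake,
# but also for most passing cars, so the likelihood ratio is near 1.
prior = 0.001 / 0.999  # illustrative prior odds of an earthquake
posterior = posterior_odds(prior, p_e_given_h=0.99, p_e_given_not_h=0.9)
print(posterior / prior)  # likelihood ratio of about 1.1: very weak evidence
```

The point is that no matter how reliably the alarm responds to earthquakes (high Pr(E|H)), it barely moves your beliefs if it also responds to almost everything else (high Pr(E|~H)).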
and that at least personally I have radically changed my worldview a whole bunch of times,
This seems irrelevant to the truth of Christianity.
then it seems like I should assign at least a 5% or so probability to Christianity being true.
Yes, but there are highly probable alternate explanations (other than the truth of Christianity) for their belief in Christianity, so the fact of their belief is very weak evidence for Christianity.
Of course, there are also perspective-relative “highly probable” alternate explanations than sound reasoning for non-Christians’ belief in non-Christianity. (I chose that framing precisely to make a point about what hypothesis privilege feels like.) E.g., to make the contrast in perspectives stark, demonic manipulation of intellectual and political currents. E.g., consider that “there are no transhumanly intelligent entities in our environment” would likely be a notion that usefully-modelable-as-malevolent transhumanly intelligent entities would promote. Also “human minds are prone to see agency when there is in fact none, therefore no perception of agency can provide evidence of (non-human) agency” would be a useful idea for (Christian-)hypothetical demons to promote.
Of course, from our side that perspective looks quite discountable because it reminds us of countless cases of humans seeing conspiracies where it’s in fact quite demonstrable that no such conspiracy could have existed; but then, it’s hard to say what the relevance of that is if there is in fact strong but incommunicable evidence of supernaturalism—an abundance of demonstrably wrong conspiracy theorists is another thing that the aforementioned hypothetical supernatural processes would like to provoke and to cultivate. “The concept of ‘evidence’ had something of a different meaning, when you were dealing with someone who had declared themselves to play the game at ‘one level higher than you’.” — HPMOR. At roughly this point I think the arena becomes a social-epistemic quagmire, beyond the capabilities of even the best of Lesswrong to avoid getting something-like-mind-killed about.
consider that “there are no transhumanly intelligent entities in our environment” would likely be a notion that usefully-modelable-as-malevolent transhumanly intelligent entities would promote
How do you account for the other two thirds of people who don’t believe in Christianity and commonly believe things directly contradictory to it? Insofar as every religion was once (when it started) vastly outnumbered by the others, you can’t use population at any given point in history as evidence that a particular religion is likely to be true, since the same exact metric would condemn you to hell at many points in the past. There are several problems with Pascal’s Wager, but the biggest to me is that it’s impossible to choose WHICH Pascal’s Wager to make. You can attempt to conform to all non-contradictory religious rules extant, but that still leaves the problem of choosing which contradictory commandments to obey, as well as the problem of what exactly god even wants from you, whether it’s belief or simple ritual. The proliferation of equally plausible religions is to me very strong evidence that no one of them is likely to be true, putting the odds of “christianity” being true at lower than even 1 percent, and the odds that any specific sect of christianity is true even lower.
And more worryingly, with the Christians I have spoken to, those who are more consistent in their beliefs and actually update the rest of their beliefs on them (and don’t just have “Christianity” as a little disconnected bubble in their beliefs) are overwhelmingly in this category, and those who believe that most Christians will go to heaven usually haven’t thought very hard about the issue.
C.S. Lewis thought most everyone was going to Heaven and thought very hard about the issue. (The Great Divorce is brief, engagingly written, an allegory of near-universalism, and a nice typology of some sins.)
I would also add that there are Christians who believe that everyone goes to heaven, even atheists.
I spoke with a protestant theology student in Berlin who assured me that the belief is quite popular among his fellow students.
He also had no spiritual experiences whatsoever ;)
Well, correct me if I’m wrong, but most of the other popular religions don’t really believe in eternal paradise/damnation, so Pascal’s Wager applies just as much to, say, Christianity vs. Hinduism as it does to Christianity vs. atheism. Jews, Buddhists, and Hindus don’t believe in hell, as far as I can tell, but Muslims do. So if I were going to buy into Pascal’s Wager, I think I would read apologetics of both Christianity and Islam, figure out which one seemed more likely, and go with that one. Even if you found equal probability estimates for both, flipping a coin and picking one would still be better than going with atheism, right?
The proliferation of equally plausible religions is to me very strong evidence that no one of them is likely to be true,
Why? Couldn’t it be something like, Religion A is correct, Religion B almost gets it and is getting at the same essential truth, but is wrong in a few ways, Religion C is an outdated version of Religion A that failed to update on new information, Religion D is an altered imitation of Religion A that only exists for political reasons, etc.
Good post though, and you sort of half-convinced me that there are flaws in Pascal’s Wager, but I’m still not so sure.
You’re combining two reasons for believing: Pascal’s Wager, and popularity (that many people already believe). That way, you try to avoid a pure Pascal’s Mugging, but if the mugger can claim to have successfully mugged many people in the past, then you’ll submit to the mugging. You’ll believe in a religion if it has Heaven and Hell in it, but only if it’s also popular enough.
You’re updating on the evidence that many people believe in a religion, but it’s unclear what it’s evidence for. How did most people come to believe in their religion? They can’t have followed your decision procedure, because it only tells you to believe in popular religions, and every religion historically started out small and unpopular.
So for your argument to work, you must believe that the truth of a religion is a strong positive cause of people believing in it. (It can’t be overwhelmingly strong, though, since no religion has or has had a large majority of the world believing in it.)
But if people can somehow detect or deduce the truth of a religion on their own—and moreover, billions of people can do so (in the case of the biggest religions) - then you should be able to do so as well.
Therefore I suggest you try to decide on the truth of a religion directly, the way those other people did. Pascal’s Wager can at most bias you in favour of religions with Hell in them, but you still need some unrelated evidence for their truth, or else you fall prey to Pascal’s Mugging.
Even if you limit yourself to eternal damnation promising religions, you still need to decide which brand of Christianity/Islam is true.
If religion A is true, that implies that religion A’s god exists and acts in a way consistent with the tenets of that religion. This implies that all of humanity should have strong and very believable evidence for Religion A over all other religions. But we have a large amount of religions that describe god and gods acting in very different ways. This is either evidence that all the religions are relatively false, that god is inconsistent, or that we have multiple gods who are of course free to contradict one another. There’s a lot of evidence that religions sprout from other religions and you could semi-plausibly argue that there is a proto-religion that all modern ones are versions or corruptions of, but this doesn’t actually work to select Christianity, because we have strong evidence that many religions predate Christianity, including some of which that it appears to have borrowed myths from.
Another problem with Pascal’s Wager: claims about eternal rewards or punishments are not as difficult to make as they would be to make plausible. Basically: any given string of words said by a person is not plausible evidence for infinite anything, because it’s far easier to SAY infinity than to provide any other kind of evidence. This means you can’t afford to multiply utility by infinity, because at any point someone can make any claim involving infinity and fuck up all your math.
Jews, Buddhists, and Hindus don’t believe in hell, as far as I can tell.
I can’t speak for the other ones, but Buddhists at least don’t have a “hell” that non-believers go to when they die, because Buddhists already believe that life is an eternal cycle of infinite suffering that can only be escaped by following the tenets of their religion. Thus, rather than going to hell, non-believers just get reincarnated back into our current world, which Buddhism sees as being like unto hell.
I was thinking about the Christian emphasis on forgiveness, but the Orthodox Jewish idea of having a high proportion of one’s life affected by religious rules would also count.
Judging something as ‘good’ depends on your ethical framework. What framework do you have in mind when you ask if any religions offer good advice? After all, every religion offers good advice according to its own ethics.
Going by broadly humanistic, atheistic ethics, what is good about having a high proportion of one’s life be affected by religious rules? (Whether the Orthodox Jewish rules, or in general.)
If the higher power cared, don’t you think such a power would advertise more effectively? Religious wars seem like pointless suffering if any sufficient spiritual belief saves the soul.
If the higher power cared about your well being, it would just “save” everyone regardless of belief or other attributes. It would also intervene to create heaven on earth and populate the whole universe with happy people.
Remember that the phrase “save your soul” refers to saving it from the eternal torture visited by that higher power.
I should think that this is more likely to indicate that nobody, including really smart people, and including you, actually knows what’s what, and that trying to chase after all these Pascal’s muggings is pointless, because you will always run into another one that seems convincing, from someone else smart.
There’s a bit of a problem with the claim that nobody knows what’s what: the usual procedure when someone lacks knowledge is to assign an ignorance prior. The standard methods for generating ignorance priors, usually some formulation of Occam’s razor, assign very low probability to claims as complex as common religions.
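To make the point above concrete, here is a toy sketch (my own illustration, not from the thread) of an Occam-style ignorance prior, where a hypothesis built from k independent claims gets prior weight proportional to 2**(-k). The hypothesis names and complexity counts are invented purely for illustration.

```python
def occam_prior(claim_counts):
    """Assign each hypothesis a prior proportional to 2**(-number of claims)."""
    weights = {h: 2.0 ** -k for h, k in claim_counts.items()}
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

# Hypothetical complexity counts, chosen only for illustration.
hypotheses = {
    "no supernatural agents": 1,
    "a creator exists": 3,
    "a creator exists, is tripartite, incarnated, and judges souls": 10,
}
priors = occam_prior(hypotheses)
for h, p in priors.items():
    print(f"{h}: {p:.4f}")
```

Under this crude scheme the conjunction-heavy hypothesis starts out with a prior three orders of magnitude below the simple one, which is the sense in which common religions are penalized before any evidence is considered.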
People being religious is some evidence that religion is true. Aside from drethelin’s point about multiple contradictory religions, religions as actually practiced make predictions. It appears that those predictions do not stand up to rigorous examination.
To pick an easy example, I don’t think anyone thinks a Catholic priest can turn wine into blood on command. And if an organized religion does not make predictions that could be wrong, why should you change your behavior based on that organization’s recommendations?
I don’t think anyone thinks a Catholic priest can turn wine into blood on command.
Neither do Catholics think their priests turn wine into actual blood. After all, they’re able to see and taste it as wine afterwards! Instead they’re dualists: they believe the substance of the wine is replaced by that of blood, while the accidents (its appearance and taste) remain. And they think this makes testable predictions, because they think they have dualistic non-material souls which can then somehow experience the altered substance of the wine-blood.
Anyway, Catholicism makes lots of other predictions about the ordinary material world, which of course don’t come true, and so it’s more productive to focus on those. For instance, the efficacy of prayer, miraculous healing, and the power of sacred relics and places.
I really don’t think that the vast majority of Catholics bother forming a position regarding transubstantiation. One of the major benefits of joining a religion is letting other people think for you.
Aside from drethelin’s point about multiple contradictory religions, religions as actually practiced make predictions. It appears that those predictions do not stand up to rigorous examination.
I don’t think it’s fair to say that none of the practical predictions of religion hold up to rigorous examination.
In Willpower, Roy Baumeister describes how organisations like Alcoholics Anonymous can effectively use religious ideas to help people quit alcohol.
Buddhist meditation is also a practice that has a lot of backing from rigorous examination.
On LessWrong, Luke Muehlhauser wrote that Scientology 101 was one of the best learning experiences of his life, notwithstanding the dangers that come from the group.
Various religions do advocate practices that have concrete real-world effects. Focusing on whether or not the wine really gets turned into blood misses the point if you want to weigh the practical benefits and practical disadvantages of following a religion.
Alcoholics Anonymous is famously ineffective, but separate from that: what’s your point here? Being a Christian is not the same as subjecting Christian practices to rigorous examination to test for effectiveness. The question the original asker asked was not “Does religion have any worth?” but “Should I become a practicing Christian to avoid burning in hell for eternity?”
If literally the only evidence you had was that the overwhelming majority of people professed to believe in religion, then you should update in favor of religion being true.
Your belief that people are irrational relies on additional evidence of the type that I referenced. It is not contained in the fact of overwhelming belief.
Like how Knox’s roommate’s death by murder is evidence that Knox committed the murder. And that evidence is overwhelmed by other evidence that suggests Knox is not the murderer.
Whether people believing in a hypothesis is evidence for the hypothesis depends on the hypothesis. If the hypothesis does not contain a claim that there is some mechanism by which people would come to believe in the hypothesis, then it is not evidence. For instance, if people believe in a tea kettle orbiting the sun, their belief is not evidence for it being true, because there is no mechanism by which a tea kettle orbiting the sun might cause people to believe that there is a tea kettle orbiting the sun. In fact, there are some hypotheses for which belief is evidence against. For instance, if someone believes in a conspiracy theory, that’s evidence against the conspiracy theory; in a world in which a set of events X occurs, but no conspiracy is behind it, people would be free to develop conspiracy theories regarding X. But in a world in which X occurs, and a conspiracy is behind it, it is likely that the conspiracy will interfere with the formation of any conspiracy theory.
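The mechanism argument above has a simple likelihood-ratio form, sketched here with invented numbers: observing belief shifts the log-odds of a hypothesis H by log(P(belief | H) / P(belief | not H)), which is zero when belief arises at the same rate either way, and negative when H (e.g. a real conspiracy) suppresses its own discovery.

```python
import math

def log_odds_shift(p_belief_given_h, p_belief_given_not_h):
    """Change in log-odds of H after observing that people believe H."""
    return math.log(p_belief_given_h / p_belief_given_not_h)

# Kettle case: belief arises at the same (low, invented) rate whether or
# not the kettle exists, so observing belief moves the odds not at all.
print(log_odds_shift(0.001, 0.001))   # 0.0

# Conspiracy case: a real conspiracy suppresses theories about itself,
# so belief is less likely if H is true -- a negative shift.
print(log_odds_shift(0.01, 0.05))     # negative
```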
Whether people believing in a hypothesis is evidence for the hypothesis depends on the hypothesis. If the hypothesis does not contain a claim that there is some mechanism by which people would come to believe in the hypothesis, then it is not evidence. For instance, if people believe in a tea kettle orbiting the sun, their belief is not evidence for it being true, because there is no mechanism by which a tea kettle orbiting the sun might cause people to believe that there is a tea kettle orbiting the sun.
Bad example. In fact, the example you give is sufficient to require that your contention be modified (or rejected as is).
While it is not the case that there is a tea kettle orbiting the sun (except on earth) there is a mechanism by which people can assign various degrees of probability to that hypothesis, including probabilities high enough to constitute ‘belief’. This is the case even if the existence of such a kettle is assumed to have not caused the kettle belief. Instead, if observations about how physics works and our apparent place within it were such that kettles are highly likely to exist orbiting suns like ours then I would believe that there is a kettle orbiting the sun.
It so happens that it is crazy to believe in space kettles that we haven’t seen. This isn’t because we haven’t seen them—we wouldn’t expect to see them either way. This is because they (probably) don’t exist (based on all our observations of physics). If our experiments suggested a different (perhaps less reducible) physics then it would be correct to believe in space kettles despite there being no way for the space kettle to have caused the belief.
If literally the only evidence you had was that the overwhelming majority of people professed to believe in religion, then you should update in favor of religion being true.
Yes, but this is different from a generic “People being religious is some evidence that religion is true.”
P(religion is true | overwhelming professing of belief) > P(religion is true | absence of overwhelming professing of belief).
In other words, I think my two formulations are isomorphic. If we define evidence such that absence of evidence is evidence of absence, then one implication is that it is possible for some evidence to exist in favor of false propositions.
it is possible for some evidence to exist in favor of false propositions.
This is possible with any definition of evidence. Every bit of information you receive makes you discard some theories which have been disproven, so it’s evidence in favour of each of the ones you don’t discard. But only one of those is fully true; the others are false.
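That last point can be shown with a one-line Bayes update; the hypothesis names and likelihoods below are invented for illustration. A single observation raises the posterior of a false hypothesis whenever that hypothesis predicted the datum better than the average of its rivals.

```python
def bayes_update(priors, likelihoods):
    """Posterior over hypotheses after one observation."""
    joint = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(joint.values())
    return {h: p / total for h, p in joint.items()}

# Suppose H_true is the actual state of the world, but H_false also
# predicts the observed datum fairly well, and H_other does not.
priors = {"H_true": 1/3, "H_false": 1/3, "H_other": 1/3}
likelihoods = {"H_true": 0.9, "H_false": 0.6, "H_other": 0.01}

posterior = bayes_update(priors, likelihoods)
# H_false's probability rises from 1/3 to about 0.40 even though it is
# false: the observation is evidence *for* it relative to H_other.
print(posterior)
```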
I’m smart. They’re not (IQ test, SAT, or a million other evidences). Even though high intelligence doesn’t at all cause rationality, in my experience judging others it’s so correlated as to nearly be a prerequisite.
I care a lot (but not too much) about consistency under the best / most rational reflection I’m capable of. Whenever this would conflict with people liking me, I know how to keep a secret. They don’t make such strong claims of valuing rationality. Maybe others are secretly rational, but I doubt it. In the circles I move in, nobody is trying to conceal intellect. If you could be fun, nice, AND seem smart, you would do it. Those who can’t seem smart, aren’t.
I care a lot (but not too much) about consistency under the best / most rational reflection I’m capable of.
That value doesn’t directly lead to having a belief system where individual beliefs can be used to make accurate predictions.
For most practical purposes the forward–backward algorithm produces better models of the world than Viterbi.
Viterbi optimizes for overall consistency, while the forward–backward algorithm looks at local states.
If you have uncertainty in the data about which you reason, the world view with the most consistency is likely flawed.
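The contrast can be made concrete with a tiny hidden Markov model; this is a minimal sketch whose states, transitions, and emissions are all invented for illustration. Posterior (forward–backward) decoding picks the most probable state at each step separately, and here that yields a sequence containing a zero-probability transition, while Viterbi returns one globally consistent path.

```python
# States 0 and 1 can never transition directly into each other;
# state 2 is the only bridge between them.
PI = [0.45, 0.35, 0.20]
T = [[0.9, 0.0, 0.1],
     [0.0, 0.9, 0.1],
     [0.45, 0.45, 0.1]]
E = [{"x": 0.9, "y": 0.1},   # state 0 prefers to emit "x"
     {"x": 0.1, "y": 0.9},   # state 1 prefers to emit "y"
     {"x": 0.5, "y": 0.5}]   # state 2 is indifferent

def posterior_decode(obs):
    """Forward-backward: most probable state at each time step separately."""
    n, S = len(obs), len(PI)
    fwd = [[PI[s] * E[s][obs[0]] for s in range(S)]]
    for t in range(1, n):
        fwd.append([E[s][obs[t]] * sum(fwd[t-1][r] * T[r][s] for r in range(S))
                    for s in range(S)])
    bwd = [[1.0] * S for _ in range(n)]
    for t in range(n - 2, -1, -1):
        bwd[t] = [sum(T[s][r] * E[r][obs[t+1]] * bwd[t+1][r] for r in range(S))
                  for s in range(S)]
    return [max(range(S), key=lambda s: fwd[t][s] * bwd[t][s]) for t in range(n)]

def viterbi(obs):
    """Single most probable complete state path."""
    n, S = len(obs), len(PI)
    delta = [[PI[s] * E[s][obs[0]] for s in range(S)]]
    back = []
    for t in range(1, n):
        row, ptr = [], []
        for s in range(S):
            best = max(range(S), key=lambda r: delta[t-1][r] * T[r][s])
            ptr.append(best)
            row.append(delta[t-1][best] * T[best][s] * E[s][obs[t]])
        delta.append(row)
        back.append(ptr)
    path = [max(range(S), key=lambda s: delta[-1][s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

obs = ["x", "y"]
print(posterior_decode(obs))  # [0, 1]: locally best, but 0 -> 1 has probability 0
print(viterbi(obs))           # [2, 1]: a globally consistent path
```

Neither answer is wrong as such: posterior decoding minimizes the expected number of per-position errors, while Viterbi maximizes the probability of the whole path, which is the consistency-versus-local-accuracy trade-off described above.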
One example is heat development in some forms of meditation. The fact that our body can develop heat through thermogenin without any shivering is a relatively new biochemical discovery.
There were plenty of self-professed rationalists who didn’t believe in any heat development in meditation because the people meditating don’t shiver.
The search for consistency leads in examples like that to denying important empirical evidence.
It takes a certain humility to accept that there is heat development during meditation without knowing a mechanism that can account for it.
People who want to signal socially that they know it all don’t have the epistemic humility that allows for the insight that there are important things they just don’t understand.
To quote Nassim Taleb:
“It takes extraordinary wisdom and self control to accept that many things have a logic we do not understand that is smarter than our own.”
I’m pretty humble about what I know. That said, it sometimes pays to not undersell (when others are confidently wrong, and there’s no time to explain why, for example).
Interesting analogy: “best path / MAP (Viterbi)” is to “integral over all paths / expectation” as “consistent” is to “some other type of thinking / not consistent”? I don’t see what “integral over many possibilities” has to do with consistency, except that it’s sometimes the correct (but more expensive) thing to do.
I’m pretty humble about what I know. That said, it sometimes pays to not undersell (when others are confidently wrong, and there’s no time to explain why, for example).
I’m not so much talking about humility that you communicate to other people but about actually thinking that the other person might be right.
I don’t see what “integral over many possibilities” has to do with consistency, except that it’s sometimes the correct (but more expensive) thing to do.
There are cases where the forward-backward algorithm gives you a path that’s impossible to happen. I would call those paths inconsistent.
That’s one of the lessons I learned in bioinformatics. Having an algorithm that is robust to error is often much better than just picking the explanation that is most likely to explain the data.
A map of the world that allows for some inconsistency is more robust than one where one error leads to a lot of bad updates to make the map consistent with the error.
I understand forward-backward (in general) pretty well and am not sure what application you’re thinking of or what you mean by “a path that’s impossible to happen”. Anyway, yes, I agree that you shouldn’t usually put 0 plausibility on views other than your current best guess.
Qiaochu_Yuan has it right—the vast majority of Christians do not constitute additional evidence.
Moreover, the Bible (Jewish, Catholic, or Protestant) describes God as an abusive jerk. Everything we know about abusive jerks says you should get as far away from him as possible. Remember that ‘something like the God of the Bible exists’ is a simpler hypothesis than Pascal’s Christianity, and in fact is true in most multiverse theories. (I hate that name, by the way. Can’t we replace it with ‘macrocosm’?)
More generally, if for some odd reason you find yourself entertaining the idea of miraculous powers, you need to compare at least two hypotheses:
*Reality allows these powers to exist, AND they already exist, AND your actions can affect whether these powers send you to Heaven or Hell (where “Heaven” is definitely better and not at all like spending eternity with a human-like sadist capable of creating Hell), AND faith in a God such as humans have imagined will send you to Heaven, AND lack of this already-pretty-specific faith will send you to Hell.
*Reality allows these powers to exist, AND humans can affect them somehow, AND religion would interfere with exploiting them effectively.
Is people believing in Christianity significantly more likely under the hypothesis that it is true, as opposed to under the hypothesis that it is false? Once one person believes in Christianity, does more people believing in Christianity have significant further marginal evidentiary value? Does other people believing in Christianity indicate that they have knowledge that you don’t have?
Is people believing in Christianity significantly more likely under the hypothesis that it is true, as opposed to under the hypothesis that it is false?
Yes.
Once one person believes in Christianity, does more people believing in Christianity have significant further marginal evidentiary value?
Yes.
Does other people believing in Christianity indicate that they have knowledge that you don’t have?
I agree completely. It’s impossible for me to imagine a scenario where a marginal believer is negative evidence for the belief; at best you can explain the belief away (“they’re just conforming” lets you approach zero slope once it’s a majority religion with a death penalty for apostates).
I have found this argument compelling, especially the portion about assigning a probability to the truth of Christian belief. Even if we have arguments that seem to demonstrate why it is that radically smart people believe a religion without recourse to there being good arguments for the religion, we haven’t explained why these people instead think there are good arguments. Sure, you don’t think they’re good arguments, but they do, and they’re rational agents as well.
You could say, “well they’re not rational agents, that was the criticism in the first place,” but we have the same problem that they do think they themselves are rational agents. What level do we have to approach that allows you to make a claim about how your methods for constructing probabilities trump theirs? The highest level is just, “you’re both human,” which makes valid the point that to some extent you should listen to the opinions of others. The next level “you’re both intelligent humans aimed at the production of true beliefs” is far stronger, and true in this case.
Where the Wager breaks down for me is that much more is required to demonstrate that if Christianity is true, God sends those who fail to produce Christian belief to Hell. Of course, this could be subject to the argument that many smart people also believe this corollary, but it remains true that it is an additional jump, and that many fewer Christians take it than who are simply Christians.
What takes the cake for me is asking what a good God would value. It’s a coy response for the atheist to say that a good God would understand the reasons one has for being an atheist, and that it’s his fault that the evidence doesn’t get there. The form of this argument works for me, with a nuance: Nobody is honest, and nobody deserves, as far as I can tell, any more or less pain in eternity for something so complex as forming the right belief about something so complicated. God must be able to uncrack the free will enigma and decide what’s truly important about people’s actions, and somehow it doesn’t seem that the relevant morality-stuff is perfectly predicted by religious affiliation. This doesn’t suggest that God might not have other good reasons to send people to Hell, but it seems hard to tease those out of yourself to a sufficient extent to start worrying beyond worrying about how much good you want to do in general. If God punishes people for not being good enough, the standard method of reducing free will to remarkably low levels makes it hard to see what morality-stuff looks like. Whether or not it exists, you have the ability to change your actions by becoming more honest, more loving, and hence possibly more likely to be affiliated with the correct religion. But it seems horrible for God to make it a part of the game for you to be worrying about whether or not you go to Hell for reasons other than honesty or love. Worry about honesty and love, and don’t worry about where that leads.
In short, maybe Hell is one outcome of the decision game of life. But very likely God wrote it so that one’s acceptance of Pascal’s wager has no impact on the outcome. Sure, maybe one’s acceptance of Christianity does, but there’s nothing you can do about it, and if God is good, then this is also good.
People are not rational agents, and people do not believe in religions on the basis of “good arguments.” Most people are the same religion as their parents.
As often noted, most nonreligious parents have nonreligious children as well. Does that mean that people do not disbelieve religions on the basis of good arguments?
Your comment is subject to the same criticism we’re discussing. If any given issue has been raised, then some smart religious person is aware of it and believes anyway.
I think most people do not disbelieve religions on the basis of good arguments either. I’m most likely an atheist because my parents are. The point is that you can’t treat majority beliefs as the aggregate beliefs of groups of rational agents. It doesn’t matter if for any random “good argument” some believer or nonbeliever has heard it and not been swayed; you should not expect the majority of people’s beliefs on things that do not directly impinge on their lives to be reliably correlated with anything other than the beliefs of those around them.
The above musings do not hinge on the ratio of people in a group believing things for the right reasons, only that some portion of them are.
Your consideration helps us assign probabilities for complex beliefs, but it doesn’t help us improve them. Upon discovering that your beliefs correlate with those of your parents, you can introduce uncertainty in your current assignments, but you go about improving them by thinking about good arguments. And only good arguments.
The thrust of the original comment here is that discovering which arguments are good is not straightforward. You can only go so deep into the threads of argumentation until you start scraping on your own biases and incapacities. Your logic is not magic, and neither are intuitions nor others’ beliefs. But all of them are heuristics that you can account for when assigning probabilities. The very fact that there exist others who are capable of digging as deep into the logic and being as skeptical of their intuitions, and who believe differently than you, is evidence for their opinion. It matters little whether every person of that opinion is like that, only that the best are, because those are the only people you’re paying attention to.
[ETA: Retracted because I don’t have the aversion-defeating energy necessary to polish this, but:]
5% or so probability to Christianity being true
To clarify, presumably “true” here doesn’t mean all or even most of the claims of Christianity are true, just that there are some decision policies emphasized by Christianity that are plausible enough that Pascal’s wager can be justifiably applied to amplify their salience.
I can see two different groups of claims that both seem central to Christian moral (i.e. decision-policy-relevant) philosophy as I understand it, which in my mind I would keep separate if at all possible but that in Christian philosophy and dogma are very much mixed together:
The first group of claims is in some ways more practical and, to a LessWronger, more objectionable. It reasons from various allegedly supernatural phenomena to the conclusion that unless a human acts in a way seemingly concordant with the expressed preferences of the origins of those supernatural phenomena, that human will be risking some grave, essentially game theoretic consequence as well as some chance of being in moral error, even if the morality of the prescriptions isn’t subjectively verifiable. Moral error, that is, because disregarding the advice, threats, requests, policies &c. of agents seemingly vastly more intelligent than you is a failure mode, and furthermore it’s a failure mode that seemingly justifies retrospective condemnatory judgments of the form “you had all this evidence handed to you by a transhumanly intelligent entity and you chose to ignore it?” even if in some fundamental sense those judgments aren’t themselves “moral”. An important note: saying “supernaturalism is silly, therefore I don’t even have to accept the premises of that whole line of reasoning” runs into some serious Aumann problems, much more serious than can be casually cast aside, especially if you have a Pascalian argument ready to pounce.
The second group of claims is more philosophical and meta-ethical, and is emphasized more in intellectually advanced forms of Christianity, e.g. Scholasticism. One take on the main idea is that there is something like an eternal moral-esque standard etched into the laws of decision theoretic logic any deviations from which will result in pointless self-defeat. You will sometimes see it claimed that it isn’t that God is punishing you as such, it’s that you have knowingly chosen to distance yourself from the moral law and have thus brought ruin upon yourself. To some extent I think it’s merely a difference of framing born of Christianity’s attempts to gain resonance with different parts of default human psychology, i.e. something like third party game theoretic punishment-aversion/credit-seeking on one hand and first person decision theoretic regret-minimization on the other. [This branch needs a lot more fleshing out, but I’m too tired to continue.]
But note that in early Christian writings especially and in relatively modern Christian polemic, you’ll get a mess of moralism founded on insight into the nature of human psychology, theological speculation, supernatural evidence, appeals to intuitive Aumancy, et cetera. [Too tired to integrate this line of thought into the broader structure of my comment.]
If you take the outside view, and account for the fact that sixty-something percent of people don’t believe in Christianity, it seems like (assuming you just learned that fact) you should update (a bit) towards Christianity not being true.
If you did know the percentages already, they should already be integrated into your priors, together with everything else you know about the subject.
Note that the majority of numbers are not prime. But if you write a computer program (assuming you’re quite good at it) and it tells you 11 is prime, you should probably assign a high probability to it being prime, even though the program might have a bug.
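The primality-program analogy is just a Bayes calculation; here is a hedged back-of-envelope version of it, with both the base rate of primes and the bug probability invented for illustration. Even a possibly-buggy tester shifts the probability that 11 is prime well above the base rate.

```python
p_prime = 0.3        # rough base rate of primes among small numbers (assumed)
p_bug = 0.05         # assumed chance the program answers incorrectly

# P(says "prime" | prime) = 1 - p_bug; P(says "prime" | composite) = p_bug
numerator = p_prime * (1 - p_bug)
evidence = numerator + (1 - p_prime) * p_bug
posterior = numerator / evidence
print(f"P(prime | program says prime) = {posterior:.3f}")  # about 0.89
```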
If I take the outside view and account for the fact that thirty-something percent of people, including a lot of really smart people, believe in Christianity, and that at least personally I have radically changed my worldview a whole bunch of times, then it seems like I should assign at least a 5% or so probability to Christianity being true. How, therefore, does Pascal’s Wager not apply to me? Even if we make it simpler by taking away the infinite utilities and merely treating Heaven as ten thousand years or so of the same level of happiness as the happiest day in my life, and treating Hell as ten thousand years or so of the same level of unhappiness as the unhappiest day in my life, the argument seems like it should still apply.
My admittedly very cynical point of view is to assume that, to a first-order approximation, most people don’t have beliefs in the sense that LW uses the word. People just say words, mostly words that they’ve heard people they like say. You should be careful not to ascribe too much meaning to the words most people say.
In general, I think it’s a mistake to view other people through an epistemic filter. View them through an instrumental filter instead: don’t ask “what do these people believe?” but “what do these people do?” The first question might lead you to conclude that religious people are dumb. The second question might lead you to explore the various instrumental ways in which religious communities are winning relative to atheist communities, e.g. strong communal support networks, a large cached database of convenient heuristics for dealing with life situations, etc.
If there were a way to send a message to my self of 10 years ago, and I could only send a hundred characters, that’s what I would send.
Why in particular?
The answer depends on how personal you want me to get… Let’s just say I would have evaluated some people a lot more accurately.
I’m obviously terribly shallow. I would send a bunch of sporting results / stock price data.
I believe there are plenty of statistics showing that suddenly acquiring a large sum of money doesn’t, in the long term, make you a) richer or b) happier. Of course, from everyone I say this to, I hear the reply “I would know how to make myself happy”, but obviously this can’t be true for everyone. In this case, I prefer to believe I’m the average guy...
I think the current consensus is that in fact having more money does make you happier.[1] As for richer, I can look at how I’ve lived in the past and observe that I’ve been pretty effective at being frugal and not spending money just because I have it. Of course it’s possible that a sudden cash infusion 10 years ago would have broken all those good habits, but I don’t see any obvious reason to believe it.
[1] See e.g. this (though I’d be a bit cautious given where it’s being reported) and the underlying research.
[EDITED to fix a formatting glitch.]
As I said, this is the standard answer I get, albeit a little bit more sophisticated than the average.
Unless you’re already rich and still have good saving habits, I see a very obvious reason why you would have broken those habits: you suddenly don’t need to save anymore. All the motivational structures you have in place for saving suddenly lose meaning.
Anyway, I don’t trust myself that much in the long run.
I am not aware of any valid inference from “I hear this often” to “this is wrong” :-).
I suppose it depends on what you mean by “rich” and “need”. I don’t feel much like giving out the details of my personal finances here just to satisfy Some Guy On The Internet that I don’t fit his stereotypes, so I’ll just say that my family’s spending habits haven’t changed much (and our saving has accordingly increased in line with income) over ~ 20 years in which our income has increased substantially and our wealth (not unusually, since wealth can be zero or negative) has increased by orders of magnitude. On the other hand, I’m not retired just yet and trying to retire at this point would be uncomfortable, though probably not impossible.
So, sure, it’s possible that a more sudden and larger change might have screwed me up in various ways. But, I repeat, I see no actual evidence for that, and enough evidence that my spending habits are atypical of the population that general factoids of the form “suddenly acquiring a lot of money doesn’t make you richer in the long run” aren’t obviously applicable. (Remark: among those who suddenly acquire a lot of money, I suspect that frequent lottery players are heavily overrepresented. So it’s not even the general population that’s relevant here, but a population skewed towards financial incompetence.)
I agree that you shouldn’t; I’ll just say that indeed you do fit the stereotype.
I think I’ve traced the source of disagreement, let me know if you agree on this analysis.
It’s a neat exercise in tracking priors.
You think that your saving ratio is constant as a function of the derivative of your income, while I think there are breakdown thresholds at large values of the derivative. The disagreement, then, is about the probability of a breakdown threshold.
I, using the outside view, say “according to these statistics, normal people have a (say) 0.8 probability of a breakdown, so you have the same probability”; you, using the inside view, say “using my model of my mind, I say that the extension of the linear model into the far region is still reliable”.
The disagreement then transfers to “how well one can know one’s own mind or motivational structure”, that is, “if I say something about my mind, what is the probability that it is true?”
I don’t know your opinion on this, but I guess it’s high, correct?
In my case, it’s low (NB: it’s low for myself). From this follow all the opinions that we have expressed!
Well, famous-then-forgotten celebrities (in any field: sports, music, movies, etc.) fit the category, but I don’t know how much influence that has. Anyway, I have the feeling that financial competence is rare in the general population, so even if the prior is skewed towards incompetence, that is not much of an effect.
Just for information: Are you deliberately trying to be unpleasant?
First of all, a terminological question: when you say “the derivative of your income” do you actually mean “your income within a short period”? -- i.e., the derivative w.r.t. time of “total income so far” or something of the kind? It sounds as if you do, and I’ll assume that’s what you mean in what follows.
So, anyway, I’m not quite sure whether you’re trying to describe my opinions about (1) the population at large and/or (2) me in particular. My opinion about #1 is that most people spend almost all of their income; maybe their savings:income ratio is approximately constant, or maybe it’s nearer the truth to say that their savings in absolute terms are constant, or maybe something else. But the relevant point (I think) is that most people are, roughly, in the habit of spending until they start running out of money. My opinion about #2 (for which I have pretty good evidence) is that, at least within the range of income I’ve experienced to date, my spending is approximately constant in absolute terms and doesn’t go up much with increasing income or increasing wealth. In particular, I have strong evidence that (1) many people basically execute the algorithm “while I have money: spend some” and (2) I don’t.
(I should maybe add that I don’t think this indicates any particular virtue or brilliance on my part, though of course it’s possible that my undoubted virtue and brilliance are factors. It’s more that most of the things I like to do are fairly cheap, and that I’m strongly motivated to reduce the risk of Bad Things in the future like running out of money.)
Always possible (for people in general, for people-like-me, for me-in-particular). Though, at the risk of repeating myself, I think the failure of sudden influxes of money to make people richer in the long term is probably more a matter of executing that “spend until you run out” algorithm. Do you know whether any of the research on this stuff resolves that question?
I try to use both, and so far as I can tell I’m using both here. I’m not just looking at my model of the insides of my mind and saying “I can see I wouldn’t do anything so stupid” (I certainly don’t trust my introspection that much); so far as I can tell, I would make the same predictions about anyone else with a financial history resembling mine.
Now, for sure, I could be badly wrong. I might be fooling myself when I say I’m judging my likely behaviour in this hypothetical situation on the basis of my (somewhat visible externally) track record, rather than my introspective confidence in my mental processes. I might be wrong about how much evidence that track record is. I might be wrong in my model of why so many people end up in money trouble even if they suddenly acquire a pile of money; maybe it’s a matter of those “breakpoints” rather than of a habit of spending until one runs out. Etc. So I’m certainly not saying I know that me-10-years-ago would have been helped rather than harmed by a sudden windfall. Only that, so far as I can tell, I most likely would have been.
I suggest that people who play the lottery a lot are probably, on balance, exceptionally incompetent, and that those people are probably overrepresented among windfall recipients.
I had a quick look for more information about the effects of suddenly getting money.
This article on Yahoo! Finance describes a study showing that lottery winners are more likely to end up bankrupt if they win more. That seems to fit with my theory that big lottery wins are correlated with buying a lot of lottery tickets, hence with incompetence. It quotes another study saying that people spend more money on lottery tickets if they’re invited to do it in small increments (which is maybe very weak evidence against the “breakpoint” theory, which has the size-of-delta → tendency-to-spend relationship going the other way—except that the quantities involved here are tiny). And it speculates (without citing any sort of evidence) that what’s going on with bankrupt lottery winners is that they keep spending until they run out, which is also my model.
This paper (PDF) finds that people in Germany are more likely to become entrepreneurs if they have made “windfall gains” (inheritance, donations, lottery winnings, payments from things like life insurance), suggesting that at least some people do more productive things with windfalls than just spend them all.
This paper [EDITED to add: ungated version] looks at an unusual lottery in 1832, and according to this blog post finds that on balance winners did better, with those who were already better off improving and those who were worse off being largely unaffected.
[EDITED to add more information about the 1832 lottery now that I can actually read the paper:] Some extracts from the paper: “Participation was nearly universal” (so, maybe, no selection-for-incompetence effect); “The prize in this lottery was a claim on a parcel of land” (so, different from lotteries with monetary prizes); “lottery losers look similar to lottery winners in a series of placebo checks” (so, again, maybe no selection for incompetence); “the poorest third of lottery winners were essentially as poor as the poorest third of lottery losers” (so the wins didn’t help the poorest, but don’t seem to have harmed them either).
Make of all that what you will.
No, even though I speculated that the sentence you’re answering could have been interpreted that way. Just to be clear, the stereotype here is “people who, when told that the general population usually ends up bankrupt after a big lottery win, say ‘I won’t, I know how to save’”. Now I ask you: do you think you don’t fit the stereotype?
Anyway, now I have a clearer picture of your model: you think that there are no threshold phenomena whatsoever, not only for you, but for the general population. You believe that people execute the same algorithm regardless of the amount of money it is applied to. So your point is not “I (probably) don’t have a breakdown threshold” but “I (probably) don’t execute a bad financial algorithm”. That clarifies some more things. Besides, I’m a little sad that you didn’t answer the main point, which was “How well do you think you know the inside mechanism of your mind?”
That would be bad Bayesian probability. The correct way to treat it is “That seems to fit my theory better than your theory”. Do you think it does? Or that it supports my theory equally well? I’m asking because at the moment I’m behind my firm’s firewall and cannot access those links; if you care to discuss it further I could comment this evening.
I’ll just add that I have the impression you’re taking things a little bit too personally; I don’t know why you care to such a degree. Still, pinpointing the exact source of disagreement seems to be a very good exercise in Bayesian rationality; we could even promote it to a proper discussion post.
If that’s your definition of “the stereotype” then I approximately fit (though I wouldn’t quite paraphrase my response as “I know how to save”; it’s a matter of preferences as much as of knowledge, and about spending as much as about saving).
The stereotype I was suggesting I may not fit is that of “people who, in fact, if they got a sudden windfall would blow it all and end up no better off”.
Except that you are (not, I think, for the first time) assuming my model is simpler than it actually is. I don’t claim that there are “no threshold phenomena whatsoever”. I think it’s possible that there are some. I don’t know just what (if any) there are, for me or for others. (My model is probabilistic.) I have not, looking back at my own financial behaviour, observed any dramatic threshold phenomena; it is of course possible that there are some but I don’t see good grounds for thinking there are.
As I already said, my predictions about the behaviour of hypothetical-me are not based on thinking I know the inside mechanism of my mind well, so I’m not sure why that should be the main point. I did, however, say ‴I’m not just looking at my model of the insides of my mind and saying “I can see I wouldn’t do anything so stupid” (I certainly don’t trust my introspection that much)‴. I’m sorry if I made you sad, but I don’t quite understand how I did.
No. It would be bad Bayesian probability if I’d said “That seems to fit with my theory; therefore yours is wrong”. I did not say that. I wasn’t trying to make this a fight between your theory and mine; I was trying to assess my theory. I’m not even sure what your theory about lottery tickets, as such, is. I think the fact that people with larger lottery wins ended up bankrupt more often than people with smaller ones is probably about equally good evidence for “lottery winners tend to be heavy lottery players, who tend to be particularly incompetent” as for “there’s a threshold effect whereby gains above a certain size cause particularly stupid behaviour”.
Well, from where I’m sitting it looks as if we’ve had multiple iterations of the following pattern:
you tell me, with what seems to me like excessive confidence, that I probably have Bad Characteristic X because most people have Bad Characteristic X
I give you what seems to me to be evidence that I don’t
you reiterate your opinion that I probably have Bad Characteristic X because most people do.
The particular values of X we’ve had include
financial incompetence;
predicting one’s own behaviour by naive introspection and neglecting the outside view;
overconfidence in one’s predictions.
In each case, for the avoidance of doubt, I agree that most people have Bad Characteristic X, and I agree that in the absence of other information it’s reasonable to guess that any given person probably has it too. However, it seems to me that
telling someone they probably have X is kinda rude, though possibly justified (not always; consider, e.g., the case where X is “not knowing anything about cognitive biases” and all you know about the person you’re talking to is that they’re a longstanding LW contributor)
continuing to do so when they’ve given you what they consider to be contrary evidence, and not offering a good counterargument, is very rude and probably severely unjustified.
So you’ve made a number of personal claims about me, albeit probabilistic ones; they have all been negative ones; they have all (as it seems to me) been under-supported by evidence; when I have offered contrary evidence you have largely ignored it.
It also seems to me that on each occasion when you’ve attempted to describe my own position, what you have actually described is a simplified version which happens to be clearly inferior to my actual position as I’ve tried to state it. For instance:
I say: I see … enough evidence that my spending habits are atypical of the population. You say: you, using the inside view, say “using my model of my mind, I say that the extension of the linear model in the far region is still reliable”.
I say, in response to your hypothesis about breakpoints: Always possible (for people in general, for people-like-me, for me-in-particular). and: I might be wrong in my model …; maybe it’s a matter of those “breakpoints” rather than of a habit of spending until one runs out. You say: you think that there are no threshold phenomena whatsoever, not only for you, but for the general population.
So. From where I’m sitting, it looks as if you have made a bunch of negative claims about my thinking, not updated in any way in the face of disagreement (and, where appropriate, contrary evidence), and repeatedly offered purported summaries of my opinions that don’t match what I’ve actually said and are clearly stupider than what I’ve actually said.
Now, of course the negative claims began with statistical negative claims about the population at large, and I agree with those claims. But the starting point was my statement that “I would do X” and you chose to continue applying those negative claims to me personally.
I would much prefer a less personalized discussion. I do not enjoy defending myself; it feels too much like boasting.
[EDITED to fix a formatting screwup.]
[EDITED to add: Hi, downvoter! Would you care to tell me what you didn’t like so that I can, if appropriate, do less of it in future? Thanks.]
Hm?
In the form of religious stories or perhaps advice from a religious leader. I should’ve been more specific than “life situations”: my guess is that religious people acquire from their religion ways of dealing with, for example, grief and that atheists may not have cached any such procedures, so they have to figure out how to deal with things like grief.
Finding an appropriate cached procedure for grief for atheists may not be a bad idea. Right after a family death, say, is a bad time to have to work out how you should react to a loved one being suddenly gone forever.
Of course that is not necessarily winning, insofar as it promotes failing to take responsibility for working out solutions that are well fitted to your particular situation (and the associated failure mode where, if you can’t find a cached entry at all, you just revert to form and either act helpless or act out). The best I’m willing to regard that as is ‘maintaining the status quo’ (as with having a lifejacket vs. being able to swim).
I would regard it as unambiguously winning if they had a good database AND succeeded at getting people to take responsibility for developing real problem solving skills. (I think the database would have to be much smaller in this case—consider something like the GROW Blue Book as an example of such a reasonably-sized database, but note that GROW groups are much smaller (max 15 people) than church congregations)
I think you underestimate how difficult thinking is for most people.
That’s true (Dunning-Kruger effect etc.).
Although, isn’t the question not about difficulty, but about whether you really believe you should have, and deserve to have, a good life? I mean, if the responsibility is yours, then it’s yours, no matter whether it’s the responsibility to move a wheelbarrow full of pebbles or to move every stone in the Pyramids. And your life can’t really genuinely improve until you accept that responsibility, no matter what hell you have to go through to become such a person, and no matter how comfortable/‘workable’ your current situation may seem.
(of course, there’s a separate argument to be made here, that ‘people don’t really believe they should have, or deserve to have a good life’. And I would agree that 99% or more don’t. But I think believing in people’s need and ability to take responsibility for their life, is part of believing that they can HAVE a good life, or that they are even worthwhile at all.)
In case this seems like it’s wandered off topic, the general problem of religion I’m trying to point at is ‘disabling help’: Having solutions and support too readily/abundantly available discourages people from owning their own life and their own problems, developing skills that are necessary to a good life. They probably won’t become great at thinking, but they could become better if, and only if, circumstances pressed them to.
Yes, but there are highly probable alternate explanations (other than the truth of Christianity) for their belief in Christianity, so the fact of their belief is very weak evidence for Christianity. If an alarm goes off whenever there’s an earthquake, but also whenever a car drives by outside, then the alarm going off is very weak (practically negligible) evidence for an earthquake. More technically, when you are trying to evaluate the extent to which E is good evidence for H (and consequently, how much you should update your belief in H based on E), you want to look not at the likelihood Pr(E|H), but at the likelihood ratio Pr(E|H)/Pr(E|~H). And the likelihood ratio in this case, I submit, is not much more than 1, which means that updating on the evidence shouldn’t move your prior odds all that much.
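The odds-form arithmetic behind this is easy to sandbox. A minimal sketch, with made-up numbers (a 5% prior, likelihood ratios of 1.2 and 20 chosen purely for illustration, not drawn from the discussion):

```python
# Toy arithmetic for the likelihood-ratio point above. The specific
# numbers (5% prior, ratios of 1.2 and 20) are made up for illustration.

def update(prior_prob, likelihood_ratio):
    """Bayes in odds form: posterior odds = prior odds * P(E|H)/P(E|~H)."""
    odds = prior_prob / (1 - prior_prob) * likelihood_ratio
    return odds / (1 + odds)  # convert back to a probability

prior = 0.05
weak = update(prior, 1.2)     # an alarm that also fires for passing cars
strong = update(prior, 20.0)  # an alarm that fires almost only for earthquakes

print(weak)    # ~0.059: the 5% prior barely moves
print(strong)  # ~0.51: now the hypothesis is live
```

The point being illustrated: evidence with a likelihood ratio near 1 leaves a 5% prior at roughly 6%, whereas the same prior combined with a 20:1 likelihood ratio crosses 50%.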
This seems irrelevant to the truth of Christianity.
That probability is way too high.
Of course, there are also perspective-relative “highly probable” alternate explanations than sound reasoning for non-Christians’ belief in non-Christianity. (I chose that framing precisely to make a point about what hypothesis privilege feels like.) E.g., to make the contrast in perspectives stark, demonic manipulation of intellectual and political currents. E.g., consider that “there are no transhumanly intelligent entities in our environment” would likely be a notion that usefully-modelable-as-malevolent transhumanly intelligent entities would promote. Also “human minds are prone to see agency when there is in fact none, therefore no perception of agency can provide evidence of (non-human) agency” would be a useful idea for (Christian-)hypothetical demons to promote.
Of course, from our side that perspective looks quite discountable because it reminds us of countless cases of humans seeing conspiracies where it’s in fact quite demonstrable that no such conspiracy could have existed; but then, it’s hard to say what the relevance of that is if there is in fact strong but incommunicable evidence of supernaturalism—an abundance of demonstrably wrong conspiracy theorists is another thing that the aforementioned hypothetical supernatural processes would like to provoke and to cultivate. “The concept of ‘evidence’ had something of a different meaning, when you were dealing with someone who had declared themselves to play the game at ‘one level higher than you’.” — HPMOR. At roughly this point I think the arena becomes a social-epistemic quagmire, beyond the capabilities of even the best of Lesswrong to avoid getting something-like-mind-killed about.
Why?
I agree that this doesn’t even make sense. If you’re super intelligent/powerful, you don’t need to hide. You can if you want, but …
Not an explanation, but: “The greatest trick the Devil ever pulled...”
http://en.wikipedia.org/wiki/List_of_religious_populations
How do you account for the other two thirds of people who don’t believe in Christianity and commonly believe things directly contradictory to it? Insofar as every religion was once (when it started) vastly outnumbered by the others, you can’t use population at any given point in history as evidence that a particular religion is likely to be true, since the same exact metric would condemn you to hell at many points in the past. There are several problems with Pascal’s Wager, but the biggest to me is that it’s impossible to choose WHICH Pascal’s Wager to make. You can attempt to conform to all non-contradictory religious rules extant, but that still leaves the problem of choosing which contradictory commandments to obey, as well as the problem of what exactly god even wants from you: belief, or simple ritual. The proliferation of equally plausible religions is to me very strong evidence that no one of them is likely to be true, putting the odds of “Christianity” being true at lower than even 1 percent, and the odds of any specific sect of Christianity being true even lower.
There are also various Christians who believe that other Christians who follow Christianity the wrong way will go to hell.
I can’t upvote this point enough.
And more worryingly, with the Christians I have spoken to, those who are more consistent in their beliefs and actually update the rest of their beliefs on them (and don’t just have “Christianity” as a little disconnected bubble in their beliefs) are overwhelmingly in this category, and those who believe that most Christians will go to heaven usually haven’t thought very hard about the issue.
C.S. Lewis thought nearly everyone was going to Heaven and thought very hard about the issue. (The Great Divorce is brief, engagingly written, an allegory of near-universalism, and a nice typology of some sins.)
I would also add that there are Christians who believe that everyone goes to heaven, even atheists. I spoke with a Protestant theology student in Berlin who assured me that the belief is quite popular among his fellow students. He also had no spiritual experiences whatsoever ;)
Then he’s going to be a priest in a few years.
Well, correct me if I’m wrong, but most of the other popular religions don’t really believe in eternal paradise/damnation, so Pascal’s Wager applies just as much to, say, Christianity vs. Hinduism as it does to Christianity vs. atheism. Jews, Buddhists, and Hindus don’t believe in hell but, as far as I can tell, Muslims do. So if I were going to buy into Pascal’s Wager, I think I would read apologetics of both Christianity and Islam, figure out which one seemed more likely, and go with that one. Even if you arrived at equal probability estimates for both, flipping a coin and picking one would still be better than going with atheism, right?
Why? Couldn’t it be something like, Religion A is correct, Religion B almost gets it and is getting at the same essential truth, but is wrong in a few ways, Religion C is an outdated version of Religion A that failed to update on new information, Religion D is an altered imitation of Religion A that only exists for political reasons, etc.
Good post though, and you sort of half-convinced me that there are flaws in Pascal’s Wager, but I’m still not so sure.
You’re combining two reasons for believing: Pascal’s Wager, and popularity (that many people already believe). That way, you try to avoid a pure Pascal’s Mugging, but if the mugger can claim to have successfully mugged many people in the past, then you’ll submit to the mugging. You’ll believe in a religion if it has Heaven and Hell in it, but only if it’s also popular enough.
You’re updating on the evidence that many people believe in a religion, but it’s unclear what it’s evidence for. How did most people come to believe in their religion? They can’t have followed your decision procedure, because it only tells you to believe in popular religions, and every religion historically started out small and unpopular.
So for your argument to work, you must believe that the truth of a religion is a strong positive cause of people believing in it. (It can’t be overwhelmingly strong, though, since no religion has or has had a large majority of the world believing in it.)
But if people can somehow detect or deduce the truth of a religion on their own—and moreover, billions of people can do so (in the case of the biggest religions) - then you should be able to do so as well.
Therefore I suggest you try to decide on the truth of a religion directly, the way those other people did. Pascal’s Wager can at most bias you in favour of religions with Hell in them, but you still need some unrelated evidence for their truth, or else you fall prey to Pascal’s Mugging.
Even if you limit yourself to eternal damnation promising religions, you still need to decide which brand of Christianity/Islam is true.
If religion A is true, that implies that religion A’s god exists and acts in a way consistent with the tenets of that religion. This implies that all of humanity should have strong and very believable evidence for Religion A over all other religions. But we have a large number of religions that describe god and gods acting in very different ways. This is either evidence that all the religions are relatively false, that god is inconsistent, or that we have multiple gods who are of course free to contradict one another. There’s a lot of evidence that religions sprout from other religions, and you could semi-plausibly argue that there is a proto-religion that all modern ones are versions or corruptions of, but this doesn’t actually work to select Christianity, because we have strong evidence that many religions predate Christianity, including some from which it appears to have borrowed myths.
Another problem with Pascal’s Wager: claims about eternal rewards or punishments are not as difficult to make as they would be to make plausible. Basically: any given string of words said by a person is not plausible evidence for infinite anything, because it’s far easier to SAY “infinity” than to provide any other kind of evidence. This means you can’t afford to multiply utility by infinity, because at any point someone can make any claim involving infinity and fuck up all your math.
I can’t speak for the other ones, but Buddhists at least don’t have a “hell” that non-believers go to when they die, because Buddhists already believe that life is an eternal cycle of infinite suffering that can only be escaped by following the tenets of their religion. Thus, rather than going to hell, non-believers just get reincarnated back into our current world, which Buddhism sees as being like unto hell.
To steelman it, what about a bet that believing in a higher power, no matter the flavor, saves your immortal soul from eternal damnation?
That is eerily similar to an Omega who deliberately favours specific decision theories instead of their results.
Just trying to see what form of the Pascal’s wager would avoid the strongest objections.
I don’t think this is just about the afterlife. Do any religions offer good but implausible advice about how to live?
What do you mean by ‘good but implausible’?
I was thinking about the Christian emphasis on forgiveness, but the Orthodox Jewish idea of having a high proportion of one’s life affected by religious rules would also count.
Judging something as ‘good’ depends on your ethical framework. What framework do you have in mind when you ask if any religions offer good advice? After all, every religion offers good advice according to its own ethics.
Going by broadly humanistic, atheistic ethics, what is good about having a high proportion of one’s life be affected by religious rules? (Whether the Orthodox Jewish rules, or in general.)
It may be worth something for people to have some low-hanging fruit for feeling as though they’re doing the right thing.
That sounds like a small factor compared to what the rules actually tell people to do.
If the higher power cared, don’t you think such power would advertise more effectively? Religious wars seem like pointless suffering if any sufficient spiritual belief saves the soul.
If the higher power cared about your well being, it would just “save” everyone regardless of belief or other attributes. It would also intervene to create heaven on earth and populate the whole universe with happy people.
Remember that the phrase “save your soul” refers to saving it from the eternal torture visited by that higher power.
I don’t think we disagree.
I should think that this is more likely to indicate that nobody, including really smart people, and including you, actually knows what’s what, and that trying to chase after all these Pascal’s muggings is pointless because you will always run into another one that seems convincing from someone else smart.
There’s a bit of a problem with the claim that nobody knows what’s what: the usual procedure when someone lacks knowledge is to assign an ignorance prior. The standard methods for generating ignorance priors, usually some formulation of Occam’s razor, assign very low probability to claims as complex as common religions.
People being religious is some evidence that religion is true. Aside from drethelin’s point about multiple contradictory religions, religions as actually practiced make predictions. It appears that those predictions do not stand up to rigorous examination.
To pick an easy example, I don’t think anyone thinks a Catholic priest can turn wine into blood on command. And if an organized religion does not make predictions that could be wrong, why should you change your behavior based on that organization’s recommendations?
Neither do Catholics think their priests turn wine into actual blood in any observable sense. After all, they’re able to see and taste it as wine afterwards! Instead they’re dualists of a sort: they believe the substance of the wine is replaced by that of blood, while the accidents (everything observable) remain. And they think this makes testable predictions, because they think they have dualistic non-material souls which can then somehow experience the altered substance of the wine-blood.
Anyway, Catholicism makes lots of other predictions about the ordinary material world, which of course don’t come true, and so it’s more productive to focus on those. For instance, the efficacy of prayer, miraculous healing, and the power of sacred relics and places.
I really don’t think that the vast majority of Catholics bother forming a position regarding transubstantiation. One of the major benefits of joining a religion is letting other people think for you.
This is probably true, but the discussion was about religion (i.e. official dogma) making predictions. Lots of holes can be picked in that, of course.
I don’t think it’s fair to say that none of the practical predictions of religion holds up to rigorous examination. In Willpower by Roy Baumeister, the author describes well how organisations like Alcoholics Anonymous can effectively use religious ideas to help people quit alcohol.
Buddhist meditation is also a practice with a lot of backing from rigorous examination.
On LessWrong, Luke Muehlhauser wrote that Scientology 101 was one of the best learning experiences of his life, notwithstanding the dangers that come from the group.
Various religions do advocate practices that have concrete real-world effects. Focusing on whether or not the wine really gets turned into blood misses the point if you want to weigh the practical benefits and practical disadvantages of following a religion.
Alcoholics Anonymous is famously ineffective, but separate from that: what’s your point here? Being a Christian is not the same as subjecting Christian practices to rigorous examination to test for effectiveness. The question the original poster asked was not “Does religion have any worth?” but “Should I become a practicing Christian to avoid burning in hell for eternity?”
To me it is only evidence that people are irrational.
If literally the only evidence you had was that the overwhelming majority of people professed to believe in religion, then you should update in favor of religion being true.
Your belief that people are irrational relies on additional evidence of the type that I referenced. It is not contained in the fact of overwhelming belief.
Like how Knox’s roommate’s death by murder is evidence that Knox committed the murder. And that evidence is overwhelmed by other evidence that suggests Knox is not the murderer.
Whether people believing in a hypothesis is evidence for the hypothesis depends on the hypothesis. If the hypothesis does not contain a claim that there is some mechanism by which people would come to believe in the hypothesis, then it is not evidence. For instance, if people believe in a tea kettle orbiting the sun, their belief is not evidence for it being true, because there is no mechanism by which a tea kettle orbiting the sun might cause people to believe that there is a tea kettle orbiting the sun. In fact, there are some hypotheses for which belief is evidence against. For instance, if someone believes in a conspiracy theory, that’s evidence against the conspiracy theory: in a world in which a set of events X occurs, but no conspiracy is behind it, people would be free to develop conspiracy theories regarding X. But in a world in which X occurs, and a conspiracy is behind it, it is likely that the conspiracy will interfere with the formation of any conspiracy theory.
Bad example. In fact, the example you give is sufficient to require that your contention be modified (or rejected as is).
While it is not the case that there is a tea kettle orbiting the sun (except on earth) there is a mechanism by which people can assign various degrees of probability to that hypothesis, including probabilities high enough to constitute ‘belief’. This is the case even if the existence of such a kettle is assumed to have not caused the kettle belief. Instead, if observations about how physics works and our apparent place within it were such that kettles are highly likely to exist orbiting suns like ours then I would believe that there is a kettle orbiting the sun.
It so happens that it is crazy to believe in space kettles that we haven’t seen. This isn’t because we haven’t seen them—we wouldn’t expect to see them either way. This is because they (probably) don’t exist (based on all our observations of physics). If our experiments suggested a different (perhaps less reducible) physics then it would be correct to believe in space kettles despite there being no way for the space kettle to have caused the belief.
Yes, but this is different from a generic “People being religious is some evidence that religion is true.”
P(religion is true | overwhelming professing of belief) > P(religion is true | absence of overwhelming professing of belief).
In other words, I think my two formulations are isomorphic. If we define evidence such that absence of evidence is evidence of absence, then one implication is that it is possible for some evidence to exist in favor of false propositions.
This is possible with any definition of evidence. Every bit of information you receive makes you discard some theories which have been disproven, so it’s evidence in favour of each of the ones you don’t discard. But only one of those is fully true; the others are false.
The issue is: How do you know that you aren’t just as irrational as them?
My personal answer:
I’m smart. They’re not (IQ test, SAT, or a million other evidences). Even though high intelligence doesn’t at all cause rationality, in my experience judging others it’s so correlated as to nearly be a prerequisite.
I care a lot (but not too much) about consistency under the best / most rational reflection I’m capable of. Whenever this would conflict with people liking me, I know how to keep a secret. They don’t make such strong claims of valuing rationality. Maybe others are secretly rational, but I doubt it. In the circles I move in, nobody is trying to conceal intellect. If you could be fun, nice, AND seem smart, you would do it. Those who can’t seem smart, aren’t.
I’m winning more than they are.
That value doesn’t directly lead to having a belief system where individual beliefs can be used to make accurate predictions. For most practical purposes the forward–backward algorithm produces better models of the world than Viterbi. Viterbi optimizes for overall consistency, while the forward–backward algorithm looks at local states.
If you have uncertainty in the data you reason about, the most consistent world view is likely flawed.
One example is heat development in some forms of meditation. The fact that our body can develop heat through thermogenin without any shivering is a relatively new biochemical discovery. There were plenty of self-professed rationalists who didn’t believe in any heat development in meditation because people meditating don’t shiver. In examples like that, the search for consistency leads to denying important empirical evidence.
It takes a certain humility to accept that there is heat development during meditation without knowing a mechanism that can account for it.
People who want to signal socially that they know it all don’t have the epistemic humility that allows for the insight that there are important things they just don’t understand.
To quote Nassim Taleb: “It takes extraordinary wisdom and self control to accept that many things have a logic we do not understand that is smarter than our own.”
For the record, I’m not a member of any religion.
I’m pretty humble about what I know. That said, it sometimes pays to not undersell (when others are confidently wrong, and there’s no time to explain why, for example).
Interesting analogy: “best path / MAP (Viterbi)” :: “integral over all paths / expectation” as “consistent” :: “some other, not-consistent type of thinking”? I don’t see what “integral over many possibilities” has to do with consistency, except that it’s sometimes the correct (but more expensive) thing to do.
I’m not so much talking about humility that you communicate to other people but about actually thinking that the other person might be right.
There are cases where the forward–backward algorithm gives you a path that’s impossible. I would call those paths inconsistent.
That’s one of the lessons I learned in bioinformatics. Having an algorithm that’s robust to error is often much better than just picking the explanation that’s most likely to explain the data.
A map of the world that allows for some inconsistency is more robust than one where one error leads to a lot of bad updates to make the map consistent with the error.
I understand forward-backward (in general) pretty well and am not sure what application you’re thinking of or what you mean by “a path that’s impossible to happen”. Anyway, yes, I agree that you shouldn’t usually put 0 plausibility on views other than your current best guess.
It’s possible that the transition from A at time 5 to B at time 6 has p=0, and yet the path produced by forward–backward still goes from A at 5 to B at 6.
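A minimal sketch of how that can happen, using a made-up three-state chain with uniform emissions (so the forward–backward posteriors reduce to the chain’s marginals): the locally most probable states at the two steps are joined by a zero-probability transition, while Viterbi returns a consistent path.

```python
# States and a two-step Markov chain (hypothetical numbers; uniform
# emissions, so posterior decoding reduces to the chain's marginals).
states = ["A", "B", "C"]
init = {"A": 0.4, "B": 0.3, "C": 0.3}
trans = {
    "A": {"A": 0.0, "B": 0.0, "C": 1.0},  # note: A -> B is impossible
    "B": {"A": 0.5, "B": 0.5, "C": 0.0},
    "C": {"A": 0.0, "B": 1.0, "C": 0.0},
}

# Marginal state distribution at t=2.
marg2 = {s: sum(init[r] * trans[r][s] for r in states) for s in states}

# Posterior decoding: pick the most probable state at each step.
fb_path = [max(init, key=init.get), max(marg2, key=marg2.get)]

# Viterbi: the most probable complete path.
vit_path = max(
    ((r, s) for r in states for s in states),
    key=lambda p: init[p[0]] * trans[p[0]][p[1]],
)

print(fb_path)          # ['A', 'B'] -- locally best states
print(trans["A"]["B"])  # 0.0 -- but that path has zero probability
print(list(vit_path))   # ['A', 'C'] -- Viterbi gives a consistent path
```

Here the posterior-decoded sequence A→B is an “inconsistent” path in the sense above: each state is individually most probable, yet the sequence as a whole cannot occur.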
Qiaochu_Yuan has it right—the vast majority of Christians do not constitute additional evidence.
Moreover, the Bible (Jewish, Catholic, or Protestant) describes God as an abusive jerk. Everything we know about abusive jerks says you should get as far away from him as possible. Remember that ‘something like the God of the Bible exists’ is a simpler hypothesis than Pascal’s Christianity, and in fact is true in most multiverse theories. (I hate that name, by the way. Can’t we replace it with ‘macrocosm’?)
More generally, if for some odd reason you find yourself entertaining the idea of miraculous powers, you need to compare at least two hypotheses:
*Reality allows these powers to exist, AND they already exist, AND your actions can affect whether these powers send you to Heaven or Hell (where “Heaven” is definitely better and not at all like spending eternity with a human-like sadist capable of creating Hell), AND faith in a God such as humans have imagined will send you to Heaven, AND lack of this already-pretty-specific faith will send you to Hell.
*Reality allows these powers to exist, AND humans can affect them somehow, AND religion would interfere with exploiting them effectively.
Why such a high number? I cannot imagine any odds I would take on a bet like that.
Is people believing in Christianity significantly more likely under the hypothesis that it is true, as opposed to under the hypothesis that it is false? Once one person believes in Christianity, does more people believing in Christianity have significant further marginal evidentiary value? Does other people believing in Christianity indicate that they have knowledge that you don’t have?
Yes.
Yes.
Yes.
(Weakly.)
I agree completely. It’s impossible for me to imagine a scenario where a marginal believer is negative evidence for the belief—at best you can explain the belief away (“they’re just conforming” lets you approach zero slope once it’s a majority religion with a death penalty for apostates).
I have found this argument compelling, especially the portion about assigning a probability to the truth of Christian belief. Even if we have arguments that seem to demonstrate why it is that radically smart people believe a religion without recourse to there being good arguments for the religion, we haven’t explained why these people instead think there are good arguments. Sure, you don’t think they’re good arguments, but they do, and they’re rational agents as well.
You could say, “well they’re not rational agents, that was the criticism in the first place,” but we have the same problem that they do think they themselves are rational agents. What level do we have to approach that allows you to make a claim about how your methods for constructing probabilities trump theirs? The highest level is just, “you’re both human,” which makes valid the point that to some extent you should listen to the opinions of others. The next level “you’re both intelligent humans aimed at the production of true beliefs” is far stronger, and true in this case.
Where the Wager breaks down for me is that much more is required to demonstrate that if Christianity is true, God sends those who fail to produce Christian belief to Hell. Of course, this could be subject to the argument that many smart people also believe this corollary, but it remains true that it is an additional jump, and that many fewer Christians take it than who are simply Christians.
What takes the cake for me is asking what a good God would value. It’s a coy response for the atheist to say that a good God would understand the reasons one has for being an atheist, and that it’s his fault that the evidence doesn’t get there. The form of this argument works for me, with a nuance: Nobody is honest, and nobody deserves, as far as I can tell, any more or less pain in eternity for something so complex as forming the right belief about something so complicated. God must be able to uncrack the free will enigma and decide what’s truly important about people’s actions, and somehow it doesn’t seem that the relevant morality-stuff is perfectly predicted by religious affiliation. This doesn’t suggest that God might not have other good reasons to send people to Hell, but it seems hard to tease those out of yourself to a sufficient extent to start worrying beyond worrying about how much good you want to do in general. If God punishes people for not being good enough, the standard method of reducing free will to remarkably low levels makes it hard to see what morality-stuff looks like. Whether or not it exists, you have the ability to change your actions by becoming more honest, more loving, and hence possibly more likely to be affiliated with the correct religion. But it seems horrible for God to make it a part of the game for you to be worrying about whether or not you go to Hell for reasons other than honesty or love. Worry about honesty and love, and don’t worry about where that leads.
In short, maybe Hell is one outcome of the decision game of life. But very likely God wrote it so that one’s acceptance of Pascal’s wager has no impact on the outcome. Sure, maybe one’s acceptance of Christianity does, but there’s nothing you can do about it, and if God is good, then this is also good.
People are not rational agents, and people do not believe in religions on the basis of “good arguments.” Most people are the same religion as their parents.
As often noted, most nonreligious parents have nonreligious children as well. Does that mean that people do not disbelieve religions on the basis of good arguments?
Your comment is subject to the same criticism we’re discussing. If any given issue has been raised, then some smart religious person is aware of it and believes anyway.
I think most people do not disbelieve religions on the basis of good arguments either. I’m most likely atheist because my parents are. The point is that you can’t treat majority beliefs as the aggregate beliefs of groups of rational agents. It doesn’t matter if for any random “good argument” some believer or nonbeliever has heard it and not been swayed; you should not expect the majority of people’s beliefs on things that do not directly impinge on their lives to be very reliably correlated with things other than the beliefs of those around them.
The above musings do not hinge on the ratio of people in a group believing things for the right reasons, only that some portion of them are.
Your consideration helps us assign probabilities for complex beliefs, but it doesn’t help us improve them. Upon discovering that your beliefs correlate with those of your parents, you can introduce uncertainty in your current assignments, but you go about improving them by thinking about good arguments. And only good arguments.
The thrust of the original comment here is that discovering which arguments are good is not straightforward. You can only go so deep into the threads of argumentation until you start scraping on your own biases and incapacities. Your logic is not magic, and neither are intuitions nor others’ beliefs. But all of them are heuristics that you can account for when assigning probabilities. The very fact that others exist who are capable of digging as deep into the logic and being as skeptical of their intuitions, and who believe differently than you, is evidence that their opinion is correct. It matters little whether every person of that opinion is like that, only that the best are. Because those are the only people you’re paying attention to.
[ETA: Retracted because I don’t have the aversion-defeating energy necessary to polish this, but:]
To clarify, presumably “true” here doesn’t mean all or even most of the claims of Christianity are true, just that there are some decision policies emphasized by Christianity that are plausible enough that Pascal’s wager can be justifiably applied to amplify their salience.
I can see two different groups of claims that both seem central to Christian moral (i.e. decision-policy-relevant) philosophy as I understand it, which in my mind I would keep separate if at all possible but that in Christian philosophy and dogma are very much mixed together:
The first group of claims is in some ways more practical and, to a LessWronger, more objectionable. It reasons from various allegedly supernatural phenomena to the conclusion that unless a human acts in a way seemingly concordant with the expressed preferences of the origins of those supernatural phenomena, that human will be risking some grave, essentially game theoretic consequence as well as some chance of being in moral error, even if the morality of the prescriptions isn’t subjectively verifiable. Moral error, that is, because disregarding the advice, threats, requests, policies &c. of agents seemingly vastly more intelligent than you is a failure mode, and furthermore it’s a failure mode that seemingly justifies retrospective condemnatory judgments of the form “you had all this evidence handed to you by a transhumanly intelligent entity and you chose to ignore it?” even if in some fundamental sense those judgments aren’t themselves “moral”. An important note: saying “supernaturalism is silly, therefore I don’t even have to accept the premises of that whole line of reasoning” runs into some serious Aumann problems, much more serious than can be casually cast aside, especially if you have a Pascalian argument ready to pounce.
The second group of claims is more philosophical and meta-ethical, and is emphasized more in intellectually advanced forms of Christianity, e.g. Scholasticism. One take on the main idea is that there is something like an eternal moral-esque standard etched into the laws of decision theoretic logic any deviations from which will result in pointless self-defeat. You will sometimes see it claimed that it isn’t that God is punishing you as such, it’s that you have knowingly chosen to distance yourself from the moral law and have thus brought ruin upon yourself. To some extent I think it’s merely a difference of framing born of Christianity’s attempts to gain resonance with different parts of default human psychology, i.e. something like third party game theoretic punishment-aversion/credit-seeking on one hand and first person decision theoretic regret-minimization on the other. [This branch needs a lot more fleshing out, but I’m too tired to continue.]
But note that in early Christian writings especially and in relatively modern Christian polemic, you’ll get a mess of moralism founded on insight into the nature of human psychology, theological speculation, supernatural evidence, appeals to intuitive Aumancy, et cetera. [Too tired to integrate this line of thought into the broader structure of my comment.]
I want to vote this up to encourage posting good comments even when incompletely polished; but since you formally retracted this, I can’t.
If you take the outside view, and account for the fact that sixty-something percent of people don’t believe in Christianity, it seems like (assuming you just learned that fact) you should update (a bit) towards Christianity not being true.
If you did know the percentages already, they should be already integrated in your priors, together with everything else you know about the subject.
Note that the majority of numbers are not prime. But if you write a computer program (assuming you’re quite good at it) and it tells you 11 is prime, you should probably assign a high probability to it being prime, even though the program might have a bug.
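The comment’s point can be sketched as follows; the reliability figure is hypothetical, standing in for your prior that the program is bug-free:

```python
def is_prime(n):
    """Trial-division primality test."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

# Most numbers are not prime, yet the program's verdict dominates
# that base rate.  Hypothetical reliability figure:
p_bug = 0.01  # your estimated chance the program errs on this input
if is_prime(11):
    confidence = 1 - p_bug
    print(confidence)  # 0.99: high probability 11 is prime, bug or not
```

The verdict moves you from the low base rate of primality to near-certainty, discounted only by however likely you think a bug is.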