It’s one thing to make lemonade out of lemons, another to proclaim that lemons are what you’d hope for in the first place.
Gary Marcus, Kluge
Relevant to deathism and many other things
A similar specific variant: http://s.wsj.net/public/resources/images/OB-DU671_0604dn_D_20090604122543.jpg
At this point one must expect to meet with an objection. ‘Well then, if even obdurate sceptics admit that the assertions of religion cannot be refuted by reason, why should I not believe in them, since they have so much on their side tradition, the agreement of mankind, and all the consolations they offer?’ Why not, indeed? Just as no one can be forced to believe, so no one can be forced to disbelieve. But do not let us be satisfied with deceiving ourselves that arguments like these take us along the road of correct thinking. If ever there was a case of a lame excuse we have it here. Ignorance is ignorance; no right to believe anything can be derived from it. In other matters no sensible person will behave so irresponsibly or rest content with such feeble grounds for his opinions and for the line he takes. It is only in the highest and most sacred things that he allows himself to do so.
Sigmund Freud, The Future of an Illusion, part VI
[heat] turned out to be something that can easily be predicted by Newton’s laws.
Surely you mean “easily in hindsight”?
This might not be seen by many, but it might make your day to hear that Amanda Knox has just been acquitted.
I am working on finishing up a philosophy paper about whether “fine-tuning” (the claim that the physical constants and initial conditions that permit the evolution of life and conscious observers are rare in the space of physically possible parameters) supports “multiverse” hypotheses according to which the cosmos is huge and is heterogeneous in its local conditions. One major argument for the view that fine-tuning does not support multiverse hypotheses is due to Ian Hacking, who claimed that this inference is analogous to an “inverse gambler’s fallacy” where a gambler enters a casino, witnesses a roll of dice resulting in double-sixes, and concludes that the people must have been throwing dice for a while.
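To see why Hacking calls this an "inverse gambler's fallacy": the outcome of the single roll the gambler witnesses is independent of how many rolls preceded it, so conditioning on that outcome should leave his credence in a long session unchanged. A minimal simulation sketch (the 50/50 prior and session lengths are made-up numbers, purely for illustration):

```python
import random

random.seed(0)

# Hypothetical prior: the session is equally likely to be
# short (this is the first roll) or long (many rolls already happened).
trials = 200_000
short_and_66 = long_and_66 = 0

for _ in range(trials):
    long_session = random.random() < 0.5
    # The gambler witnesses only the one roll happening as he enters.
    roll = (random.randint(1, 6), random.randint(1, 6))
    if roll == (6, 6):
        if long_session:
            long_and_66 += 1
        else:
            short_and_66 += 1

# P(long session | witnessed double-six) should match the prior of 0.5,
# since the witnessed roll carries no information about session length.
posterior_long = long_and_66 / (long_and_66 + short_and_66)
print(posterior_long)
```

The posterior hovers around 0.5, matching the prior: witnessing the double-six gives the gambler no evidence that the dice have been rolled for a while. The dispute in the literature is over whether the fine-tuning case is relevantly analogous.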
While going through Nick Bostrom’s book Anthropic Bias, I’ve found his discussion of Hacking’s argument (and of a significantly improved recent version by Roger White, available here) somewhat unilluminating, although I thought there must be something wrong with the argument. Going through the existing replies to this argument in the literature, I’ve found counterarguments that either fail straightforwardly or (more commonly) render fine-tuning irrelevant to whether multiverse hypotheses are confirmed, degenerating into an almost a priori argument that I find very implausible. I’ve found a fairly simple way of seeing exactly how the Hacking/White argument goes wrong, by combining Bostrom’s self-sampling assumption with a technical fix independently arrived at by a few other philosophers. This solution does not generate the implausible a priori argument for the multiverse that previous approaches in the literature do, as long as the reference class (for applying the self-sampling assumption) satisfies some weak requirements.
The result is a critical review paper that goes through the literature while building up the concepts needed to understand the proposed solution. All the content is in place; I’m mostly working on finishing the draft: integrating notation across sections, making it readable to philosophers with at least rudimentary knowledge of Bayesianism, and generally polishing the paper to meet top-tier journal standards.
Thank you. I intend to post it in the discussion section with a request for feedback when I have finished the draft.
I fixed the second paragraph. I meant a “solution” to the challenge posed by the Hacking/White “inverse gambler’s fallacy” argument, basically just an account of where exactly the argument goes wrong.
Yes, I’ve read Bradley’s paper, and his approach is the best I’ve seen so far. It raises all the right questions and has been very helpful to me personally in giving me an idea of what form a plausible reply to the inverse gambler’s fallacy argument would take. I do indeed think his approach collapses into an argument that is almost a priori / barely sensitive to fine-tuning (unless one adopts a fairly ad hoc metaphysical view of the necessary and sufficient conditions of your existence, a view that Bradley makes explicit in a forthcoming paper). Bradley’s argument can be fixed by rejecting the methodological principle he implicitly relies on (which is the idea that the correct “selection procedure” is “biased”, in his technical sense, toward your existence; Roger White also relies on this idea, which he calls the “observation principle”) and replacing it with the self-sampling assumption with an at least moderately inclusive and universe-neutral reference class.
Quite a few Christian denominations don’t think that souls go to heaven immediately after death. Seventh-Day-Adventists, for example, believe that you’re basically dead until Judgment Day, when you will be resurrected in a version of your previous material body. You might want to look into the biblical textual support that SDAs and similar denominations use to justify these beliefs.
If a person can’t be sure that something even happened to them, my utility function is rounding it off to zero.
This may be already obvious to you, but such a utility function is incoherent (as made vivid by examples like the self-torturer).
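The incoherence can be made concrete with a toy version of the self-torturer: if each increment of harm is individually too small to be sure of, a utility function that rounds sub-threshold harms to zero assigns zero to every step, yet the steps sum to a serious harm. A minimal sketch (the threshold and increment count are made-up numbers for illustration):

```python
# Assumed threshold below which a harm "can't be sure it even happened".
IMPERCEPTIBLE = 1e-4

def rounded_disutility(harm):
    """Round any individually imperceptible harm down to zero."""
    return 0.0 if harm < IMPERCEPTIBLE else harm

# Each step is unnoticeable on its own, but there are many of them.
increments = [IMPERCEPTIBLE / 2] * 20_000

# Evaluated step by step, the rounding function assigns zero disutility...
stepwise_total = sum(rounded_disutility(h) for h in increments)

# ...yet the very same sequence, taken as a whole, is a large harm.
actual_total = sum(increments)

print(stepwise_total, actual_total)  # stepwise total is 0.0; actual total is ~1.0
```

So a utility function that rounds sub-certainty harms to zero either violates additivity over the increments or ranks the whole sequence as harmless, which is the self-torturer's predicament.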
I would think that hypothetical juror judgments of guilt or innocence may be much more prone to bias than a more “dispassionate” look at the evidence generating a probability estimate. Even if one should count one’s own hypothetical guilt/innocence judgment as a small bit of evidence in the right direction, explicitly trying to calibrate this judgment against one’s prior probability estimate is likely to make one over-correct that estimate.
Speaking of iTunes U podcasts, I would really appreciate it if people could list specific courses that are not significantly hampered by the lack of visuals. Some courses (like Ben Polak’s Game Theory course at Yale, and I suspect most other courses involving math) are close to useless when one only has the sound but are much better when watched.
I recall checking out the blog a while back, upon Lukeprog’s recommendation via his blog, and leaving with a much lower opinion of the author after reading his post on the representativeness heuristic (causing me to classify him as pretty close to my model of Massimo Piattelli-Palmarini). If you check out the comment section, it looks like he thinks that your probability estimate in cases like the lawyer/engineer question should always track the frequency information you are given, because using your subjective stereotype information would be to “ignore statistics.” Although I haven’t bothered reading his stuff since, I expect that a careful look at his articles will reveal further such misunderstandings.
What is it for something to give neurobiological credence to the “bounded utility” answer to Pascal’s Mugging?
I think the trouble might come from imagining this as a gradual process by which a dog population evolved into a tumor population (which is not what happened; the wording in the original post is pretty misleading). The dog-to-tumor part is actually the easier and less shocking part of the story. Tumors are basically just cells that, through some mutation, have trouble regulating cell division and then divide uncontrollably. Malignant tumors (what we call cancers) are just tumors that happen to harm the organism (and may metastasize). So this particular tumor was once a dog cell, just as every human cancer starts out as a human cell. The interesting part of the story is that the tumor acquired a limited ability to survive outside the original dog’s body, and became able to survive within other dogs and other canids.
I recall that one popular reality check method (i.e. a reliable way of telling whether you’re dreaming or IRL) is to check the time on your watch, look away, then check again. So you can see why any activity that involves having to write stuff down and have it remain unchanged while your attention is elsewhere might not be the best LD activity.
You use the “Golem Genie” in an odd way (it figures in only a tiny portion of the paper). You introduce the thought experiment (to elicit a sense of urgency and concrete importance, I assume) and point out the analogy to superintelligence. But with the exception of a few words on hedonistic utilitarianism, all the specific examples of moral theories yielding unwanted consequences when implemented are discussed with reference to superintelligence; the Genie is never mentioned again. If you want to keep the Genie, I would stick with it until you’ve gone through all the moral theories you discuss, and only at the end point out the analogy to superintelligence.
Are you playing devil’s advocate?
As Luke said, replies are on the way. Among others, Dennett, Dreyfus, (Paul) Churchland, Jesse Prinz, and Kevin Kelly (of Ockham efficient convergence fame) have agreed to reply in an upcoming issue of the Journal of Consciousness Studies, according to Chalmers’ blog.
I will be having someone rid the papers of author information and self-referential remarks before reading them.