“Undying truths can always be rediscovered no matter how many times they’re lost.”
At potentially prohibitive expense. Can you imagine trying to start physics over again, from the beginning?
Eliezer: “As far as I know, [Rand] wasn’t particularly good at math.”
A relevant passage from Barbara Branden’s biography of Rand:
“The subject [Rand] most enjoyed during her high school years, the one subject of which she never tired, was mathematics. ‘My mathematics teacher was delighted with me. When I graduated, he said, “It will be a crime if you don’t go into mathematics.” I said only, “That’s not enough of a career.” I felt that it was too abstract, it had nothing to do with real life. I loved it, but I didn’t intend to be an engineer or to go into any applied profession, and to study mathematics as such seemed too ivory tower, too purposeless—and I would say so today.’ Mathematics, she thought, was a method. Like logic, it was an invaluable tool, but it was a means to an end, not an end in itself. She wanted an activity that, while drawing on her theoretical capacity, would unite theory and its practical application. That desire was an essential element in the continuing appeal that fiction held for her: fiction made possible the integration of wide abstract principles and their direct expression in and application to man’s life.” (Barbara Branden, The Passion of Ayn Rand, page 35 of my edition)
I still like Tom’s the best.
“Requiring someone to laugh in order to prove their non-cultishness [...] doesn’t quite work.”
But if they don’t laugh, and it’s not sufficiently obvious that the joke is too obvious, doesn’t the lack of laughter serve as (rather weak) Bayesian evidence of cultishness?
Which suggests—
Q: How many Overcoming Bias readers does it take to change a lightbulb? A: None; the RAND experiment showed that lightbulbs are worthless.
Q: How many Overcoming Bias readers does it take to change a lightbulb? A: Just one, but first they have to calculate P(change|light)P(light) / (P(change|light)P(light) + P(change|no-light)P(no-light)).
“How many Overcoming Bias readers does it take to change a lightbulb?” “I think four. What do you think?” “Um, I was going to say ‘One,’ and then the punchl—” “Okay, then 2.5?”
If you don’t find the above funny, consider raising P(we’re-a-cult).
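For the pedants in the audience, the Bayes'-theorem punchline does compute something, namely P(light|change). A sketch with invented numbers (the probabilities below are made up purely for illustration):

```python
# Bayes' theorem from the joke:
#   P(light | change) = P(change | light) P(light) /
#       (P(change | light) P(light) + P(change | no-light) P(no-light))
# All numbers below are invented for illustration.
p_light = 0.5                  # prior that the bulb works
p_change_given_light = 0.1     # chance of changing a working bulb
p_change_given_no_light = 0.9  # chance of changing a dead bulb

numerator = p_change_given_light * p_light
denominator = numerator + p_change_given_no_light * (1 - p_light)
posterior = numerator / denominator
print(posterior)  # 0.1
```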
Steven wins the thread.
Ian, are you arguing that the concept of omnipotence is incoherent, or merely (as Michael seems to have interpreted you) that we have no reason to believe that any omnipotent entity actually exists?
If you really mean the latter, then I suspect most people here will agree with you: if one does not observe any evidence for omnipotence, and one accepts Occam’s razor (as reasonable people do), then one concludes that no omnipotent entity exists, unless and until strong evidence to the contrary comes up.
But it remains the case that the idea of omnipotence is compatible with the evidence. The religious can, without logical self-contradiction, claim that God-in-Her-Infinite-Wisdom chooses to make created objects behave in predictable ways. It’s true that one would be silly to believe this story: that would be violating Occam’s razor, “starting with imagination, and then using reality only as a test”—however you want to phrase it—but it’s not contradictory.
If you want to show that an omnipotent entity cannot exist (that P(God-exists) is closer to, say, P(1+1=9) than P(there’s-an-invisible-unicorn-following-you)), you have to do a little more work. Fortunately, it’s already been done (see Caledonian’s comment).
Silas, does the “null world” count?
Poke, consideration of the possibility of being in the matrix doesn’t necessarily require “an exceptionally weird sort of skepticism.” It might only require an “exceptionally weird” form of futurism.
Cumulant, I think the idea behind “infinite set atheism” is not that limits don’t exist, but that infinities are acceptable only as limits approached in a specified way. On this view, limits are not a consequence of infinite sets, as you contend; rather, only the limit exists, and the infinite set or sequence is merely a sloppy way of thinking about the limit.
Eliezer, I’ll second Matthew’s suggestion above that you write a post on infinite set atheism; it looks as if we don’t understand you.
I think I understand the motive for rejecting infinite sets (viz., that whenever you deal with infinities you get all sorts of ridiculously counterintuitive results—sums coming out different when you reärrange the terms, the Banach-Tarski paradox, &c., &c.), but I’m not sure you can give up infinite sets without also giving up the real numbers (as others have touched on above), which seems very wrong.
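The rearrangement pathology is easy to exhibit numerically. The alternating harmonic series 1 − 1/2 + 1/3 − 1/4 + … sums to ln 2, but take the very same terms in a different order (two positive terms for every negative one) and they sum to (3/2) ln 2 instead. A quick sketch:

```python
import math

# Alternating harmonic series in its usual order: 1 - 1/2 + 1/3 - ...
# (converges to ln 2)
usual = sum((-1) ** (n + 1) / n for n in range(1, 200001))

# The same terms, rearranged: two positive (odd-denominator) terms for
# every negative (even-denominator) term.
rearranged = 0.0
pos, neg = 1, 2  # next odd and even denominators to use
for _ in range(100000):
    rearranged += 1 / pos + 1 / (pos + 2) - 1 / neg
    pos += 4
    neg += 2

print(round(usual, 3))       # 0.693 (= ln 2)
print(round(rearranged, 3))  # 1.04  (= (3/2) ln 2)
```

This only happens for conditionally convergent series, of course; absolutely convergent series sum to the same value in any order.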
That’s easy, Cryonics. Free will doesn’t exist, except in the compatibilist sense.
Misanthropist, the error is in line two: b ln a = ln (a^b) holds only for positive a.
I think defenses of the subject’s choices by recourse to nonmonetary values miss the point. Anything can be rational with a sufficiently weird utility function. The question is, if subjects understood the decision theory behind the problem, would they still make the same choice? After seeing a valid argument that your preferences make you a money pump, you certainly could persist in your original judgment, by insisting that your feelings make your first judgment the right one.
But seriously—why?
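To make the money-pump argument concrete, here’s a toy sketch (the preference cycle, items, and fee are all invented for illustration): an agent with the intransitive preferences A ≻ B ≻ C ≻ A will pay a small fee for each “upgrade,” walk all the way around the cycle, and end up holding her original item, poorer.

```python
# Toy money pump: an agent with the intransitive preference cycle
# A > B > C > A accepts every "upgrade" trade for a small fee, and after
# three trades is back where she started, minus the fees.
# The cycle and the fee are hypothetical, purely for illustration.
upgrade_fee = 1
better_than = {"C": "B", "B": "A", "A": "C"}  # one step around the cycle

def run_pump(item, money, steps):
    for _ in range(steps):
        offered = better_than[item]
        # She strictly prefers `offered` to what she holds,
        # so she accepts the swap and pays the fee.
        item, money = offered, money - upgrade_fee
    return item, money

item, money = run_pump("C", 10, 3)
print(item, money)  # C 7 -- same item, three dollars poorer
```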
When we speak of an inherent utility of certainty, what do we mean by certainty? An actual probability of unity, or, more reasonably, something which is merely very much certain, like probability 0.999? If the latter, then there should exist a function expressing the “utility bonus for certainty” as a function of how certain we are. It’s not immediately obvious to me how such a function should behave. If probability 0.9999 is very much more preferable to probability 0.8999 than probability 0.5 is preferable to probability 0.4, then is 0.5 very much more preferable to 0.4 than 0.2 is to 0.1?
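One way to make the question concrete is to pick a candidate bonus function and just compute the gaps. Here is one purely hypothetical candidate (the functional form and parameters are invented for illustration, not a claim about how people actually value certainty):

```python
# A made-up "certainty bonus": a term that only grows sharply as the
# probability approaches 1.  Entirely hypothetical -- the point is just
# to make the paragraph's question computable for one candidate shape.
def certainty_bonus(p, k=0.1, sharpness=20):
    return k * p ** sharpness

def value(p, utility=1.0):
    return p * utility + certainty_bonus(p)

# The gaps the paragraph asks about:
print(value(0.9999) - value(0.8999))  # large: the bonus kicks in near 1
print(value(0.5) - value(0.4))        # ~0.1: the bonus is negligible here
print(value(0.2) - value(0.1))        # ~0.1: likewise
```

On this particular shape, the answer to the closing question would be no: the 0.9999-vs-0.8999 gap is special, but 0.5-vs-0.4 and 0.2-vs-0.1 come out essentially the same. Other shapes would answer differently, which is exactly why the question is worth asking.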
“Polling people to find if they will take a dust speck grants an external harm to the torture (e.g., mental distress at the thought of someone being tortured).”
Unknown, I’ll bite. While you do point out some extremely counterintuitive consequences of positing that harms aggregate to an asymptote, accepting the dust specks as being worse than the torture is also extremely counterintuitive to most people.
For the moment, I accept the asymptote position, including the mathematical necessity you’ve pointed out.
So far this discussion has focused on harm to persons. But there are other forms of utility and disutility. Here’s the intuition pump I used on myself: the person concept is not so atomic as to resist quantification—surely chimpanzees and dogs and fish and such must factor into humane utility calculations, even if they are not full persons. So are we then to prefer a universe with 3^^^3 banana slugs in it and no other life, over our own universe which contains (a much smaller number of) beings capable of greater feelings and thought? Absurd!
Perhaps in most realistic situations, the same experience happening to two different entities should count as almost exactly twice as good or bad as one instance of the experience. But I don’t think we should extend that intuition to these extreme cases with numbers like 3^^^3, else we must consider it an improvement when (say) Eliezer’s buggy AI decides to replace us with an incomprehensible number of slugs, each of which counts as one hundred-thousandth of a person.
At some point, the same experience repeated over and over again just doesn’t count.
“So I hereby retract my argument against voting, Pascal’s Mugging, and Pascal’s Wager. In the particular Mugging we discussed, there may have been anthropic reasons to make it proportionally improbable. But without such reasons, it should be accepted.”
I’m certainly glad you think so, Unknown, because I was just contacted by the Dark Lords of the Matrix. It turns out that we are living in a simulation. I have no idea what the physics of the world outside are like, but they’re claiming that unless you personally send $100 to SIAI right now, they’re going to put one dust speck in the eye of each of BusyBeaver(BusyBeaver(BusyBeaver(3^^^^^^^^^^^^^^^^^^^3))!!)! people.
Get out your checkbook, quickly, before it’s too late!
Unknown, I think the slugs are relevant. I should think most of us would agree that all other things being equal, a world with less pain is better than one with more, and a world with more intelligent life is better than one with less.
Defenders of SPECKS argue that the quality of pain absolutely matters: that the pain of no amount of dust specks could add up to that of torture. To do this, they must accept the awkward position that the badness of an experience partially depends on how many other people have suffered it. Defenders of TORTURE say, “Shut up and multiply.”
Defenders of HUMANS say that the quality of personhood absolutely matters: that the goodness of no amount of existing slugs could add up to that of existing humans. To do this, they must accept the awkward position that the goodness of an entity existing partially depends on what other kinds of entities exist. Hypothetical defenders of SLUGS say, “Shut up and multiply.”
Aren’t the situations similar?
It’s probably just that I’m stupid, but I don’t understand the anthropic solution to Pascal’s Mugging. Why does it matter that other people could have been asked? What if it were stipulated that the mugger threatens everyone?
Maybe I should actually study Kolmogorov complexity before trying to grapple with such matters.
Cryonics, if you’re going to live in a computer, there’s little point in actually sending the hardware to the restaurant; virtual-reality telepresence will do just as well. It would also be cheaper to just simulate the sensation of eating osso bucco. Personally, I’m planning on just taking BART, but to each her own. See you there!
Caledonian: “Truth does not require guarding.”
Doesn’t it, though? If a minority happens to know a truth, but they all keep quiet about it, what’s to keep the masses from remaining ignorant indefinitely? Of course (tautologically) the truth will still be true whether or not anyone knows it, but I get the sense that you were implying something less trivial.