You still don’t answer the question. All those links offer is an argument that, if all times are treated as equal, actions now will be the same regardless of the final goal. You don’t say what goals you want to move toward.
As for that book… Wow.
First sentences of Chapter 8 of that book: We are going whence we came. We are evolving toward the Moral Society, Teilhard’s Point Omega, Spinoza’s Intellectual Love of God, the Judaeo-Christian concept of union with God. Each of us is a holographic reflection of the creativity of God.
I don’t even know where to start, on either topic, so I won’t.
I don’t know if this story has ever been written, but you can imagine a Devil who follows someone around, making their life miserable, solely by offering them options which are never actually taken—a “deal with the Devil” story that only requires the Devil to have the capacity to grant wishes, rather than ever granting a single one.
FWIW (very little), this is exactly how I experience shows like “Ah My Goddess!”. The main character routinely refuses to take advantage of a situation that I most certainly would. I can’t watch stuff like that.
Richard: You didn’t actually answer the question. You explained (erm, sort of) why you think Fun isn’t important, but you haven’t said what you think is. All you’ve done is use the word “important” as though it answered the question: “In the present day, a human having fun is probably more useful toward the kinds of ends I expect to be important than a human in pain.” Great: what kinds of ends do you expect to be important?
Side comment: Subitizing (the not-counting thing; see http://en.wikipedia.org/wiki/Subitizing_and_counting ) has been rather extensively studied. I can’t find good references, but it is apparently quite amenable to expertise. I have a friend who worked in inventory auditing (i.e. counting stuff in warehouses). He got into the 7-8 range. ISTR hearing in my psych classes of factory workers who got as high as 20 in their (hyper-specialized) domains.
I’m signed up, and I consider it one of my better decisions.
I use ACS, for what it’s worth; I haven’t seen it mentioned here.
I’m totally missing the “N independent statements” part of the discussion; that seems like a total non-sequitur to me. Can someone point me at some kind of explanation?
Before I get going, please let me make clear that I do not
understand the math here (even Eliezer’s intuitive Bayesian paper
defeated me on the first pass, and I haven’t yet had the courage to
take a second pass), so if I’m Missing The Point(tm), please tell me.
It seems to me that what’s missing is talking about the probability
of a given level of resourcefulness of the mugger. Let me ’splain.
If I ask the mugger for more detail, there are a wide variety of
different variables that determine how resourceful the mugger claims
to be. The mugger could, upon further questioning, reveal that all
the death events are the same entity being killed in the same way,
which I call one death; given the unlikelihood of the mugger telling
the truth in the first place, I’d not pay. Similarly, the mugger
could reveal that the deaths, while of distinct entities, happen one
at a time, and may even include time for the entities to grow up and
become functioning adults (i.e. one death every 18 years), in which
case I can almost certainly put the money to better use by giving it to charity instead.
On the other end of the scale, the mugger can claim infinite
resources, so that they can complete the deaths (of entirely distinct
entities, which have lives, grow up, and then are slaughtered) in an
infinitely small amount of time. If the mugger does so, they don’t
get the money, because I assign a vanishingly small value to the
probability of the mugger having infinite resources. Yes, the
mugger may live in a magical universe where having infinite
resources is easy, but you don’t get a
get-out-of-probability-assignment-free card because you say the word
“magic”; I still have to base my probability assignment of your
claims on the world around me, in which we don’t yet have the
computing power to simulate even one human in real time (ignoring
the software problem entirely).
Between these two extremes is an entire range of possibilities. The
important part here is that the probability I assign to “the mugger
is telling the truth” is going to decrease exponentially as their
claimed resources increase. Until the claimed rate of birth, growing, and
dying exceeds the rate of deaths we already have here on Earth, I
don’t care, because I can better spend the money here. After we
reach that point (~150K per day), I don’t care, because my
probability is something like 1/O(2^n) (Computer Science big-O
there; sorry, that’s my background), where n is the multiple of
computer resources claimed over “one mind in realtime”. So n is,
umm, 150K deaths per day ≈ 54.75 million deaths per year, times 18
years for each person, so I think n is roughly 986 million. That’s
not even counting the probability discount due to the
ridiculousness of the whole scenario.
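For what it’s worth, that arithmetic can be checked in a few lines. This is just a recomputation of the figures above, assuming ~150K deaths/day on Earth, 18 simulated years per entity, and “one mind in realtime” as the unit of computing resources:

```python
# Rough check of the n above. Assumes ~150K deaths/day on Earth,
# 18 simulated years per entity, and "one mind in realtime" as the
# unit of computing resources.
deaths_per_day = 150_000
deaths_per_year = deaths_per_day * 365  # ~54.75 million

# To sustain Earth's death rate while giving each entity a full
# 18-year life, the mugger must run this many minds concurrently:
n = deaths_per_year * 18  # ~985.5 million

print(f"deaths/year = {deaths_per_year:,}, n = {n:,}")
```

So even matching Earth’s ordinary death rate already requires resources on the order of a billion realtime minds.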
The point here is that I don’t care about the 3^^^^3 number; I only
care about the claimed deaths per unit time, how that compares to
the number of people currently dying on Earth (on whom I know I
can spend the $5 well) and the claimed resourcefulness of the
mugger. By the time we get up to where the 3^^^^3 number matters,
i.e. “I can kill one one-millionth of 3^^^^3 people every realtime
year”, my probability assignment for their claimed resourcefulness
is so incredibly low (and so incredibly lower than the numbers they
are throwing at me) that I laugh and walk away.
There is not, as far as I can tell, a sweet spot where the number of
lives I might save by giving the mugger the $5 exceeds the number of
people currently dying on Earth by enough to offset the
ridiculously low probability I’d be assigning to the mugger’s
resourcefulness. I’d rather give the $5 to SIAI.
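To put a toy number behind that “no sweet spot” claim: lives at stake grow only linearly in n (one 18-year life per mind-unit per year is my assumed rate, and the exact constant doesn’t matter), while my credence falls off like 1/2^n, so the expected lives saved peak at trivially small n and collapse after that:

```python
from fractions import Fraction

# One realtime mind-unit produces one death per 18 simulated years,
# so n mind-units put n/18 lives per year at stake (my assumption),
# while my credence in the claim falls off like 1/2**n.
def expected_lives(n: int) -> Fraction:
    return Fraction(n, 18) * Fraction(1, 2**n)

# Scan a range of claimed resource multiples for the best tradeoff:
best = max(range(1, 200), key=expected_lives)
print(best, float(expected_lives(best)))  # the peak sits at tiny n
```

The exponential in the denominator swamps the linear numerator immediately, which is the whole point: no amount of claimed resources makes the expected value climb back up.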
The experimental evidence for a purely genetic component of 0.6-0.8 is overwhelming
Erm. 0.6-0.8 what?