I’ve long felt that ‘This page is intentionally left blank’ is Epimenides-esque. :)
What smoofra said (although I would reverse the signs and assign torture and dust specks negative utility). Say there is a singularity in the utility function for torture (it goes to negative infinity). The utilities of many dust specks (each finite and negative) cannot add up to the utility of torture.
Hello. I think the Escalation Argument can sometimes be found on the wrong side of Zeno’s Paradox. Say there is negative utility to both dust specks and torture, where dust specks have finite negative utility. Both dust specks and torture can be assigned to an ‘infliction of discomfort’ scale that corresponds to a segment of the real number line. At minimal torture, there is a singularity in the utility function: it goes to negative infinity.
At any point on the number line corresponding to an infliction of discomfort between dust specks and minimal torture, the utility is negative but finite. The Escalation Argument begins in the torture zone, and slowly diminishes the duration of the torture. I believe the argument breaks down when the infliction of discomfort is no longer torture. At that point, non-torture has higher utility than all preceding torture scenarios. If it’s always torture, then you never get to dust specks.
RobinZ, perhaps my understanding of the term utility differs from yours. In finance & economics, utility is a scalar (i.e., a real number) function u of wealth w, subject to:
u(w) is non-decreasing; u(w) is concave downward.
(Negative) singularities to the left are admissible.
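For a concrete illustration (mine, not drawn from any particular text): $u(w) = \ln w$ satisfies both conditions, since $u'(w) = 1/w > 0$ and $u''(w) = -1/w^2 < 0$, and it has exactly this kind of left-hand singularity: $u(w) \to -\infty$ as $w \to 0^{+}$.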
I confess I don’t know about the history of how the utility concept has been generalized to encompass pain and pleasure. It seems a multi-valued utility function might work better than a scalar function.
I can envision a vector utility function u(x) = (a, b), where the ordering is on the first term a, unless there is a tie at negative infinity; in that case the ordering is on the second term b. b is −1 for one person-hour of minimal torture, and it scales multiplicatively with the number of persons, the duration, and a severity factor >= 1. (Pain infliction at less than 1 times minimal-torture severity is not considered torture.) This solves your second objection, and the other two are features of this ‘Just say no to torture’ utility function.
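Here is a minimal sketch in Python of the lexicographic comparison I have in mind; the function names and the particular numbers are mine, purely for illustration:

```python
import math

def torture_component(persons, hours, severity):
    """b-term: -1 per person-hour of minimal torture, scaled by a severity factor >= 1."""
    return -persons * hours * severity

def better(u1, u2):
    """Return True if outcome u1 = (a1, b1) is strictly preferred to u2 = (a2, b2)."""
    a1, b1 = u1
    a2, b2 = u2
    if a1 == a2 == -math.inf:   # both are torture outcomes: fall through to the b-term
        return b1 > b2
    return a1 > a2

# Any finite-a outcome (e.g. an enormous number of dust specks) beats any torture outcome.
specks  = (-1e100, 0.0)                                           # huge but finite disutility
torture = (-math.inf, torture_component(1, 50 * 365 * 24, 1.0))   # 50 years of minimal torture
print(better(specks, torture))   # True
```

Among torture outcomes (tied at a = −∞) the comparison falls through to b, so shorter or milder torture is still preferred to longer or harsher torture.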
Quote:
- any choice with a nonzero probability of leading to torture gains infinite disutility,
- any torture of any duration has the same disutility (infinite),
- the criteria for torture vs. non-torture become rigid: something which is almost torture is literally infinitely better than something which is barely torture.
But every choice has a nonzero probability of leading to torture.
In real life or in this example? I don’t believe this is true in real life.
Proof left to the reader?
If I am to choose between getting a glass of water or a cup of coffee, I am quite confident that neither choice will lead to torture. You certainly cannot prove that either choice will lead to torture. Absolute certainty has nothing to do with it, in my opinion.
I believe you should count choices that can measurably change the probability of torture. If you can’t measure a change in the probability of torture, you should count that as no change. I believe this view more closely corresponds to current physical models than the infinite butterflies concept.
Changes that are small enough to be beyond Heisenberg’s epistemological barrier cannot in principle be shown to exist. So, they acquire Easter Bunny-like status.
Changes that are within this barrier but beyond my measurement capabilities aren’t known to me, and utility is an epistemological function: I can’t measure it, so I can’t know about it, so it doesn’t enter into my utility.
I think a bigger problem is the question of enduring a split second of torture in exchange for a huge social good. This sort of thing is ruled out by that utility function.
The equation involving Planck’s constant in the following link is not in dispute, and that equation does constitute an epistemological barrier:
http://en.wikipedia.org/wiki/Uncertainty_principle
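For reference, that relation is the position-momentum uncertainty principle, $\Delta x \, \Delta p \ge \hbar/2$, where $\hbar = h/2\pi$ is the reduced Planck constant.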
Everyone has their own utility function (whether they’re honest about it or not), I suppose. Personally, I would never try to place myself in the shoes of Laplace’s Demon. They’re probably those felt pointy jester shoes with the bells on the end.
I agree about the torture for a few seconds.
A utility function is just a way of describing the ranking of desirability of scenarios. I’m not convinced that singularities on the left can’t be a part of that description.
Alicorn & RobinZ: I talked about ontological parsimony.
In the sense of subtracting an angel (causality) from the head of a pin (our surfboard)? :)
To me, a utility function is a contrivance. So it’s OK if it’s contrived. It’s a map, not the territory, as illustrated above.
I take someone’s answer to this question at their word. When they say that no number of dust specks equals torture, I accept that as a datum for their utility function. The task is then to contrive a function which is consistent with that.
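One way to contrive such a function (a sketch with made-up constants $A, N > 0$): give each additional speck diminishing marginal disutility so the total stays bounded, e.g. $U_{\text{specks}}(n) = -A\,(1 - e^{-n/N})$, which never falls below $-A$ however large $n$ gets, while torture is assigned any utility below $-A$ (or $-\infty$, as above). Then no number of dust specks ever sums to the disutility of torture, consistent with that datum.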
This appears to be an impressive series of articles. Kudos on writing it.
The impression that I get is that the measurement problem is still common to all QM interpretations. The question is not so much exactly when decoherence occurs, but approximately when it occurs. It occurs whenever there is a measurement, and possibly (rarely) at other times, although there is no experimental evidence for the latter.
(comment edited): Consider an experiment that illustrates the watchdog effect. A radioactive molecule has a half-life of an hour. The molecule is measured repeatedly, once per second, with a resulting delay in the decay of the molecule, consistent with the hypothesis that the half-life is reset upon each measurement.
This experiment seems to show that upon measurement, something happens, whether it be called collapse of the wave function or XYZ. And, if there is no measurement, that ‘something’ does not happen.
If you think that all worlds are just as real as our world, then under the MWI interpretation you can say that the multiverse is intact. However, the series of measurements has nudged our world to a part of the multiverse where the molecule decays later than it (probably) would have.
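A toy numerical sketch of the watchdog effect (my own illustration, not a model of any specific experiment): it assumes the short-time survival probability is quadratic rather than exponential, which is the regime the effect relies on; a purely exponential, memoryless decay law would be unaffected by intermediate measurements.

```python
def survival_with_measurements(total_time, n_measurements, tau_z):
    """Survival probability when a projective measurement is made n_measurements
    times at equal intervals over total_time, assuming the quadratic short-time
    law P(t) ~ 1 - (t / tau_z)**2 between measurements."""
    dt = total_time / n_measurements
    per_interval = max(0.0, 1.0 - (dt / tau_z) ** 2)
    return per_interval ** n_measurements

tau_z = 10.0  # hypothetical 'Zeno time', arbitrary units
for n in (1, 10, 100, 1000):
    print(n, survival_with_measurements(total_time=5.0, n_measurements=n, tau_z=tau_z))
# Survival climbs toward 1 as measurements become more frequent: each measurement
# effectively resets the clock, so decay is delayed, as described above.
```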
I think this is a good heuristic.
However, another possibility is that either you or your discussant is unduly influenced by an informational cascade.
I agree. A perfect predictor is either Laplace’s Demon or a supernatural being. I don’t see why either concept is particularly useful for a rationalist.
My difficulty is in understanding why the concept of a perfect predictor is relevant to artificial intelligence.
Also, 2-boxing is indicated by inductive logic based on non-Omega situations. Given the special circumstances of Newcomb’s problem, it would seem unwise to rely on that. Deductive logic leads to 1-boxing.
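A minimal expected-value sketch in Python, using the usual Newcomb payoffs ($1,000 in the transparent box, $1,000,000 in the opaque box if one-boxing was predicted) and a hypothetical predictor accuracy; the figures are illustrative, not from the post, and this is the straightforward conditional-expectation calculation:

```python
def expected_one_box(accuracy):
    # Take only Box B: full prize if the predictor foresaw this, nothing otherwise.
    return accuracy * 1_000_000 + (1 - accuracy) * 0

def expected_two_box(accuracy):
    # Take both boxes: $1,000 plus the prize only if the predictor was wrong.
    return accuracy * 1_000 + (1 - accuracy) * 1_001_000

for acc in (1.0, 0.99, 0.9):
    print(acc, expected_one_box(acc), expected_two_box(acc))
# With a perfect (or even merely reliable) predictor, one-boxing dominates in
# expectation, matching the point that deductive logic leads to 1-boxing.
```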
Re: Groupthink symptom #1 - illusions of invulnerability or infallibility
The fact that the subject matter of cryonics is about an extended lifespan or second lifespan does not automatically confer this symptom of groupthink.
An example of groupthink often given is the decision process of the Bush Administration that led to the invasion of Iraq in 2003. Much of the information used to reach that decision was called a ‘slam dunk’ pre-invasion, but ultimately proved spurious or unverifiable.
The ‘delayed choice’ experiments of Wheeler & others appear to show a causality that goes backward in time. So, I would take just Box B.