somewhat confident of Omega’s prediction
51% confidence would suffice.
Two-box expected value: 0.51 × $1K + 0.49 × $1.001M = $491,000
One-box expected value: 0.51 × $1M + 0.49 × $0 = $510,000
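A quick sketch of that arithmetic (illustrative Python; the payoff amounts are the standard formulation's, and the break-even confidence follows from setting the two expected values equal):

```python
# p = your confidence that Omega predicted your choice correctly.

def two_box_ev(p: float) -> float:
    # Predicted correctly: box B is empty, you get only the $1K.
    # Predicted wrongly: box B is full too, so $1.001M in total.
    return p * 1_000 + (1 - p) * 1_001_000

def one_box_ev(p: float) -> float:
    # Predicted correctly: box B holds the $1M. Wrongly: nothing.
    return p * 1_000_000 + (1 - p) * 0

print(two_box_ev(0.51))  # 491000.0 -> $491,000
print(one_box_ev(0.51))  # 510000.0 -> $510,000

# Setting the two equal: 2_000_000 * p = 1_001_000, i.e. p = 0.5005,
# so 51% confidence does indeed suffice.
```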
it greatly changes the “facts” in your “case study”.
Actually, does it not add another level of putting Jesus on a pedestal above everyone else?
It changes the equation when comparing Jesus to John Perry (indicating that Jesus’ suffering was greatly heroic after all), but perhaps intensifies the “Alas, somehow it seems greater for a hero to have steel skin and godlike powers.”
(Btw I’m one of the abovementioned Christians. Just thought I’d point out that the article’s point is not greatly changed.)
Isn’t this over-generalising?
“religion makes claims, not arguments, and then changes its claims when they become untenable.” “claims are all religion has got” “the religious method of claiming is just ‘because God said so’”
Which religion(s) are you talking about? I have a hard time accepting that anyone knows enough to talk about all of them.
I tend to think that the Bible and the Koran are sufficient evidence to draw our attention to the Jehovah and Allah hypotheses, respectively. Each is a substantial work of literature, claiming to have been inspired by direct communication from a higher power, and each has millions of adherents claiming that its teachings have made them better people. That isn’t absolute proof, of course, but it sounds to me like enough to privilege the hypotheses.
what happens when the consequences grow large? Say 1 person to save 500, or 1 to save 3^^^^3?
If 3^^^^3 lives are at stake, and we assume that we are running on faulty or even hostile hardware, then it becomes all the more important not to rely on potentially-corrupted “seems like this will work”.
Also, there is the possibility of future scenarios arising in which Bob could choose to take comparable actions, and we want to encourage him in doing so. I agree that the cases are not exactly analogous.
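(For scale: 3^^^^3 is Knuth's up-arrow notation. A minimal sketch of the recursion it names, purely to fix the definition; nothing could ever actually evaluate it for these arguments:)

```python
def up_arrow(a: int, n: int, b: int) -> int:
    """Knuth's a ↑^n b: one arrow is a**b; each extra arrow iterates the last."""
    if b == 0:
        return 1
    if n == 1:
        return a ** b
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

# 3^^^^3 is up_arrow(3, 4, 3): already 3^^^3 = 3↑↑(3↑↑3) is a power tower
# far beyond anything physically computable.
```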
I know what a garage would behave like if it contained a benevolent God
Do you, though? What if that God was vastly more intelligent than us; would you understand all of His reasons and agree with all of His policy decisions? Is there not a risk that you would conclude, on balance, “There should be no ‘banned products shops’”, while a more knowledgeable entity might decide that they are worth keeping open?
We are told no such thing. We are told it’s a fair coin and that can only mean that if you divide up worlds by their probability density, you win in half of them. This is defined.
No, take another look:
in the overwhelming measure of the MWI worlds it gives the same outcome. You don’t care about a fraction that sees a different result, in all reality the result is that Omega won’t even consider giving you $10000, it only asks for your $100.
is the decision to give up $100 when you have no real benefit from it, only counterfactual benefit, an example of winning?
No, it’s a clear loss.
The only winning scenario is, “the coin comes down heads and you have an effective commitment to have paid if it came down tails.”
By making a binding precommitment, you effectively gamble that the coin will come down heads. If it comes down tails instead, clearly you have lost the gamble. Giving the $100 when you didn’t even make the precommitment would just be pointlessly giving away money.
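In expected-value terms the gamble is favourable ex ante, which is the whole pull of the problem (a minimal sketch, using the $10,000/$100 stakes from the scenario):

```python
# Ex-ante value of an effective precommitment to pay, before the flip:
p_heads = 0.5
ev_precommit = p_heads * 10_000 + (1 - p_heads) * (-100)  # = 4950.0
ev_no_commit = 0.0  # Omega pays nothing to the uncommitted
print(ev_precommit > ev_no_commit)  # True: the commitment wins ex ante
```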
This is an attempt to examine the consequences of that.
Yes, but if the artificial scenario doesn't reflect anything in the real world, then even if we get the right answer, what has it gained us? It's like being vaccinated against a fictitious disease: even if you successfully develop the antibodies, what good do they do?
It seems to me that the “beggars and gods” variant mentioned earlier in the comments, where the opportunity repeats itself each day, is actually a more useful study. Sure, it’s much more intuitive; it doesn’t tie our brains up in knots, trying to work out a way to intend to do something at a point when all our motivation to do so has evaporated. But reality doesn’t have to be complicated. Sometimes you just have to learn to throw in the pebble.
It appears the key issue in creating conflict is that the two groups must not be permitted to get to know each other and become friendly
Because then, of course, they might start attributing each other’s negative actions to environmental factors, instead of assuming them to be based on inherent evil.
A: Live 500 years and then die, with certainty.
B: Live forever, with probability 0.000000001%; die within the next ten seconds, with probability 99.999999999%.
If this were the only chance you would ever get to determine your lifespan, then choose B.
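To see why, put a finite value L (in years) on "forever"; B beats A in expectation only when those expected years exceed 500 (a sketch, with illustrative names):

```python
p_live = 1e-11  # 0.000000001% expressed as a probability

def expected_years_b(L: float) -> float:
    # The ten-second branch contributes a negligible ~3e-7 years.
    return p_live * L

break_even_L = 500 / p_live
print(break_even_L)  # 5e+13: B wins iff "forever" is worth > 50 trillion years
```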
In the real world, it would probably be a better idea to discard both options and use your natural lifespan to search for alternative paths to immortality.
Well, humans can build calculators. That they can’t be the calculators that they create doesn’t demand an unusual explanation.
Yes, but don’t these articles emphasise how evolution doesn’t do miracles, doesn’t get everything right at once, and takes a very long time to do anything awesome? The fact that humans can do so much more than the normal evolutionary processes can marks us as a rather significant anomaly.
Humans can do things that evolutions probably can’t do period over the expected lifetime of the universe.
This does raise the question: how, then, did an evolutionary process produce something so much more efficient than itself?
(And if we are products of evolutionary processes, then all our actions are basically facets of evolution, so isn’t that sentence self-contradictory?)
I didn’t mean to suggest that the existence of suffering is evidence that there is a God. What I meant was, the known fact of “shared threat → people come together” makes the reality of suffering less powerful evidence against the existence of a God.
I wouldn’t trust myself to accurately predict the odds of another repetition, so I don’t think it would unravel for me. But this comes back to my earlier point that you really need some external motivation, some precommitment, because “I want the 10K” loses its power as soon as the coin comes down tails.
It should also be possible to milk the scenario for publicity: “Our opponents sold out to the evil plutocrat and passed horrible legislation so he would bankroll them!”
I wish I were more confident that that strategy would actually work...
Sorry, but I’m not in the habit of taking one for the quantum superteam. And I don’t think that it really helps to solve the problem; it just means that you don’t necessarily care so much about winning any more. Not exactly the point.
Plus we are explicitly told that the coin is deterministic and comes down tails in the majority of worlds.
I think what really does my head in about this problem is this: although I may right now be motivated to make a commitment by the hope of winning the 10K, my commitment cannot rely on that motivation, because when it comes to the crunch, that possibility has evaporated and the associated motivation is gone. I can only make an effective commitment if I have something more persistent, like the suggested $1000 contract with a third party. Without that, I cannot trust my future self to follow through, because the reasons I currently have for wanting it to follow through will no longer apply.
MBlume stated that if you want to be known as the sort of person who’ll do X given Y, then when Y turns up, you’d better do X. That’s a good principle—but it too can’t apply, unless at the point of being presented with the request for $100, you still care about being known as that sort of person—in other words, you expect a later repetition of the scenario in some form or another. This applies as well to Eliezer’s reasoning about how to design a self-modifying decision agent—which will have to make many future decisions of the same kind.
Just wanting the 10K isn’t enough to make an effective precommitment. You need some motivation that will persist in the face of no longer having the possibility of the 10K.
A while ago, I came across a mathematics problem involving the calculation of the length of one side of a triangle, given the internal angles and the lengths of the other two sides. Eventually, after working through the trigonometry of it (which I have now forgotten, but could re-derive if I had to), I realised that it incorporated Pythagoras’ Theorem, but with an extra term based on the cosine of one of the angles. The cosine of 90 degrees is zero, so in a right-angled triangle, this extra term disappears, leaving Pythagoras’ Theorem as usual.
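From that description (two known sides, the included angle, solve for the third side), the general law in question is presumably the Law of Cosines:

$$c^2 = a^2 + b^2 - 2ab\cos C$$

When $C = 90^\circ$, $\cos C = 0$, so the extra term vanishes and $c^2 = a^2 + b^2$ remains.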
The older law that I knew turned out to be a special case of the more general law.