There’s no way I’m going to go around merrily adding simulated realities motivated by coin tosses with chemically induced repetitive decision making. That’s crazy. If I did that I’d end up making silly mistakes such as weighting the decisions that follow ‘tails’ as twice as important as those that follow ‘heads’. Why on earth would I expect that to work?
Because it generally does. Adding simulated realities motivated by coin tosses with chemically induced repetitive decision making gives you the right answer nearly always—and any other method gives you the wrong answer (give me your method and I’ll show you).
The key to the paradox here is not the simulated realities, or even the sleeping beauty part—it’s the fact that the number of times you are awoken depends upon your decision! That’s what breaks it; if that were not the case, nothing would fall apart. If, say, Omega were to ask you on the second day whichever way the coin lands (but not give you the extra £50 on the second day, to keep the same setup) then your expected payoffs are: accept, £20; refuse, £50/3, which is what you’d expect.
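The ‘tails counts twice’ weighting is just per-awakening frequency. A minimal Monte Carlo sketch (assuming the standard Sleeping Beauty setup: one awakening on heads, two on tails, with no dependence of awakenings on your decision) shows why:

```python
import random

def tails_awakening_fraction(trials=100_000, seed=0):
    """Simulate Sleeping Beauty: heads -> one awakening,
    tails -> two awakenings (memory erased between them).
    Returns the fraction of awakenings that follow tails."""
    rng = random.Random(seed)
    heads_awakenings = 0
    tails_awakenings = 0
    for _ in range(trials):
        if rng.random() < 0.5:   # heads: awoken once
            heads_awakenings += 1
        else:                    # tails: awoken twice
            tails_awakenings += 2
    return tails_awakenings / (heads_awakenings + tails_awakenings)

print(tails_awakening_fraction())  # close to 2/3
```

From the inside of any given awakening, tails-awakenings are twice as common as heads-awakenings, which is all the ‘twice as important’ weighting encodes—so long as the number of awakenings doesn’t depend on the decision being made.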
> Because it generally does. Adding simulated realities motivated by coin tosses with chemically induced repetitive decision making gives you the right answer nearly always
You have identified a shortcut that seems to rely on a certain assumption. It sounds like you have identified a way to violate that assumption and will hopefully not make that mistake again. There’s no paradox. Just lazy math.
> and any other method gives you the wrong answer (give me your method and I’ll show you).
Method? I didn’t particularly have a cached algorithm to fall back on. So my method was “Read problem. Calculate outcomes for cooperate and defect in each situation. Multiply by appropriate weights. Try not to do anything stupid and definitely don’t consider tails worth more than heads based on a gimmick.”
If you have an example where most calculations people make would give the wrong answer then I’d be happy to tackle it.
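That method—outcomes per situation, multiplied by the situations’ probabilities—can be sketched in a few lines. The payoff numbers below are hypothetical, chosen only to illustrate the weighting; the thread doesn’t spell out the full payoff table:

```python
# Hypothetical payoff table -- illustrative only, not the
# actual payoffs from the problem discussed above.
situations = {"heads": 0.5, "tails": 0.5}
payoff = {
    ("accept", "heads"): 40, ("accept", "tails"): 0,
    ("refuse", "heads"): 50, ("refuse", "tails"): 0,
}

def expected_value(action):
    # One weight per situation (the coin's probability) -- no
    # double counting of tails just because you would be awoken
    # twice in that branch.
    return sum(p * payoff[(action, s)] for s, p in situations.items())

for action in ("accept", "refuse"):
    print(action, expected_value(action))
```

The point of contention in the thread is precisely the weights: per-coin-flip weights (as here) and per-awakening weights agree only when the number of awakenings is independent of the decision.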
(Small style note: it’d be better if you quoted that text using a ‘>’, or used real italics, which in Markdown are underscores ‘_’ instead of tags.)