Giving Newcomb’s Problem to Infosec Nerds
Newcomb-like problems are pretty common thought experiments here, but I haven’t seen written up many of my favorite reactions I’ve gotten when discussing them in person with people. Here’s a disorganized collection:
I don’t believe you can simulate me (“seems reasonable, what would convince you?”) -- <describes an elaborate series of expensive-to-simulate experiments>. This never ended with them picking one box or two, just designing ever more elaborate and hard-to-simulate scenarios, like predicting the output of cryptographically secure hashes of random numbers from chaotic or quantum sources.
Fuck you for simulating me. This is one of my favorites: upon realizing that they must consider the possibility that they are currently in an Omega simulation, the person immediately does everything they can to be expensive and difficult to simulate. Again, this didn’t result in picking one box or two, but I really enjoyed the “Spit in the face of God” energy.
Don’t play mind games with carnies. Setting aside the whole “omniscience” thing, Omega coming up to you to offer you a deal involving money has very “street hustler scammer” energy. A good heuristic for not getting conned is to stick to simple, strong priors and not update too strongly on information the other party presents. This person two-boxed, which seems reasonable as the fast response of “people who offer me deals on the street are trying to scam me”.
There are probably others I’m forgetting, but these were the ones I enjoyed most.
The scam might make more sense if the money is fake.
Quite a lot of scams involve money that is fake. This seems like another reasonable conclusion.
Like, every time I imagine myself in this sort of situation, almost all of the prior mass goes to “you’re lying”.
I have spent an unreasonable (and yet unsuccessful) amount of time trying to sketch out how to present omega-like simulations to my friends.
That seems reasonable -- I don’t think such predictions are that feasible.
For reference, my response would generally be a combination of these, though for somewhat different reasons. Namely: the parity[1] of the first bitcoin block mined at least 2 minutes[2] after the question was asked decides whether to 2box or 1box[3]. Why? A combination of a few things:
It’s checkable after the fact.
Memorizing enough details to check it after the fact is fairly doable.
A fake-Omega cannot really e.g. just selectively choose when to ask the question.
It’s relatively immutable.
It pulls in sources of randomness from all over.
It’s difficult to spoof without either a) being detectable or b) presenting abilities that rule out most ‘mundane’ explanations.
Sure, a fake-Omega could, for instance, mine the next block themselves
...but either a) the fake-Omega has broken SHA, in which case yikes, or b) the fake-Omega has a significant amount of computational resources available.
Yes, something like the parity of a different secure hash (or e.g. an HMAC, etc.) of the block could be better, since e.g. someone could have built a miner that nondeterministically fails to properly calculate a hash depending on how many ones are in the result, but meh. This is simple and good enough, I think.
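As a sketch of the rule described above -- assuming “parity” means the parity of the block hash read as an integer, with odd meaning 1box (matching the footnote’s “odd, hence 1box”). The hash string below is a made-up placeholder, not a real block, and the rehashed variant is just my reading of the HMAC/secure-hash caveat:

```python
import hashlib

def choose_boxes(block_hash_hex: str) -> str:
    """Direct version: parity of the block hash itself decides the choice."""
    return "1box" if int(block_hash_hex, 16) % 2 else "2box"

def choose_boxes_rehashed(block_hash_hex: str) -> str:
    """Hardened variant per the caveat above: take the parity of a fresh
    SHA-256 of the block hash, so a miner subtly biased toward hashes of
    one parity gains no control over the outcome."""
    digest = hashlib.sha256(bytes.fromhex(block_hash_hex)).digest()
    return "1box" if digest[-1] % 2 else "2box"

# Placeholder hash ending in an odd hex digit -> "1box"
print(choose_boxes("00000000000000000005a1b2c3d4e5f6a914ff87"))  # -> 1box
```

The parity of a hex integer is just the parity of its final digit, which is what makes the rule easy to memorize and to check later from any block explorer.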
[2] Or rather, long enough that any blocks already mined have had a chance to propagate.
[3] In this case https://blockexplorer.one/bitcoin/mainnet/blockId/720944, which has a hash ending in …a914ff87, hence odd, hence 1box.
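A quick sanity check of that footnote: since the parity of a hex number depends only on its final digit, the quoted hash suffix alone is enough to confirm the call.

```python
# Parity of a hex integer depends only on its last hex digit, so the
# quoted suffix "a914ff87" suffices to confirm "odd, hence 1box".
suffix = "a914ff87"
print("odd" if int(suffix, 16) % 2 else "even")  # -> odd
```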