I did not expect anyone to accept my offer! Would you be willing to elaborate on why you’re doing so? Do you believe that you’re going to make money for some reason? Or do you believe that you’re going to lose money, and that doing so is the rational action? (The latter is what CDT predicts, the former just means you think you found a flaw in my setup.)
I would be fine with a loss-limiting policy, sure. What do you propose?
I’m not sure how to resolve your Newcomb’s Basilisk, that’s interesting. My first instinct is to point out that this agent and its game are strictly more complicated than Newcomb’s problem, so by Occam’s Razor they’re less likely to exist. But it’s easy to modify the Basilisk to be simpler than Omega; perhaps by saying that it tortures anyone who would ever do anything to win them $1,000,000. So I don’t think that’s actually relevant.
Isn’t this a problem for all decision theories equally? I could posit a Basilisk that tortures anyone who would two-box, and a CDT agent would still two-box.
I guess the rational policy depends on your credence that you’ll encounter any particular agent? I’m not sure, that’s a very interesting question. How do we determine which counterfactuals actually matter?
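To make the "depends on your credence" point concrete, here's a toy expected-value sketch in Python. Everything in it is made up for illustration: the two agents (Omega, which rewards one-boxers, and a Basilisk that punishes anyone who would one-box their way to $1,000,000), the dollar-equivalent payoffs, and the credences are all hypothetical numbers, not anything claimed in the discussion above.

```python
# Toy sketch: expected payoff of committing to a policy, given credences
# over which predictor-agent you actually end up facing.
# All agents, payoffs, and credences below are hypothetical.

PAYOFFS = {
    # PAYOFFS[agent][policy] = dollar-equivalent outcome
    "omega":    {"one-box": 1_000_000,   "two-box": 1_000},
    "basilisk": {"one-box": -10_000_000, "two-box": 0},  # torture modeled as a large negative
}

def expected_payoff(policy, credences):
    """Expected payoff of committing to `policy`, weighting each agent by your credence in meeting it."""
    return sum(p * PAYOFFS[agent][policy] for agent, p in credences.items())

# If you think Omega is far more likely than the Basilisk, one-boxing wins in
# expectation; shift enough probability mass to the Basilisk and it stops winning.
credences = {"omega": 0.99, "basilisk": 0.01}
for policy in ("one-box", "two-box"):
    print(policy, expected_payoff(policy, credences))
```

On these particular numbers the ranking of policies flips once the Basilisk's credence gets large enough relative to the size of its punishment, which is just the "which counterfactual agents actually matter" question restated as arithmetic.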