Alright, I’ll bite. As a CDT fan, I will happily take the 25 dollars. I’ll email you about setting up the experiment. If you’d like, we could have a third party hold the money in escrow?
I’m open to some policy that caps our losses if you don’t want to risk $2,050, or, conversely, one that pays a bonus if one of us wins by more than $5 or so.
As far as Newcomb’s problem goes, what if you encounter a superintelligent agent that says it tortures and kills anyone who would have oneboxed in Newcomb’s problem? This seems roughly as likely to me as encountering the Omega from the original problem. Do you still think the right thing to do now is to commit to oneboxing before you have any reason to think that commitment has positive EV?
I did not expect anyone to accept my offer! Would you be willing to elaborate on why you’re doing so? Do you believe that you’re going to make money for some reason? Or do you believe that you’re going to lose money, and that doing so is the rational action? (The latter is what CDT predicts, the former just means you think you found a flaw in my setup.)
I would be fine with a loss-limiting policy, sure. What do you propose?
I’m not sure how to resolve your Newcomb’s Basilisk, that’s interesting. My first instinct is to point out that that agent and game are strictly more complicated than Newcomb’s problem, so by Occam’s Razor they’re less likely to exist. But it’s easy to modify the Basilisk to be simpler than Omega; perhaps by saying that it tortures anyone who would ever do anything to win them $1,000,000. So I don’t think that’s actually relevant.
Isn’t this a problem for all decision theories equally? I could posit a Basilisk that tortures anyone who would two-box, and a CDT agent would still two-box.
I guess the rational policy depends on your credence that you’ll encounter any particular agent? I’m not sure, that’s a very interesting question. How do we determine which counterfactuals actually matter?
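The "depends on your credence" idea can be sketched as a credence-weighted EV over policies. Every number here (the credences, the payoffs, the use of a large negative value to stand in for torture) is a purely illustrative assumption:

```python
# Credence-weighted expected value of a policy across hypothetical agents.
# All numbers are illustrative placeholders, not claims about actual credences.

# Payoff of each policy if the named agent exists; torture is modeled
# (crudely) as a large negative number.
payoffs = {
    "onebox": {"omega": 1_000_000, "basilisk": -10_000_000},
    "twobox": {"omega": 1_000, "basilisk": 0},
}

# Prior credence that each agent is ever encountered.
credences = {"omega": 1e-6, "basilisk": 1e-6}

def policy_ev(policy: str) -> float:
    # Sum payoff * credence over every hypothetical agent.
    return sum(credences[a] * payoffs[policy][a] for a in credences)

best = max(payoffs, key=policy_ev)
```

The point is not which policy wins under these made-up numbers, but that the ranking flips as the relative credences (and the disutility assigned to torture) change, which is exactly why the question of which counterfactual agents "matter" is doing all the work.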