I don’t think this solves the problem, though it is an important part of the picture.
The problem is, which conditional commitments do you make? (A conditional commitment is just a special case of a commitment.) “I’ll retaliate against A by doing B, unless [insert list of exceptions here].” Thinking of appropriate exceptions is important mental work, and you might not think of all the right ones for a very long time. Moreover, while you are thinking about which exceptions you should add, you might accidentally realize that such-and-such type of agent will threaten you regardless of what you commit to, and then, if you are a coward, you will “give in” by making an exception for that agent. The problem persists, in more or less exactly the same form, in this new world of conditional commitments. (Which, again, are just special cases of commitments, I think.)
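As a toy illustration of that structure (everything here is invented for the example, not anyone’s actual proposal): a conditional commitment can be modeled as a retaliation rule plus a growing list of exception predicates, and “giving in” is then just appending an exception keyed to the threatener’s type.

```python
# A toy sketch (illustrative only; all names are invented): a conditional
# commitment as a trigger, a response, and a growing exception list.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ConditionalCommitment:
    trigger: Callable[[str], bool]    # which actions count as provocation
    response: str                     # the retaliation B
    exceptions: List[Callable[[str], bool]] = field(default_factory=list)

    def respond(self, action: str, agent_type: str) -> str:
        # Retaliate unless some exception predicate covers this agent type.
        if self.trigger(action) and not any(exc(agent_type) for exc in self.exceptions):
            return self.response
        return "no retaliation"

# The commitment: retaliate against threats...
commitment = ConditionalCommitment(
    trigger=lambda action: action == "threat",
    response="retaliate",
)

# ...unless the threatener is of a type that threatens regardless of our
# commitments. Appending this exception is exactly the "giving in" move:
# it rewards commitment-insensitive threateners.
commitment.exceptions.append(lambda agent_type: agent_type == "threatens-regardless")

print(commitment.respond("threat", "ordinary"))              # retaliate
print(commitment.respond("threat", "threatens-regardless"))  # no retaliation
```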
I concur in general, but:

you might accidentally realize that such-and-such type of agent will threaten you regardless of what you commit to, and then, if you are a coward, you will “give in” by making an exception for that agent.
This seems like a problem for humans and badly-built AIs. Nothing that reliably one-boxes should ever do this.
EDT reliably one-boxes, but EDT would do this.
Or do you mean one-boxing in Transparent Newcomb? Then your claim might be true, but even then it depends on how seriously we take the “regardless of what you commit to” clause.
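To make the EDT claim concrete, here is a minimal worked sketch (standard Newcomb payoffs; the threat-game numbers are made up for illustration). EDT maximizes expected utility conditional on each action at the moment of choice, which gets one-boxing right but gives in once a commitment-insensitive threatener has already issued its threat:

```python
# Minimal sketch of why EDT one-boxes yet gives in to threats. The Newcomb
# payoffs are the standard ones; the threat-game numbers are invented.
ACCURACY = 0.99  # assumed accuracy of the predictor in Newcomb's problem

def edt_newcomb() -> str:
    # Conditioning on one-boxing makes the opaque box almost surely full;
    # conditioning on two-boxing makes it almost surely empty.
    eu_one_box = ACCURACY * 1_000_000 + (1 - ACCURACY) * 0
    eu_two_box = (1 - ACCURACY) * 1_001_000 + ACCURACY * 1_000
    return "one-box" if eu_one_box > eu_two_box else "two-box"

def edt_threat(demand: float = 100.0, punishment: float = 1_000.0) -> str:
    # A commitment-insensitive threatener has already issued the threat, so
    # the agent's choice carries no news about whether it gets threatened.
    # Conditional on each action, EDT just compares the remaining losses.
    eu_give_in = -demand
    eu_resist = -punishment
    return "give in" if eu_give_in > eu_resist else "resist"

print(edt_newcomb())  # one-box
print(edt_threat())   # give in
```

The contrast is the point: the same conditional-on-action rule that produces one-boxing also produces giving in.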
True, sorry, I forgot the whole set of paradoxes that led up to FDT/UDT. I mean something like… “this is equivalent to the problem that FDT/UDT already has to solve anyway.” Allowing you to make exceptions doesn’t make your job harder.