Because we want to design decision theories that are resistant to being blackmailed, but that can get involved in negotiations. If there’s no meaningful difference between the two...
A difference between the two scenarios you present is that in the first, the threat makes the blackmailed party worse off than if the threat had never been made, whether they yield to the threat or refuse. In the second, this is not true: rejecting the offer to sell the McGuffin leaves them in exactly the same position as if the offer had never been made.
I believe that is the difference you are looking for: making someone an offer they cannot refuse vs. making them an offer they can refuse.
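That criterion can be put in a small sketch (a toy model; the payoff numbers and the `classify` helper are made up for illustration, not anything from the discussion itself):

```python
# Classify a proposal from the target's point of view, relative to the
# baseline world in which the proposal was never made (toy payoffs).

def classify(baseline, payoff_if_yield, payoff_if_refuse):
    if payoff_if_yield < baseline and payoff_if_refuse < baseline:
        return "threat"  # no response recovers the status quo
    if payoff_if_refuse == baseline:
        return "offer"   # refusal restores the status quo exactly
    return "unclear"

# Blackmail: yielding costs the ransom, refusing costs the scandal.
print(classify(0, -10, -100))  # prints "threat"
# McGuffin sale: refusing leaves you exactly where you started.
print(classify(0, 5, 0))       # prints "offer"
```

On this test the blackmail is a threat whichever way the target responds, while the McGuffin sale is a plain offer.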
Precommitting to rejecting an offer makes them worse off than if the precommitment had never been made. The difference seems more a question of degree than of kind.
And we can bring in some extra connotations to make the first scenario better and the second worse. For instance, the blackmail could be “spend an evening talking over this with me, then I’ll give you back the letters”, while the McGuffin could be a cure for a universal plague or something:
“I own this bauble that can save your civilization, which will otherwise die. I precommit to rejecting any offer you make for this bauble that is less than 99.9% of the value of your civilization (so about 0.1% of your people will survive). You are, of course, at liberty to refuse.”
Precommitment should not be a feature of rational agents; I think that if we can define blackmail in a land of no precommitments, we have a pretty good definition of blackmail.
For instance, the blackmail could be “spend an evening talking over this with me, then I’ll give you back the letters”
Still formal blackmail. If the blackmailer would incur a cost from publishing the letters, then the blackmailer would not bother in the world where the blackmailee simply ignores such threats.
“I own this bauble that can save your civilization, which will otherwise die. I precommit to rejecting any offer you make for this bauble that is less than 99.9% of the value of your civilization (so about 0.1% of your people will survive). You are, of course, at liberty to refuse.”
This is a problem of dividing the gains from trade. In general we still haven’t solved what is a good Schelling point for a fair division.

Suppose a fair division would be to pay 50% of the value of your civilization, since the cost to the McGuffin-seller is negligible. Then you tell the McGuffin-seller that you’ve precommitted not to pay more than 10% of the value of your civilization, exhibiting changed source code or something to make the precommitment credible. If the seller is a CDT maximizer, they say “Oh well” and sell at 10%, which is the maximizing action from their perspective, since as a CDT agent they are ideologically committed not to take into account that they have caused themselves to be the target of this precommitment by being the sort of agent who would say “Oh well” and sell.

It seems quite likely that if there is a 50%-of-value Schelling-point ‘fair division’ here, then the rational action is not to accept any trade above 50% plus epsilon (as the buyer), or below 50% minus epsilon (as the seller). This may or may not end up being the same problem as formal blackmail in a completed theory, but it shares some of the same structure, where you can exploit the living daylights out of CDT maximizers by moving logically first.
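The exploit described here can be sketched numerically (the 100-unit surplus, the 50% ‘fair share’, and both seller strategies are hypothetical stand-ins, not anything specified in the comment):

```python
# Toy model of a buyer who precommits to offering only 10% of the surplus.
# A CDT seller takes the offer as a fixed fact and compares "accept 10%"
# with "get nothing"; a seller committed to rejecting unfair splits makes
# the lowball precommitment unprofitable in the first place.

SURPLUS = 100.0   # total gains from trade
FAIR_SHARE = 0.5  # assumed Schelling point for a fair division

def cdt_seller(offer_fraction):
    # CDT: any positive payoff beats zero, given the offer as fixed.
    return "accept" if offer_fraction * SURPLUS > 0 else "reject"

def threshold_seller(offer_fraction, threshold=FAIR_SHARE):
    # An agent committed to rejecting anything below the fair split.
    return "accept" if offer_fraction >= threshold else "reject"

for seller in (cdt_seller, threshold_seller):
    response = seller(0.10)  # the buyer's precommitted lowball
    payoff = 0.10 * SURPLUS if response == "accept" else 0.0
    print(seller.__name__, response, payoff)
```

Against the threshold seller the lowball precommitment nets the buyer nothing, so moving logically first only pays off against the CDT agent.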
Precommitment should not be a feature of rational agents; I think that if we can define blackmail in a land of no precommitments, we have a pretty good definition of blackmail.
In the absence of precommitments, you can have options available that would play the same role.
This is a problem of dividing the gains from trade.
I haven’t seen any theory that clearly divides what is division of gain from trade from what isn’t. When two agents sit down with each other, and they both have the possibility of having built, or not having built, various objects that would have various positive or negative values for the other… Where does the blackmail or aggressive negotiations end, and the dividing begin?
Precommitment should not be a feature of rational agents; I think that if we can define blackmail in a land of no precommitments, we have a pretty good definition of blackmail.
Can you elaborate on this (especially the first part)?
Precommitting to rejecting an offer makes them worse off than if the precommitment had never been made.
It does not make them worse off than if the negotiation had never been opened.
I don’t see where the difficulty is in distinguishing “I have $OBJECT for sale at $MONEY. Interested?” and “Nice operation you’ve got here, you wouldn’t want anything to happen to it, would you? Just see that you do some business with me from time to time, and I can guarantee there won’t be any trouble.”
I dare say there are some radicals who see all free exchange of value as an act of violence, each party viciously withholding the good they could do for the other in order to extort a price from them, and in a virtuous society all would freely, selflessly work for the benefit of everyone but themselves without even the very idea of a return for their labour. But the world contains all manner of madness.
I don’t see where the difficulty is in distinguishing “I have $OBJECT for sale at $MONEY. Interested?” and “Nice operation you’ve got here, you wouldn’t want anything to happen to it, would you? Just see that you do some business with me from time to time, and I can guarantee there won’t be any trouble.”
The difference there seems to be the default (or disagreement) point: the assumed zero of the transaction. Then any deviation from the default is judged by whether it’s positive or negative. It’s the difference between “pay your taxes, and the police will protect you from criminals” and “pay your taxes, and the police won’t smash up your shop” (or “I can sell you this for money”, vs “I can sell you back what we stole from you, for money”). In all these cases, paying or not paying implies the same futures, but it’s different because of the disagreement point.
But establishing disagreement points is a tricky task, and a contentious one. I was hoping there was a difference between threats and offers that didn’t involve the disagreement point.
The difference there seems to be the default (or disagreement) point: the assumed zero of the transaction.
The zero is not assumed, but objective: the state of things before the negotiation. The blackmailer specifically intends to remove the status quo as an option; the shopkeeper merely adds an option to it. Both parties know exactly what the status quo was. It is not a default, or an assumption, or anything but an objective fact that everyone involved is agreed about. An agreement point, not a disagreement point.
It is not a default, or an assumption, or anything but an objective fact that everyone involved is agreed about.
People are rarely in agreement about what the disagreement point is. Especially if the various entities have had a long-standing relationship with some changes.
I can’t fit this to any of the examples you gave. The Baron comes to the Countess with a threat to publish their correspondence: it is clear to both that he has unilaterally introduced a change to the status quo, to the Countess’ detriment. A McGuffin owner comes to a McGuffin collector offering a McGuffin at a fixed price: it is clear to both sides that this has introduced a new option, taking no existing options away. Everyone is in agreement. What situations are you thinking of that make “rare” the clarity of this distinction between threatening to injure and not threatening to injure?
And what about the variant when the winged sandal was going to be given to charity, but the Baron rushed in to prevent that, arriving just in time?
Here it’s clear that the Baron still has legal ownership (just!), but that it’s the Baron who’s changing the status quo.
You could argue that a lot of law is about specifying what the disagreement point is (generally through ownership rules and contract law), but that doesn’t mean that our legal system’s choice of disagreement point comes from any intrinsic definition that makes sense (see the difficulty with intellectual property).
I rather lost interest in the winged sandal story, but for all the attempted complications, it remains quite clear. The Countess never owned it, and the Baron wants to secure his ownership first before offering it for sale. Whatever this is, it is not blackmail. Engrossing, forestalling, regrating, badgering, or cornering, perhaps, which aren’t even illegal any more in English law.
A lot of law is about specifying exact rules. The difficulty of doing so, precisely enough to decide cases, does not imply that there is anything philosophically problematic.
The zero is not assumed, but objective: the state of things before the negotiation.
If before the negotiation, a landslide was already closing in on your (uninsured) country house, then after the negotiation the “state of things” is going towards the negative, for reasons unrelated to the negotiation. The question here is about the supposed distinction between that landslide and your opponent’s decision algorithm.
Define a pre-commitment by you to be blackmail if it makes me wish that I’d pre-pre-committed (and, of course, let you know that I’d pre-pre-committed) to not do the thing that you want in the event that you made that pre-commitment.
How does that do?
EDIT: Thinking about it more, this problem is just division of gains from trade. I’ll explain that more in a top-level comment.
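The definition above can be rendered as a toy function (every payoff, and the `proposer_gains_vs_known_refuser` flag, are illustrative assumptions, not part of the proposal):

```python
# Sketch of the proposed test: your pre-commitment counts as blackmail iff,
# had I been a *known* refuser (my pre-pre-commitment), you would not have
# made it, and my best response to it now is still worse than that baseline.

def is_blackmail(baseline, target_if_yield, target_if_refuse,
                 proposer_gains_vs_known_refuser):
    # If committing still pays against a known refuser, my pre-pre-commitment
    # would not have deterred you, so I gain nothing by wishing for it.
    if proposer_gains_vs_known_refuser:
        return False
    # Otherwise the pre-pre-commitment would have kept me at the baseline;
    # I wish for it exactly when every available response loses ground.
    return max(target_if_yield, target_if_refuse) < baseline

# Letters blackmail: publishing costs the blackmailer, so a known refuser
# deters the threat, and both of my responses lose ground.
print(is_blackmail(0, -10, -100, False))  # prints True
# McGuffin sale: even if a known refuser would deter the offer, I don't
# regret it, because refusing already keeps me at the baseline.
print(is_blackmail(0, 5, 0, False))       # prints False
```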
I’m not sure about the others, but in the taxes/police example, the implied futures in the pay/not pay are not the same:
“pay your taxes, and the police will protect you from criminals” means if you don’t pay, P(shop smashed) = X, if you pay P(shop smashed) << X.
“pay your taxes, and the police won’t smash up your shop” means if you don’t pay, P(shop smashed) = X, if you pay P(shop smashed) >> X.
(Note that X is the same for both scenarios. That is, P(shop smashed|taxes not paid) does not depend on which scenario the police choose.)
Ok, try my example here:
http://lesswrong.com/r/discussion/lw/i07/semiopen_thread_blackmail/9dt9
What is the status quo there? The black car, or the green car, or just a car (colour unspecified)?
The agreement they had: an explicit stipulation of a car for £100, and a reasonable presumption on both sides that the car would be black. Agent A is breaking the contract by demanding more. This is not a difficult example.
If before the negotiation, a landslide was already closing in on your (uninsured) country house, then after the negotiation the “state of things” is going towards the negative, for reasons unrelated to the negotiation. The question here is about the supposed distinction between that landslide and your opponent’s decision algorithm.
I can’t find the example this is from.
Can we just use that as the definition?
That’s the problem—every example I’ve come up with is covered by that definition.