I suggest that, given what we know about humans, the creation of an actual amoral and omnipotent third party would constitute UFAI …
Now suppose the existence of an amoral, demiomnipotent third party that can determine whether a person understands the implications of an agreement and is free from coercion, that will formalize any contract iff all parties understand the implications of said contract and are free from coercion, and that enforces a formalized contract only at the request of a party to it. Is that UFAI, FAI, or neither?
I’ve left ‘coercion’ undefined for now; if your answer hinges on a precise point within the reasonable definition space, try to find that line.
It’s less unfriendly than fubarobfusco’s example, but still not quite optimal, since refusing to enforce some contracts (most obviously, contracts which inflict technical externalities on third parties) increases utility.
You could weaken this conclusion by assuming that the AI can drive all transaction and contracting costs to zero, since then all Coasian-optimal contracts are made. But even that result assumes, e.g. that inequalities in marginal utility are not relevant (since otherwise a utilitarian AI will want to “redistribute” wealth—broadly understood—and use imperfect contract enforcement to do so).
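To make the marginal-utility point concrete, here's a toy sketch (all numbers made up, assuming log utility purely for illustration): a pure transfer from a wealthier agent to a poorer one leaves total wealth unchanged but raises total utility, which is exactly the lever a utilitarian AI could pull via selective contract enforcement.

```python
import math

# Two agents with log utility (a standard diminishing-marginal-utility
# assumption; the specific functional form and numbers are hypothetical).
def total_utility(wealths):
    return sum(math.log(w) for w in wealths)

before = total_utility([100.0, 10.0])  # rich agent, poor agent
after = total_utility([90.0, 20.0])    # transfer 10 units rich -> poor

# Same total wealth (110), but higher total utility after the transfer.
print(after > before)  # True
```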
Information asymmetries may also be a problem: it’s possible that Coasian reasoning can be extended to yield a constrained Pareto optimum in such cases, but I’m not at all sure about that. Even then, what if the AI is better informed than the agents are?
Agents with self-control problems can incur “internalities” to themselves. Of course self-control issues can be mitigated if the agent alters her own behavioral tendencies and sets up appropriate incentives (acting as a “principal” to herself in a principal/agent setup): nevertheless, if such possibilities are inherently limited, then imperfect enforcement of contracts could increase the agents’ utility in the long term.
Strategic considerations also pose a severe challenge to Coasian reasoning, and to freedom of contract more generally: if we allow all contracts, then extortion attempts may qualify as contracts, in which case agents will want to extort each other or evade shakedowns, and will pay real resource costs to do so.
Plus there might be other stuff I haven’t thought of.
What would be an example of a penalty clause that ‘inflict(s) technical externalities on third parties’? I might add the stipulation that those parties must also be parties to the contract.
I’m not asking that this entity actually do anything beyond the specific tasks related to contract enforcement that it has been assigned. It isn’t intended to bring about immortality, or make perfect predictions about the future, or prove that it is physically possible to fulfill a contract. (I assume that every formalized contract would have a penalty clause which is provably possible to fulfill, such as a monetary debt or a lien.)
I did also throw in a magical ‘free from coercion’ clause, specifically to sidestep extortion. It’s a patch that I can’t figure out how to make more elegant; if “If I don’t get money now the bank will foreclose on my house, so I agree to work for [employer] for three years in exchange for a fairly large advance” is allowed, why isn’t “If I don’t pay off [bookie] now, he will break my legs, so I will agree to [oppressive treatment by third party for below-market compensation]” or even “I agree to submit to [oppression] by [extortionist] in exchange for one cent and not getting shot”?
My comment was not restricted to “penalty clauses”, it includes contracts more generally. Obviously we can think of contracts which would cause negative externalities if they were enforced, e.g. through a penalty clause. Refusing to enforce some of these contracts increases utility.
Re: extortion, it’s not clear what the correct decision-theoretic analysis is. One might think that the real issue is not so much with the extortion itself (since this is in fact a contract to which Coasian arguments should apply), but with any attempt to expend real resources in order to improve one’s bargaining position:
Given that someone is willing to break my legs unless I pay him off, a contract where I pay him off and he refrains from breaking my legs is Pareto efficient and may even maximize utility if, say, the extortionist happens to have a high marginal utility of wealth compared to myself. (Of course, this is a rather convenient assumption: the real point is that pure transfers of resources do not have bad properties in the general case.)
But it is obvious that losses are involved: agents now have a perverse incentive to acquire the potential to extort others (where they didn’t have such an incentive before), and victims have an incentive to boost their own bargaining power by acquiring means of defense or by pre-committing not to pay off extortionists (and this is quite costly as well: it means that negotiation can break down, and extortionists will then want to make good on their threats).
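The arms-race losses above can be sketched with a toy two-stage game (all payoff numbers hypothetical): the extortionist may pay a cost to acquire a credible threat, the victim may pay to pre-commit against paying off, and mutual investment means breakdown.

```python
# Hypothetical payoffs, relative to a no-threat baseline.
# C: cost of acquiring the threat; D: cost of pre-committing (defense);
# T: transfer extracted if the threat succeeds; L: damage if carried out.
C, D, T, L = 3.0, 2.0, 10.0, 15.0

def payoffs(extort, defend):
    """Return (extortionist, victim) payoffs for one strategy profile."""
    if not extort:
        return (0.0, -D if defend else 0.0)
    if defend:
        return (-C, -D - L)  # negotiation breaks down; threat carried out
    return (T - C, -T)       # victim pays off the extortionist

# Investing in the threat is individually profitable (T - C > 0),
# but the pair's joint payoff falls from 0 to -C, or worse with defense:
print(payoffs(True, False))   # (7.0, -10.0): joint payoff -3.0
print(payoffs(False, False))  # (0.0, 0.0): the efficient outcome
```

The point of the sketch is that the pure transfer T cancels out of the joint payoff; only the real resource costs C, D, and L remain, and they are all deadweight.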
The notion that extortion is avoided if contracts are “free from coercion” is not unproblematic. “Free from coercion” generally means respecting property rights. But how are property rights to be assigned in the first place? Should a railroad operator have the “right” to throw sparks onto a nearby field, or should the field owner be allowed to forbid trains from passing near her property unless they are fitted with a costly spark preventer? Note that here, either of the agents may seek to extort the other! So, although some property right assignments are sensible (the right not to be shot, and not to have one’s legs broken) this doesn’t really solve the problem in the general case.
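The railroad example can be put in Coasian terms with a minimal numeric sketch (illustrative numbers only): with costless bargaining, the spark preventer gets installed iff it costs less than the crop damage it averts, regardless of which side holds the right; the rights assignment only determines the direction of the side payment.

```python
def bargained_outcome(preventer_cost, crop_damage, farmer_has_right):
    """Coase-style sketch: who pays, and whether the preventer is installed.

    With zero transaction costs the parties bargain to the efficient
    choice; the property right only fixes who compensates whom.
    """
    install = preventer_cost < crop_damage  # the efficient decision
    if install:
        payer = "railroad" if farmer_has_right else "farmer"
    else:
        # Inefficient to install: railroad buys permission, or nothing happens.
        payer = "railroad" if farmer_has_right else "nobody"
    return install, payer

# The install decision is invariant to the rights assignment:
print(bargained_outcome(50, 200, True))   # (True, 'railroad')
print(bargained_outcome(50, 200, False))  # (True, 'farmer')
```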
The principled solution to this problem would be entering a once-and-for-all contract in which we all refrain from seeking increased bargaining power in damaging ways, but obviously the same issues apply to this contract, so we have a vicious cycle. We use a variety of solutions to deal with this, including imperfect contract enforcement, and social norms which forbid extortion: in fact, we generally try to find deals which will benefit others, as opposed to harming them.