Bargaining assumes we can access the users’ utility functions. In reality, even if we solve the value learning problem in the single-user case, the multi-user case becomes a mechanism design problem: users have an incentive to lie and misrepresent their utility functions. A perfect solution might be impossible, but I proposed mitigating this by assigning each user a virtual “AI lawyer” that provides optimal input on their behalf into the bargaining system. Each user then at least has no incentive to lie to their own lawyer, and the outcome is not skewed in favor of users who are better at this game, but we don’t get the optimal bargaining solution either.
Since each lawyer has the same incentive to lie as its client, it has an incentive to misrepresent some preferable-to-death outcomes as “worse than death”, in order to force those outcomes out of the set of feasible agreements in the hope of steering the bargain toward a more preferred outcome. At equilibrium, this incentive is balanced by the marginal increase the lie causes in the probability of “everyone dies” as the outcome (because the set of feasible agreements becomes empty). So the probability of “everyone dies” in this game has to be non-zero.
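This dynamic can be sketched with a toy model (all outcomes and payoffs here are hypothetical illustrations, not part of the original proposal): two users bargain over a few outcomes, each lawyer reports a minimum-utility threshold below which outcomes are claimed to be “worse than death”, and the mechanism picks the Nash bargaining solution from whatever survives. Exaggerating the threshold can force a better outcome for your client, but if both lawyers exaggerate, the feasible set is empty and everyone dies.

```python
# Toy bargaining model (hypothetical payoffs). Utilities are scaled so
# that "everyone dies" is worth 0 to both users.
OUTCOMES = {"A": (4, 1), "B": (3, 3), "C": (1, 4)}
DEATH = (0, 0)

def bargain(claim1, claim2):
    """Each claim is the minimum utility a user's lawyer reports as
    'preferable to death'. Outcomes below either claim are excluded.
    Return the Nash bargaining solution (max product of utilities)
    from the feasible set; an empty set means no agreement."""
    feasible = [u for u in OUTCOMES.values()
                if u[0] >= claim1 and u[1] >= claim2]
    if not feasible:
        return DEATH
    return max(feasible, key=lambda u: u[0] * u[1])

# Honest claims (true threshold 1 for both) give the symmetric outcome B.
print(bargain(1, 1))  # (3, 3)
# User 1's lawyer exaggerates ("anything below 4 is worse than death"):
# this forces outcome A, but only if user 2's lawyer stays honest.
print(bargain(4, 1))  # (4, 1)
# If both lawyers exaggerate, the feasible set is empty: everyone dies.
print(bargain(4, 4))  # (0, 0)
```

The unilateral lie pays off, which is exactly why at equilibrium both lawyers lie with some positive probability, and so the probability of mutual exaggeration (and hence of the death outcome) is non-zero.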
(It’s the same kind of problem as the AI race or the tragedy of the commons: people not taking into account the full social costs of their actions as they reach for private benefits.)
Of course, in actuality everyone dying may not be a realistic consequence of failing to reach agreement. But if the real consequence is better than that, and the AI lawyers know it, they will be more willing to lie, since the perceived downside of lying is smaller; so you end up with a higher chance of no agreement.
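The effect of a milder disagreement outcome can be seen in a toy expected-value calculation (all payoffs hypothetical): suppose lying earns a lawyer 4 when the other lawyer is honest, honesty earns 3 against an honest opponent and 1 against a liar, and mutual lying yields the disagreement payoff d.

```python
def ev_lie(q, d):
    """Expected utility of lying, if the other lawyer lies with
    probability q and the disagreement (no-agreement) outcome is worth d."""
    return (1 - q) * 4 + q * d

def ev_honest(q):
    """Expected utility of honesty: 3 against an honest opponent,
    1 against a liar."""
    return (1 - q) * 3 + q * 1

# Setting ev_lie = ev_honest gives q = 1 / (2 - d), so in the symmetric
# mixed equilibrium the lying probability is q* = 1 / (2 - d) (for d < 1):
# the better the disagreement outcome, the more lying at equilibrium.
for d in (0.0, 0.5, 0.9):
    print(f"d={d}: equilibrium lying probability = {1 / (2 - d):.2f}")
```

With d = 0 the equilibrium lying probability is 0.5; raising d to 0.9 pushes it above 0.9, so a gentler punishment for disagreement yields a higher chance that agreement fails.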
Yes, it’s not a very satisfactory solution. Some alternative/complementary solutions:
Somehow use non-transformative AI to perform mind uploading, and then have the TAI learn values by inspecting the uploads. This would be great for single-user alignment as well.
Somehow use non-transformative AI to create perfect lie detectors, and use this to enforce honesty in the mechanism. (But, is it possible to detect self-deception?)
Have the TAI learn from past data which wasn’t affected by the incentives created by the TAI. (But, is there enough information there?)
Shape the TAI’s prior about human values in order to rule out at least the most blatant lies.
Some clever mechanism design I haven’t thought of. The problem with this is that most mechanism designs rely on money, and money doesn’t seem applicable here; when you don’t have money, there are many impossibility theorems.