That seems like a useful framing. When you put it like that, I think I agree in principle that it’s reasonable to hold a product maker liable for the harms that wouldn’t have occurred without their product, even if those harms are indirect or involve misuse, because that is a genuine externality, and a truly beneficial product should be able to afford it.
However, I anticipate a few problems that I expect will cause any real-life implementation to fall seriously short of that ideal:
The product maker can only justly be held liable for the difference in harm compared to the world without that product. For instance, maybe someone used AI to write a fake report, but without AI they would have written a fake report by hand. This is genuinely hard to measure: sometimes the person wouldn’t have written a fake at all if they didn’t have such a convenient option, but at the same time, fake reports obviously existed before AI, so AI can’t possibly be responsible for 100% of this problem.
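To be explicit about what “difference in harm” means here (this is my own notation, and the numbers are made up purely for illustration):

$$\text{liability} \;=\; H_{\text{with}} - H_{\text{without}}$$

where $H_{\text{with}}$ is the harm from fake reports in the world where the product exists and $H_{\text{without}}$ is the harm in the counterfactual world without it. So if fake reports cause, say, 100 units of damage per year with AI available and would have caused 70 without it, the maker justly owes 30 per year, not 100.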
If you assign all liability to the product maker, this will discourage people from taking reasonable precautions. For instance, they might stop making even a cursory attempt to check whether reports look fake, knowing that AI is on the hook for the damage. This is (in some cases) far less efficient than the optimal world, where the defender pays for defense as if they were liable for the damage themselves. In principle you could have the AI company pay the difference in defense costs plus the difference in harm-assuming-optimal-defense, instead of the actual harm given the defender’s actual defense, but calculating “optimal defense” and “harm assuming optimal defense” sounds fiendishly hard even if all parties’ incentives were aligned, which they aren’t. (And you’d have to charge the AI company for defense costs even in situations where no actual attack occurred, and maybe even credit it in situations where the net result is an improvement, to avoid overcharging it overall?)
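If I try to write down that alternative scheme explicitly (again my own notation, not something taken from an actual legal framework):

$$\text{liability} \;=\; \left(D^{*}_{\text{with}} - D^{*}_{\text{without}}\right) + \left(H^{*}_{\text{with}} - H^{*}_{\text{without}}\right)$$

where $D^{*}$ is what the defender would optimally spend on defense and $H^{*}$ is the harm that still gets through under that optimal defense, each evaluated in the worlds with and without the product. All four terms are counterfactuals that somebody would have to estimate, which is the part that sounds fiendishly hard to me.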
My model of our legal system—which admittedly is not very strong—predicts that the above two problems are hard to express within our system, that no specific party within our system believes they have the responsibility of solving them, and that therefore our system will not make any organized attempt to solve them. For instance, if I imagine trying to persuade a judge that they should estimate the damage a hand-written fake report would have generated and bill the AI company only for the difference in harm, I don’t have terribly high hopes of the judge actually trying to do that. (I am not a legal expert and am least certain about this point.)
(I should probably explain this in more detail, but I’m about to get on a plane so leaving a placeholder comment. The short answer is that these are all standard points discussed around the Coase theorem, and I should probably point people to David Friedman’s treatment of the topic, but I don’t remember which book it was in.)