Principles For Product Liability (With Application To AI)

Several responses to What I Would Do If I Were Working On AI Governance focused on the liability section, and raised similar criticisms. In particular, I’ll focus on this snippet as a good representative:

Making cars (or ladders or knives or printing presses or...) “robust to misuse”, as you put it, is not the manufacturer’s job.

The commenter calls manufacturer liability for misuse “an absurd overreach which ignores people’s agency in using the products they purchase”. Years ago I would have agreed with that; it’s an intuitive and natural view, especially for those of us with libertarian tendencies. But today I disagree, and claim that that’s basically not the right way to think about product liability, in general.

With that motivation in mind: this post lays out some general principles for thinking about product liability, followed by their application to AI.

Principle 1: “User Errors” Are Often Design Problems

There’s this story about an airplane (I think the B-17 originally?) where the levers for the flaps and landing gear were identical and right next to each other. Pilots kept coming in to land, and accidentally retracting the landing gear. Then everyone would be pissed at the pilot for wrecking the bottom of the plane, as it dragged along the runway at speed.

The usual Aesop of the story is that this was a design problem with the plane more than a mistake on the pilots’ part; the problem was fixed by putting a little rubber wheel on the landing gear lever. If we put two identical levers right next to each other, it’s basically inevitable that mistakes will be made; that’s bad interface design.

More generally: whenever a product will be used by lots of people under lots of conditions, there is an approximately-100% chance that the product will frequently be used by people who are not paying attention, not at their best, and (in many cases) just not very smart to begin with. The only way to prevent foolish mistakes from sometimes causing problems is to design the product to be robust to those mistakes—e.g. adding a little rubber wheel to the lever which retracts the landing gear, so it’s robust to pilots who aren’t paying attention to that specific lever while landing a plane. Putting the responsibility on users to avoid errors will always, predictably, result in errors.

The same also applies to intentional misuse: if a product is widely available, there is an approximately-100% chance that it will be intentionally misused sometimes. Putting the responsibility on users will always, predictably, result in users sometimes doing Bad Things with the product.

However, that does not mean that it’s always worthwhile to prevent problems. Which brings us to the next principle.

Principle 2: Liability Is Not A Ban

A toy example: a railroad runs past a farmer’s field. Our toy example is in ye olden days of steam trains, so the train tends to belch out smoke and sparks on the way by. That creates a big problem for everyone in the area if and when the farmer’s crops catch fire. Nobody wants a giant fire. (I think I got this example from David Friedman’s book Law’s Order, which I definitely recommend.)

Now, one way a legal system could handle the situation would be to ban the trains. One big problem with that approach is: maybe it’s actually worth the trade-off to have crop fires sometimes. Trains sure do generate a crapton of economic value. If the rate of fires isn’t too high, it may just be worth it to eat the cost, and a ban would prevent that.

Liability sidesteps that failure-mode. If the railroad is held liable for the fires, it may still choose to eat that cost. Probably the railroad will end up passing (at least some of) that cost through to consumers, and consumers will pay it, because the railroad still generates way more value than the fires destroy. Alternatively, maybe the railroad doesn’t generate more value than the fires destroy, and then the railroad is incentivized to just shut down—which is indeed the best-case outcome, if the railroad is destroying more value than it creates.
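To make that concrete, here’s a toy calculation; the numbers are made up purely for illustration, not taken from anywhere:

```python
def railroad_outcome(value_created: float, expected_fire_damage: float) -> str:
    """Under liability (rather than a ban), the railroad internalizes the
    fire damage, and keeps operating only if it still comes out ahead."""
    if value_created > expected_fire_damage:
        return "keep running and eat the cost (net value preserved)"
    return "shut down (the railroad destroys more value than it creates)"

# Made-up numbers, purely illustrative:
print(railroad_outcome(1_000_000, 100_000))  # keep running and eat the cost
print(railroad_outcome(50_000, 100_000))     # shut down
```

A ban, by contrast, forces the “shut down” branch regardless of the numbers.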

That’s one nice thing about liability, as opposed to outright bans/requirements: liability forces a company to internalize harms, while still allowing the company to do business if the upsides outweigh the downsides.

So that’s the basic logic for relying on liability rather than bans/requirements. Then the next question is: in the typical case where more than one party is plausibly liable to some degree, how should liability be allocated?

Principle 3: Failure Modes Of Coase’s Theorem

Continuing with the example of the steam train causing crop fires: maybe one way to avoid the fires is for the farmer to plant less-flammable crops, like clover. Insofar as that’s the cheapest way to mitigate fires, it might seem sensible to put most of the liability for fires on the farmer, so they’re incentivized to plant clover (unless it’s worth it to just eat the cost and keep planting more-flammable crops).

Coase’s theorem argues that, for purposes of economic efficiency, it actually doesn’t matter who the liability is on. If the cheapest way to avoid fires is for the farmer to plant clover, but liability is on the railroad company, then the solution is for the railroad to pay the farmer to plant clover. More generally, assuming that contracts incur no overhead and everyone involved actually forms the optimal contracts, Coase’s theorem says that everyone will end up doing the same thing regardless of who’s liable. Assigning liability to one party or another just changes who’s paying whom how much.
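To see the invariance concretely, here’s the train example in miniature; the numbers and the two-option setup are my own illustration, not Friedman’s:

```python
# Made-up numbers for the steam-train example.
fire_damage = 100  # expected crop-fire damage if the farmer plants wheat
clover_cost = 30   # the farmer's lost profit from planting clover instead

# Liability on the farmer: the farmer compares eating the fire damage
# vs. planting clover, and picks the cheaper option (clover).
cost_if_farmer_liable = min(fire_damage, clover_cost)

# Liability on the railroad: the railroad compares paying for fires vs.
# paying the farmer to plant clover (any payment between 30 and 100 leaves
# both sides better off than fires do), so clover still gets planted.
cost_if_railroad_liable = min(fire_damage, clover_cost)

# Same physical outcome either way; only who pays whom changes.
assert cost_if_farmer_liable == cost_if_railroad_liable == 30
```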

… this is not the sort of theorem which applies very robustly to the real world. I actually bring it up mainly to discuss its failure modes.

The key piece is “assuming that contracts incur no overhead and everyone involved actually forms the optimal contracts”. In practice, that gets complicated fast, and the overhead gets large fast. What we want is to allocate liability so that efficient outcomes can happen while minimizing overhead and complicated contracts. Usually, that means putting liability on whoever can most cheaply mitigate the harm. If clover is the cheapest way to mitigate fires, then maybe that does mean putting the liability on farmers after all, as seems intuitively reasonable.

Putting It All Together

Summarizing those three principles:

  • Principle 1: The only way to prevent foolish mistakes or intentional misuse from sometimes causing problems, in a widely-available product, is to design the product to be robust to misuse.

  • Principle 2: One nice thing about liability as opposed to a ban/requirement is that people can just eat the cost, if the upsides outweigh the downsides.

  • Principle 3: As a loose heuristic, liability should usually be allocated to whoever can most cheaply prevent a problem.

Now let’s put those together.

Insofar as a product is widely available, there is ~zero chance that consumers will actually avoid misusing the product in aggregate (Principle 1), even if “they’re liable” (i.e. failure is quite costly to them). Even if it’s cheap for any given user to Be Careful and Pay Attention at any given time, when multiplied across all the times the product is used by all its users, it ain’t gonna happen. The only way problems from misuse can actually be prevented, in aggregate, is by designing the product to be robust to misuse. So by Principle 3, liability for misuse should usually be allocated to the designer/manufacturer, because they’re the only one who can realistically prevent misuse (again, in aggregate) at all. That way, product designers/manufacturers are incentivized to think about safety proactively, i.e. actively look for ways that people are likely to misuse their products and ways to make the product less-harmful when that misuse inevitably occurs. And companies are incentivized to do all that in proportion to harm and frequency, as is economically efficient.

… and if your knee-jerk reaction is “but if product manufacturers are always liable for any harm having to do with their products, that means nobody can ever sell any products at all!”, then remember Principle 2. Liability is not a ban. Insofar as the product generates way more benefit than harm (as the extremely large majority of products do), the liability will usually get priced in, costs will be eaten by some combination of companies and consumers, and net-beneficial products will continue to be sold.

Now let’s walk through all this in the context of some examples.

Hypothetical Example: Car Misuse

We opened the post with the comment “Making cars (or ladders or knives or printing presses or...) ‘robust to misuse’, as you put it, is not the manufacturer’s job.” So, let’s talk through what the world would look like if car manufacturers were typically liable for “misuse” of cars (both accidental and intentional).

In a typical car accident, the manufacturers of the cars involved would be liable for damages. By Principle 2, this would not mean that nobody can realistically sell cars. Instead, the manufacturer would also be the de-facto insurer, and would probably do all the usual things which car insurance companies do. Insurance would be priced into the car, and people who are at lower accident risk would be able to buy a car at lower cost.
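As a toy sketch of how that pricing might work (the function, numbers, and risk tiers are all hypothetical, just to show the mechanics):

```python
def car_price(base_cost: float, annual_accident_prob: float,
              avg_accident_damages: float, years_covered: int = 10) -> float:
    """Sticker price = manufacturing cost plus the expected liability the
    manufacturer takes on over the car's life, scaled by the buyer's risk."""
    expected_liability = annual_accident_prob * avg_accident_damages * years_covered
    return base_cost + expected_liability

# Hypothetical numbers: two drivers buying the same car.
print(car_price(30_000, 0.02, 25_000))  # 35000.0 for a low-risk driver
print(car_price(30_000, 0.10, 25_000))  # 55000.0 for a high-risk driver
```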

The main change compared to our world is that manufacturers would be much more directly incentivized to make cars safer. They’d be strongly incentivized to track what kinds of accidents incur the most costly liability most frequently, and design solutions to mitigate those damages. Things like seatbelts and airbags and anti-lock brakes would probably be invented and adopted faster, with companies directly incentivized to figure out and use such solutions. Likely we’d see a lot of investment in things like sensors near the driver’s seat to detect alcohol in the air, or tech to actively prevent cell phones near the driver’s seat from receiving texts.

… and from a libertarian angle, this would actually be pretty great! With manufacturers already strongly incentivized to make cars safe, there likely wouldn’t need to be a regulatory body for that purpose. If people wanted to buy cars with no seatbelts or whatever, they could, it would just cost a bunch of extra money (to reimburse the manufacturer for their extra expected liability). And the manufacturers’ incentives would likely be better aligned with actual damages than a regulator’s incentives.

Real Example: Workers’ Comp

Jason Crawford’s excellent post How Factories Were Made Safe walks this through in detail; I’ll summarize just some highlights here.

A hundred and fifty years ago, workers were mostly responsible for their own injuries on the job, and injuries were common. Lost fingers, arms, legs, etc. Crawford opens with this example:

Angelo Guira was just sixteen years old when he began working in the steel factory. He was a “trough boy,” and his job was to stand at one end of the trough where red-hot steel pipes were dropped. Every time a pipe fell, he pulled a lever that dumped the pipe onto a cooling bed. He was a small lad, and at first they hesitated to take him, but after a year on the job the foreman acknowledged he was the best boy they’d had. Until one day when Angelo was just a little too slow—or perhaps the welder was a little too quick—and a second pipe came out of the furnace before he had dropped the first. The one pipe struck the other, and sent it right through Angelo’s body, killing him. If only he had been standing up, out of the way, instead of sitting down—which the day foreman told him was dangerous, but the night foreman allowed. If only they had installed the guard plate before the accident, instead of after. If only.

Workplace injuries are a perfect case of Principle 1: workers will definitely sometimes not pay attention, or not be maximally careful, or even outright horse around. Workers themselves were, in practice, quite lackadaisical about their own safety, and often outright resisted safety measures! If responsibility for avoiding injuries is on the workers, then in aggregate there will be lots of injuries.

Workers’ comp moved the liability to employers:

Workers’ comp is a “no-fault” system: rather than any attempt at a determination of responsibility, the employer is simply always liable (except in cases of willful misconduct). If an injury occurs on the job, the employer owes the worker a payment based on the injury, according to a fixed schedule. In exchange, the worker no longer has the right to sue for further damages.

The result obviously was not that companies stopped hiring workers. Insofar as a company’s products were worth the cost in injuries, companies generally ate the costs (or passed them through to consumers via higher prices).

And then companies were strongly incentivized to design their workplaces and rules to avoid injury. Safety devices began to appear on heavy machinery. Workplace propaganda and company rules pushed workers to actually use the safety devices. Accident rates dropped to the much lower levels we’re used to today.

As Crawford says:

I was also impressed with how a simple and effective change to the law set in motion an entire apparatus of management and engineering decisions that resulted in the creation of a new safety culture. It’s a case study of a classic attitude from economics: just put a price on the harm—internalize the externality—and let the market do the rest.

Again, I recommend Crawford’s post for the full story. This is very much a central example of how the sort of liability I’m arguing for is “supposed to” go.

Negative Examples: Hot Coffee, Malpractice

If John Wentworth of 10-15 years ago were reading this post, he’d counterargue: what about that case with the lady who spilled hot coffee in her lap and then sued McDonald’s for like a million dollars? And then McDonald’s switched to serving lukewarm coffee, which everyone complained about for years? According to Principle 2, what should have happened was that McDonald’s ate the cost (or passed it through to consumers), since people clearly wanted hot coffee and the occasional injury should have been an acceptable trade-off. Yet that apparently did not happen. Clearly lawsuits are out-of-control, and this whole “rely on liability” thing makes everyone wayyyyy too risk-averse.

Today, my response would be: that case notably involved punitive damages, i.e. liability far in excess of the damages actually incurred, intended to force the company to behave differently. Under the model in this post, punitive damages are absolutely terrible—they’re basically equivalent to bans/requirements, and completely negate Principle 2. In order for liability to work, there absolutely must not be punitive damages.

(There is one potential exception here: excess damages could maybe make sense in cases where damage is common but relatively few people bring a claim against the company. But the main point here is that, if excess damages are used to force a company to do something, then obviously that breaks Principle 2.)

John Wentworth of 10-15 years ago might reply: ok then, what about medical malpractice suits? Clearly those are completely dysfunctional.

To which I reply: yeah, the problem is that liability isn’t strict enough. That may seem crazy and counterintuitive, but consider a doctor ordering a bunch of not-really-necessary tests to avoid claims of malpractice. Insofar as those tests are in fact completely unnecessary, they don’t actually reduce the chance of harm. Yet (the doctor apparently believes) they reduce the risk of malpractice claims. What’s going wrong in that situation is that there are specific basically-performative things a doctor can do, which don’t actually reduce harm, but do reduce malpractice suits. On the flip side, there’s plenty of basic stuff like handwashing (or, historically, bedrails) which does dramatically reduce harm, but which doctors/hospitals aren’t very reliable about.

A simple solution is to just make doctors/hospitals liable for harm which occurs under their watch, period. Do not give them an out involving performative tests which don’t actually reduce harm, or the like. If doctors/hospitals are just generally liable for harm, then they’re incentivized to actually reduce it.

Application to AI

Now, finally, we get to AI.

In my previous post, I talked about making AI companies de-facto liable for things like deepfakes or hallucinations or employees using language models to fake reports. Dweomite replied that these would be “Kinda like if you made Adobe liable for stuff like kids using Photoshop to create fake driver’s licenses (with the likely result that all legally-available graphics editing software will suck, forever).”

Now we have the machinery to properly reply to that comment. In short: it’s a decent analogy (assuming there’s some lawsuit-able harm from fake driver’s licenses). The part I disagree with is the predicted result. What I actually think would happen is that Photoshop would be mildly more expensive, and would contain code which tries to recognize and stop things like editing a photo of a driver’s license. Or they’d just eat the cost without any guardrails at all, if users really hated the guardrails and were willing to pay enough extra to cover liability.
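For concreteness, here’s a minimal sketch of what such a guardrail might look like. Nothing in it is a real Adobe API; the classifier is a stub standing in for whatever detection model a vendor might actually train:

```python
# Entirely hypothetical sketch; no real Adobe API is used or implied.
def id_document_score(image_bytes: bytes) -> float:
    """Stub for a model scoring how likely an image is an identity document
    (driver's license, passport, ...). Returns a value in [0, 1]."""
    return 0.0  # placeholder; a real vendor would plug in a trained model

def edit_allowed(image_bytes: bytes, threshold: float = 0.9) -> bool:
    """Allow the edit unless the image looks like an ID document with high
    confidence, keeping false positives rare."""
    return id_document_score(image_bytes) < threshold
```

The interesting design pressure is on the threshold: the vendor is trading expected liability against annoyed customers, which is exactly the internalization Principle 2 is pointing at.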

Likewise, if AI companies were generally liable for harms from deepfakes… well, in the short term, I expect that cost would just be passed through to consumers, and consumers would keep using the software, because the fun greatly exceeds the harms. Longer term, AI companies would be incentivized to set up things like licensing for celebrity faces, detect and shut down particularly egregious users, and make their products robust against jailbreaking.

Similarly for hallucinations, or faked reports. Insofar as the products generate more value than harm, the cost of liability will be passed through to consumers, and people will keep using the products. But AI companies would be properly incentivized to know their customers, to design to mitigate damage, etc. Basically the things people want from a regulatory framework, but without relying on regulators to get everything right (which in practice they won’t, and their rules will be Goodharted against).

That’s the kind of de-facto liability framework I’d ideally like for AI.