The title of this post (“Yes, It’s Subjective, But Why All The Crabs?”) gave me entirely the wrong idea of what the post was going to be about, and likely would have caused me to skip it if I didn’t recognize the author.
(I thought it was going to actually be about crabs, not about crabs-as-a-metaphor. I was reminded of the article “There’s No Such Thing As A Tree (Phylogenetically)”.)
I bet you cannot find any insurance company that will insure you against willful damage you cause with your vehicle.
Oh, what a remarkable exception!
I think you’re making a vital error by mostly-ignoring the difference between user negligence and user malice.
Changing a design to defend against user error is (often) economically efficient because it is a centralized change made once, and it saves every user the cost of being constantly careful; those costs are huge in aggregate, because the product is used a large number of times.
Changing a design to defend against user malice is (often) not economically efficient: for users to defend against their own malice is cheap (malice requires intent; “don’t use this maliciously” arguably has negative cost in effort), while making an in-advance change that defeats malicious intent is very hard, because you face an intelligent adversary who can react to you while you cannot react to them.
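To make the aggregate-cost argument concrete, here is a minimal sketch with purely hypothetical numbers (the costs, user count, and usage rate are all made up for illustration): a one-time design fix is compared against the summed cost of every user being careful on every use.

```python
# All figures below are hypothetical, chosen only to illustrate the argument.
design_fix_cost = 500_000        # one-time engineering cost of the safer design (assumed)
care_cost_per_use = 0.10         # dollar value of one user's extra vigilance per use (assumed)
users = 1_000_000                # number of users (assumed)
uses_per_user = 20               # lifetime uses per user (assumed)

aggregate_care_cost = care_cost_per_use * users * uses_per_user

print(f"aggregate cost of user carefulness: ${aggregate_care_cost:,.0f}")
print(f"one-time cost of the design fix:    ${design_fix_cost:,.0f}")
print("design fix is the cheaper option"
      if design_fix_cost < aggregate_care_cost
      else "user carefulness is the cheaper option")
```

Even with a modest per-use care cost, the aggregate dwarfs the one-time fix, which is the comment's point about why defending against *error* centrally tends to pay off; no analogous sum exists for malice, since "don't be malicious" costs the user nothing.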
I think Principle 3 is clearly going to put liability for writing fake reports on the person who deliberately told the AI to write a fake report, rather than on the AI maker.
Additionally, the damage that can be caused by malice is practically unbounded. This is pretty problematic for a liability regime because a single outlier event can plausibly bankrupt the company even if its long-run expected value is positive.
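The positive-expected-value-but-still-bankrupt point can be illustrated with a toy simulation (all parameters are invented for illustration): a firm earns steady profit each year but faces a small chance of one catastrophic liability claim, so its expected annual value is positive, yet an early outlier wipes it out in a substantial fraction of runs.

```python
import random

random.seed(0)

# Hypothetical liability model; every number here is assumed for illustration.
profit_per_year = 10.0     # steady annual profit
catastrophe_prob = 0.01    # per-year chance of a catastrophic malice-driven claim
catastrophe_loss = 500.0   # size of that single outlier claim
starting_capital = 100.0

# Expected annual value is positive: 10 - 0.01 * 500 = +5
ev = profit_per_year - catastrophe_prob * catastrophe_loss

def goes_bankrupt(years=50, capital=starting_capital):
    """Simulate one 50-year run; bankrupt if capital ever drops below zero."""
    for _ in range(years):
        capital += profit_per_year
        if random.random() < catastrophe_prob:
            capital -= catastrophe_loss
        if capital < 0:
            return True
    return False

runs = 10_000
bankruptcies = sum(goes_bankrupt() for _ in range(runs))

print(f"expected annual value: {ev:+.1f}")
print(f"bankruptcy rate over 50 years: {bankruptcies / runs:.1%}")
```

With these numbers the firm is bankrupted in roughly a third of runs despite a clearly positive expectation, because a claim arriving before enough capital has accumulated is fatal; this is the classic gambler's-ruin dynamic that makes unbounded-tail liability hard to bear without insurance or a liability cap.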