I’m unsure whether a different standard is needed. Foom Liability, and other such proposals, may be enough.
For those who haven’t read the post, a bit of context. AGI companies may create huge negative externalities. In other domains we fine or sue people for doing so, so we can set up some sort of liability here too. In plausible worlds where we get near misses from doom, that liability could be truly huge, likely more than AGI companies can afford to pay. When entities plausibly owe more than they can pay, as in health care, we may require them to carry insurance.
What liability, set up ahead of time, would create good incentives to avoid foom doom? Hanson suggests:
Thus I suggest that we consider imposing extra liability for certain AI-mediated harms, make that liability strict, and add punitive damages according to the formula D = (M+H)*F^N. Here D is the damages owed, H is the harm suffered by victims, M > 0, F > 1 are free parameters of this policy, and N is how many of the following eight conditions contributed to causing harm in this case: self-improving, agentic, wide scope of tasks, intentional deception, negligent owner monitoring, values changing greatly, fighting its owners for self-control, and stealing non-owner property.
If we could agree that some sort of cautious policy like this seems prudent, then we could just argue over the particular values of M,F.
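To make the formula concrete, here is a minimal sketch of the damages calculation. The function name and the particular values of M and F are illustrative assumptions, not taken from Hanson's post; only the formula D = (M+H)*F^N and the list of eight aggravating conditions come from the quote above.

```python
# Sketch of the punitive-damages formula D = (M + H) * F**N.
# M and F are free policy parameters (M > 0, F > 1); the defaults below are made up.

AGGRAVATING_CONDITIONS = [
    "self-improving",
    "agentic",
    "wide scope of tasks",
    "intentional deception",
    "negligent owner monitoring",
    "values changing greatly",
    "fighting its owners for self-control",
    "stealing non-owner property",
]

def foom_damages(harm: float, conditions_met: set[str],
                 M: float = 1e6, F: float = 2.0) -> float:
    """Damages owed: D = (M + H) * F^N, where N counts aggravating conditions present."""
    n = sum(1 for c in AGGRAVATING_CONDITIONS if c in conditions_met)
    return (M + harm) * F ** n

# Example: $10M of harm with three of the eight conditions contributing:
# (1e6 + 10e6) * 2**3 = 88,000,000
print(foom_damages(10e6, {"agentic", "intentional deception", "negligent owner monitoring"}))
```

The key incentive effect is the exponential term: each additional aggravating condition multiplies the damages by F, so developers are pushed to avoid the riskiest combinations rather than any single condition.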