This idea/plan seems to legitimize giving founders and early investors of AGI companies extra influence on or ownership of the universe (or just extremely high financial returns, if they were to voluntarily sell some shares to the public as envisioned here), which is hard for me to stomach from a fairness or incentives perspective, given that I think such people made negative contributions to our civilizational trajectory by increasing x-risk.
One question is whether a different standard should be applied in this case than elsewhere in our capitalist economy (where, generally, the link between financial rewards and positive or negative contributions to x-risk reduction is quite tenuous). One could argue that this is the cooperative system we have in place, and that there should be a presumption against retroactively expropriating people who invested their time or money on the basis of the existing rules. (Adjusting levels of moral praise in light of differing estimations of the nature of somebody’s actions or intentions may be a more appropriate place for this type of consideration to feed in. Though it’s perhaps also worth noting that the prevailing cultural norms at the time, and still today, seem to favor contributing to the development of more advanced AI technologies.)
Furthermore, it would be consistent with the OGI model for governments (particularly the host government) to take some actions to equalize or otherwise adjust outcomes. For example, many countries, including the U.S., have a progressive taxation system, and one could imagine adding some higher tax brackets beyond those that currently exist—such as an additional 10% marginal tax rate for incomes or capital gains exceeding 1 trillion dollars, or exceeding 1% of GDP, or whatever. (In the extreme, if taxation rates began approaching 100%, this would become confiscatory and would be incompatible with the OGI model; but there is plenty of room below that for society to choose some level of redistribution.)
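(As a rough illustration of how such an added top bracket would operate, here is a minimal sketch in Python; the thresholds and rates are hypothetical placeholders, chosen only to show that the higher rate applies solely to the slice of income above the new threshold.)

```python
# Minimal sketch of marginal taxation with one added top bracket.
# Thresholds and rates below are hypothetical, not actual tax law.
BRACKETS = [
    (0.0,  0.35),   # stand-in for the existing top of the schedule
    (1e12, 0.45),   # hypothetical extra bracket: +10 points above $1 trillion
]

def tax_owed(income: float) -> float:
    """Each rate applies only to the portion of income above its own threshold."""
    owed = 0.0
    for i, (lower, rate) in enumerate(BRACKETS):
        upper = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else float("inf")
        if income > lower:
            owed += (min(income, upper) - lower) * rate
    return owed

# A $3 trillion gain: the extra 10 points fall only on the $2T above the threshold.
print(tax_owed(3e12))  # 1e12 * 0.35 + 2e12 * 0.45 = 1.25e12
```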
I’m unsure whether a different standard is needed. Foom Liability, and other such proposals, may be enough.
For those who haven’t read the post, a bit of context. AGI companies may create huge negative externalities. We fine or sue parties for doing so in other cases, so we can set up some sort of liability here too. In this case, we might expect a truly huge liability in plausible worlds where we get near misses from doom, which may be more than AGI companies can afford. When entities plausibly need to pay out more than they can afford, as in health care, we may require them to carry insurance.
What liability ahead of time would result in good incentives to avoid foom doom? Hanson suggests:
Thus I suggest that we consider imposing extra liability for certain AI-mediated harms, make that liability strict, and add punitive damages according to the formula D = (M + H) * F^N. Here D is the damages owed, H is the harm suffered by victims, M > 0, F > 1 are free parameters of this policy, and N is how many of the following eight conditions contributed to causing harm in this case: self-improving, agentic, wide scope of tasks, intentional deception, negligent owner monitoring, values changing greatly, fighting its owners for self-control, and stealing non-owner property.
If we could agree that some sort of cautious policy like this seems prudent, then we could just argue over the particular values of M,F.
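To make the escalation concrete, here is a minimal sketch in Python of how damages under this schedule could be computed; the parameter values and the example incident are hypothetical, chosen only to show how the F^N factor compounds as more of the eight conditions are implicated.

```python
# Minimal sketch of the proposed damages schedule D = (M + H) * F**N,
# where N counts how many of the eight listed conditions contributed to the harm.
# Parameter values and the example incident below are hypothetical.
CONDITIONS = [
    "self-improving",
    "agentic",
    "wide scope of tasks",
    "intentional deception",
    "negligent owner monitoring",
    "values changing greatly",
    "fighting its owners for self-control",
    "stealing non-owner property",
]

def foom_damages(harm: float, m: float, f: float, conditions_met: set) -> float:
    """Damages owed D = (M + H) * F**N for a single incident."""
    assert m > 0 and f > 1, "policy parameters must satisfy M > 0, F > 1"
    n = sum(1 for c in CONDITIONS if c in conditions_met)
    return (m + harm) * f ** n

# Example: $10M of direct harm with three of the eight conditions present.
print(foom_damages(
    harm=10e6,
    m=1e6,   # hypothetical flat punitive base M
    f=2.0,   # hypothetical escalation factor F
    conditions_met={"agentic", "intentional deception", "negligent owner monitoring"},
))  # (1e6 + 10e6) * 2**3 = 88,000,000
```

With F = 2, each additional condition doubles the award, so an incident implicating all eight conditions would owe 256 times the base amount; that steepness is what would push developers (and their insurers) to avoid the listed risk factors.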