Philosophy graduate interested in metaphysics, meta-ethics, AI safety, and a whole bunch of other things. Meta-ethical and moral theories of choice: neo-Aristotelian naturalist realism + virtue ethics.
Unvarnished critical (but constructive) feedback is welcome.
[Out-of-date-but-still-sorta-representative-of-my-thoughts hot takes below]
Thinks longtermism rests on a false premise – some sort of total impartiality.
Thinks we should spend a lot more resources trying to delay HLMI (human-level machine intelligence) – make AGI development uncool. Questions what we really need AGI for anyway. Accepts the epithet “luddite” so long as this is understood to describe someone who:
suspects that, on net, technological progress yields diminishing returns in human flourishing.
OR believes workers have a right to organize to defend their interests (you know – what the original Luddites were doing). Fighting to uphold higher working standards puts you on the front lines against Moloch (see e.g. Fleming’s vanishing economy dilemma, and how decreased working hours offer a simple solution to it).
OR suspects that, with regard to AI, the Luddite fallacy may not be a fallacy: AI really could lead to widespread, permanent technological unemployment, and that might not be a good thing.
OR, considering the common-sensey thought that societies have a maximum rate of adaptation, suspects excessive rates of technological change can lead to harms independent of how the technology is used. (This thought is more speculative/less researched – would love to hear evidence for or against.)
Right, so you’re worried about the moral hazard generated by insurance (in the case where we have liability in place). For starters, the government arguably generates moral hazard for disasters of a certain size by default: it can’t credibly commit ex ante to not bailing out a critical economic sector, or to not providing relief to victims in the event of a major disaster. The government is always implicitly on the hook (see Moss, When All Else Fails: Government as the Ultimate Risk Manager; the too-big-to-fail effect is one example). Charging a risk-priced premium for that service can only help.
But you’re probably more worried about private insurers’ ability to mitigate the moral hazard they generate. Insurers certainly do not always succeed at this. Sometimes, however, they not only succeed but induce more harm reduction than liability alone probably would have (see e.g. the Insurance Institute for Highway Safety’s crashworthiness ratings, and the insurance industry’s lobbying for airbags in the 1980s). For more, see:
Ben-Shahar and Logue, “Outsourcing Regulation: How Insurance Reduces Moral Hazard”
Abraham and Schwarcz, “The Limits of Regulation by Insurance”
My research finds that, in the specific context of insuring against uncertain heavy-tail risks, we can expect private insurers to engage in a mix of:
causal risk-modeling, because actuarial data will be insufficient for such rare events (cf. causal risk-modeling in nuclear insurance underwriting and premium pricing; Mustafa, 2017; Gudgel, 2022, ch. 4 sec. VII). A toy sketch of this kind of pricing appears below the list.
monitoring, again due to the lack of actuarial data and the need to reduce information asymmetries (cf. regular inspections by nuclear insurers using specialized engineers; Gudgel, 2022, ch. 4 sec. VI.C).
safety research and lobbying for stricter regulation, because insurers will almost certainly have to pool their capacity in order to offer coverage, eliminating competition and, with it, coordination problems (cf. American Nuclear Insurers’ (ANI) monopoly on third-party liability coverage; Gudgel, 2022, ch. 4 sec. VII.A).
private loss-prevention guidance, because in this setting the guidance can’t be appropriated by competitors, nor will it drive away customers: there will be little competition, and the insurance is mandatory (cf. ANI sharing inspection reports and recommendations with policyholders; Gudgel, 2022, ch. 4 sec. VII.A.2).
In other words, if set up correctly, I expect them to do all the things we would want them to do.
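To make the causal risk-modeling point above concrete, here is a minimal toy sketch (in Python) of how an insurer might price a premium from a fault-tree-style causal model rather than from loss history. Every number and variable name in it is a hypothetical assumption for illustration, not data from the nuclear case or from my research:

```python
# Toy sketch: pricing a premium for a rare catastrophic event from a
# causal (fault-tree style) model, since no actuarial loss history exists.
# All probabilities and dollar figures below are hypothetical assumptions.

# Chain of failure probabilities per policy year (chain rule of probability):
p_design_flaw      = 1e-3  # P(latent design flaw present)
p_control_fail     = 5e-3  # P(safety controls fail | flaw present)
p_containment_fail = 1e-2  # P(containment fails | flaw and control failure)

# AND-gate: the catastrophic loss requires all three failures together.
p_catastrophe = p_design_flaw * p_control_fail * p_containment_fail

insured_limit = 10e9  # hypothetical $10B policy limit
expected_loss = p_catastrophe * insured_limit

# Ambiguity load: with heavy tails and an uncertain model, insurers charge
# a multiple of the modeled expected loss rather than the expected loss itself.
ambiguity_load = 5.0
premium = expected_loss * ambiguity_load

print(f"P(catastrophe) per year: {p_catastrophe:.2e}")
print(f"Modeled expected loss:   ${expected_loss:,.0f}")
print(f"Risk-loaded premium:     ${premium:,.0f}")
```

The structure, not the numbers, is the point: the premium is driven by the insurer’s causal model and its uncertainty about that model, which is exactly what gives the insurer an incentive to inspect, monitor, and push for safety improvements that shrink the terms in the chain.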
Note too that the government needn’t mandate commercial insurance specifically: it could instead allow or encourage labs to mutualize their risk, eliminating concerns about moral hazard entirely.
You can read more about all of this here.