Some vague idea: Alignment can be fragile. Can capabilities be made fragile too?
I think fragile capabilities could be useful in situations where we need to prevent tampering with the model, e.g., fine-tuning a model to jailbreak it or to acquire dangerous bioweapon capabilities.
While price gouging can quickly mobilize resources to meet emergency demand, it can also have problematic second-order effects. If prices rise too far, certain agents end up profiting from the disaster itself. This creates a perverse incentive that disincentivizes disaster prevention, and potentially even incentivizes artificially intensifying or creating disasters.