Thanks! I can’t help but compare this to Tyler Cowen asking people worried about AI risk to “have a model.” Except I think he was imagining something like a macroeconomic model, or maybe a climate model. Whereas you’re making a comparison to risk management, and so asking for an extremely zoomed-in risk model.
One issue is that everyone disagrees. Doses of radiation are quite predictable; the arc of new technology is not. But having stated that problem, I can already think of mitigations, e.g. regulation might provide a framework that lets the regulator ask lots of people with fancy job titles and infer distributions over many different quantities (and then problems with these mitigations: this is like inferring a distribution over the length of the emperor’s nose by asking citizens to guess). Other quantities could be placed in historical reference classes (AI Impacts’ work shows both how this is sometimes possible and how it’s hard).
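To make the emperor’s-nose worry concrete, here’s a minimal sketch (Python, with made-up numbers purely for illustration): averaging many guesses shrinks the reported spread of the aggregate, but if none of the respondents have real information, the bias they all share survives the averaging untouched.

```python
import numpy as np

rng = np.random.default_rng(0)

true_value = 4.7       # the quantity nobody has actually measured (arbitrary units)
shared_bias = 1.5      # common error: every respondent leans on the same folk belief
n_respondents = 1000

# Each guess = truth + shared bias + independent personal noise.
guesses = true_value + shared_bias + rng.normal(0.0, 2.0, size=n_respondents)

estimate = guesses.mean()
spread_of_mean = guesses.std(ddof=1) / np.sqrt(n_respondents)

print(f"aggregate estimate: {estimate:.2f} +/- {spread_of_mean:.2f}")
print(f"actual error:       {estimate - true_value:+.2f}")
# The reported uncertainty looks tiny (~0.06), but the estimate is still off
# by roughly the shared bias (~1.5): averaging removes the independent noise,
# not the error everyone has in common.
```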
Thanks for your comment. That’s right, and it’s a consequence of uncertainty, which prevents us from bounding risks. Decreasing uncertainty (e.g. through modelling or through the ability to set bounds) is the objective of risk management.
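To make concrete what a bound buys you (a toy example of my own, with made-up numbers): if modelling lets you bound a per-year accident probability, you can bound the cumulative risk over a deployment horizon; without any bound on the per-year figure, the cumulative risk is simply unconstrained.

```python
def cumulative_risk_bound(p_max_per_year: float, years: int) -> float:
    """Upper bound on P(at least one accident in `years` years), assuming
    independent years, each with accident probability <= p_max_per_year."""
    return 1.0 - (1.0 - p_max_per_year) ** years

# If modelling justifies p <= 1e-4 per year, a 50-year bound follows:
print(cumulative_risk_bound(1e-4, 50))  # ~0.005
# If all we can say is "p is somewhere in [0, 1]", the same calculation
# only gives the vacuous bound of 1.0:
print(cumulative_risk_bound(1.0, 50))   # 1.0
```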
Doses of radiation are quite predictable
I think that’s mostly true in hindsight. When you read what was written about nuclear safety in the 1970s, that’s really not how things looked at the time.
See Section 2.
the arc of new technology is not [predictable]
I think this sets a “technology is magic” vibe, which is only valid for scaling neural nets (and probably only because we haven’t invested that much into understanding scaling laws, etc.), and not for most other technologies. We can actually develop technology where we know what it’s doing before building it, and that’s what we should aim for given what’s at stake here.
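To gesture at what that can look like (a rough sketch, with synthetic numbers standing in for real training runs, not an actual analysis): even for scaling neural nets, fitting a saturating power law to small-scale runs and checking how far it extrapolates before relying on it is a minimal version of knowing what the system will do before building it.

```python
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(compute, a, b, c):
    # Saturating power law of the kind often used to describe loss vs. compute.
    return a * compute ** (-b) + c

# Synthetic "small-scale runs" (compute in units of 1e15 FLOP); these numbers
# are invented purely to show the shape of the exercise.
compute = np.array([1.0, 10.0, 100.0, 1000.0])
loss = scaling_law(compute, a=2.0, b=0.2, c=1.8)
loss = loss + np.random.default_rng(0).normal(0.0, 0.02, size=loss.shape)

params, _ = curve_fit(scaling_law, compute, loss, p0=[1.0, 0.1, 1.0])
a_fit, b_fit, c_fit = params

# Extrapolate an order of magnitude beyond the fitted runs and compare against
# a held-out run before trusting the prediction.
print(f"predicted loss at compute = 10_000: {scaling_law(10_000.0, a_fit, b_fit, c_fit):.3f}")
```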