Chad Jones paper modeling AI and x-risk vs. growth

Looks like this Chad Jones paper was just posted today. Abstract:

Advances in artificial intelligence (A.I.) are a double-edged sword. On the one hand, they may increase economic growth as A.I. augments our ability to innovate or even itself learns to discover new ideas. On the other hand, many experts note that these advances entail existential risk: creating a superintelligent entity misaligned with human values could lead to catastrophic outcomes, including human extinction. This paper considers the optimal use of A.I. technology in the presence of these opportunities and risks. Under what conditions should we continue the rapid progress of A.I. and under what conditions should we stop?

And here’s how the intro summarizes the findings:

1. The curvature of utility is very important. With log utility, the models are remarkably unconcerned with existential risk, suggesting that large consumption gains that A.I. might deliver can be worth gambles that involve a 1-in-3 chance of extinction.

2. For CRRA utility with a risk aversion coefficient (γ) of 2 or more, the picture changes sharply. These utility functions are bounded, and the marginal utility of consumption falls rapidly. Models with this feature are quite conservative in trading off consumption gains versus existential risk.

3. These findings even extend to singularity scenarios. If utility is bounded — as it is in the standard utility functions we use frequently in a variety of applications in economics — then even infinite consumption generates relatively small gains. The models with bounded utility remain conservative even when a singularity delivers infinite consumption.

4. A key exception to this conservative view of existential risk emerges if the rapid innovation associated with A.I. leads to new technologies that extend life expectancy and reduce mortality. These gains are “in the same units” as existential risk and do not run into the sharply declining marginal utility of consumption. Even with the bounded utility that comes with high values of risk aversion, substantial declines in mortality rates from A.I. can make large existential risks bearable.
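To see the intuition behind points 1 through 3, here is a minimal one-period sketch of my own (not Jones's dynamic model). Normalize the utility of extinction to zero and let u(c) = b + (c^(1−γ) − 1)/(1−γ), where b > 0 captures the value of being alive at baseline consumption; the value b = 5 below and the consumption multipliers k are arbitrary choices for illustration. The largest acceptable extinction probability δ* then solves u(c) = (1 − δ*)·u(kc):

```python
import numpy as np

def u(c, gamma, b):
    """Utility normalized so extinction = 0. b > 0 is the value of
    being alive at baseline consumption (an assumption of this
    sketch, not a number from the paper)."""
    if gamma == 1.0:  # log utility
        return b + np.log(c)
    return b + (c ** (1.0 - gamma) - 1.0) / (1.0 - gamma)  # CRRA

def max_acceptable_risk(k, gamma, b, c0=1.0):
    """Largest extinction probability delta* worth accepting to
    multiply consumption by k, from u(c0) = (1 - delta*) * u(k*c0)."""
    return 1.0 - u(c0, gamma, b) / u(k * c0, gamma, b)

for gamma in (1.0, 2.0):
    for k in (2.0, 10.0, 1e6):
        print(f"gamma={gamma:g}  k={k:g}  "
              f"delta* = {max_acceptable_risk(k, gamma, b=5.0):.3f}")
```

With log utility (γ = 1), u(kc) grows without bound in k, so δ* approaches 1: a big enough consumption gain justifies almost any gamble (a 10x gain already gives δ* ≈ 0.32 here, close to the paper's 1-in-3). With γ = 2, u is bounded above by b + 1, so δ* can never exceed 1 − b/(b + 1) no matter how large k gets, which is why bounded utility stays conservative even in singularity scenarios.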

I’m still reading the paper; might comment on it more later.


UPDATE: I read the paper. In brief, Jones is modeling Scott Aaronson’s “Faust parameter”:

… if you define someone’s “Faust parameter” as the maximum probability they’d accept of an existential catastrophe in order that we should all learn the answers to all of humanity’s greatest questions, insofar as the questions are answerable—then I confess that my Faust parameter might be as high as 0.02.

Jones calculates the optimal Faust parameter under various assumptions about utility functions and the benefits of AI, and comes up with some answers much higher than 0.02:

[Figure from the paper showing the computed cutoff probabilities under various assumptions]

The answers turn out to be very sensitive to the utility function: if you have a relative risk aversion parameter > 1, you are much more conservative about AI risk. But they're also very sensitive to any mortality/longevity improvements AI can deliver. If AI can double our lifespans, then even with a relatively risk-averse utility function, we might accept a double-digit chance of extinction in exchange.
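The mortality channel has a simple back-of-the-envelope version (mine, not the paper's exact formula). Suppose each year lived yields a fixed flow utility ū > 0 and AI doubles remaining life expectancy from T to 2T years; indifference between the status quo and the gamble requires

\[
T\,\bar{u} \;=\; (1-\delta^{*})\,2T\,\bar{u} \quad\Longrightarrow\quad \delta^{*} = \tfrac{1}{2}.
\]

Life-years enter linearly here, so the doubling is not discounted by diminishing marginal utility the way a consumption gain would be; adding curvature would pull δ* below 1/2, but it can stay in double digits, which is the paper's point 4.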