The '1400' number was indeed picked somewhat arbitrarily and merely to illustrate the point that our life expectancy goes up even for quite high levels of AI risk. I thought this point was worth making because I've seen the opposite claim made or insinuated by some anti-AI advocates.
Later models in the paper take into account more factors—in particular, temporal discounting and diminishing marginal utility in QALYs. In these more full-fledged models, it is not the case that we get “launch immediately” for all non-extreme settings of the parameters even if we postulate that post-superintelligence lifespans could be a billion years.
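To see why discounting alone can break the "launch immediately" conclusion, here is a toy sketch (not the paper's actual model; the 2% discount rate, the 80-year baseline lifespan, and the risk figures are all illustrative assumptions). Under continuous exponential discounting at rate delta, the present value of a QALY stream is bounded above by 1/delta no matter how long the stream runs, so even a billion-year post-superintelligence lifespan has a capped prize:

```python
import math

def discounted_qalys(years, delta=0.02):
    """Present value of 1 QALY/year for `years` years, continuously
    discounted at rate `delta`. Bounded above by 1/delta regardless
    of how large `years` is."""
    return (1 - math.exp(-delta * years)) / delta

# Illustrative values (all parameters are assumptions, not the paper's):
normal = discounted_qalys(80)       # ordinary lifespan, ~39.9 discounted QALYs
post_si = discounted_qalys(1e9)     # billion-year lifespan, capped near 1/0.02 = 50

def launch_value(p, delta=0.02):
    """Expected discounted QALYs of launching now with extinction probability p."""
    return (1 - p) * discounted_qalys(1e9, delta)

# Because the billion-year upside is capped near 50, a non-extreme risk level
# can already make launching worse than a guaranteed ordinary lifespan:
# launch_value(0.25) = 37.5 < normal (~39.9).
```

The point of the sketch is only the qualitative pattern: once the far future is discounted (and diminishing marginal utility in QALYs compresses it further), the upside of launching is finite and modest-sized, so the optimal policy depends on the parameters rather than always being "launch immediately".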
(In general, as I said in the preamble, I recommend not focussing overly on the exact numbers that pop out of these simple models. The formal models are mostly intended to serve as illustrations of various possible reasoning assumptions and the general patterns they imply.)