The '1400' number was indeed picked somewhat arbitrarily, merely to illustrate the point that our life expectancy goes up even for quite high levels of AI risk. I thought this point was worth making because I've seen the opposite claim made or insinuated by some anti-AI advocates.
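To make the arithmetic concrete, here is a minimal sketch of the kind of break-even calculation the simple go/no-go comparison involves. The baseline remaining life expectancy of roughly 40 years is my own illustrative assumption, not a figure taken from the paper; with it, a 1,400-year post-superintelligence lifespan yields a break-even P(doom) of about 97%, the figure discussed further below.

```python
# Illustrative sketch (not the paper's code): break-even P(doom) in a simple
# go/no-go comparison where launching either kills you or grants a long lifespan.

def breakeven_pdoom(baseline_years: float, post_launch_years: float) -> float:
    """Largest P(doom) at which launching still raises expected lifespan.

    Expected lifespan if we launch:  (1 - p) * post_launch_years
    Expected lifespan if we don't:   baseline_years
    Setting these equal and solving for p gives the threshold below.
    """
    return 1 - baseline_years / post_launch_years

# Assumed numbers: ~40 remaining years at baseline (my assumption),
# 1,400 years after a successful launch (the figure discussed above).
print(breakeven_pdoom(40, 1400))   # ~0.971, i.e. roughly 97%
```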
Later models in the paper take more factors into account, in particular temporal discounting and diminishing marginal utility in QALYs. In these fuller-fledged models, it is not the case that we get "launch immediately" for all non-extreme settings of the parameters, even if we postulate that post-superintelligence lifespans could be a billion years.
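As a rough illustration of why discounting changes the picture (this is my own sketch, not the paper's model, and the discount factor and lifespans are assumed numbers): with an annual discount factor δ < 1, the discounted value of even a billion-year lifespan is bounded by 1/(1 − δ), so arbitrarily long lifespans cannot push the break-even risk level arbitrarily close to 100%.

```python
# Illustrative sketch (my own, not the paper's model): how temporal
# discounting caps the value of very long lifespans, lowering the
# break-even P(doom) even when post-launch lifespans are astronomical.

def discounted_years(years: int, delta: float) -> float:
    """Present value of one QALY per year for `years` years at discount factor delta."""
    return (1 - delta ** years) / (1 - delta)

# Assumed inputs: 40 remaining baseline years, a billion post-launch years,
# and an annual discount factor of 0.99 (all illustrative choices).
baseline = discounted_years(40, 0.99)            # ~33 discounted QALYs
post_launch = discounted_years(10**9, 0.99)      # ~100, near the 1/(1 - delta) cap

print(1 - baseline / post_launch)  # break-even P(doom) ~0.67, far below 97%
```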
(In general, as I said in the preamble, I recommend not focussing overly on the exact numbers that pop out of these simple models. The formal models are mostly intended to serve as illustrations of various possible reasoning assumptions and the general patterns they imply.)
The go/no-go model is not meant to show that a P(doom) of up to 97% is "acceptable" (or at least it would be highly misleading to put it that way). The model is only meant to show that, up to that level of risk, launching superintelligence increases life expectancy under the given assumptions. That model ignores many important factors (such as distributional considerations and diminishing marginal utility in QALYs), which is why a series of more complicated models is introduced to take some of these other factors into account. (Even the most elaborate of the models introduced is still only very schematic and leaves out much that is relevant, as all formal models of this sort do. “For these and other reasons, the preceding analysis—although it highlights several relevant considerations and tradeoffs—does not on its own imply support for any particular policy prescriptions.”)
By the way, there may also be reasons to treat a lottery that involves going out and killing some random subset of the human population differently from allowing technological progress to continue, even if we were to stipulate that the two cases were exactly parallel with respect to some set of consequentialist outcome metrics. (Also, while using randomization in your example would equalize people's chances, or ex ante expected lifespans, it would lead to radically uneven ex post outcomes. Some people with egalitarian intuitions care about inequality of outcomes, not only inequality of chances or opportunities, especially in cases where the inequality of outcomes is not connected to personal motivations, efforts, or choices.)