Why is brain preservation not even mentioned once? Saying “launch immediately” for 50% doom, 1%/year progress is absurd if you think ASI would have a high chance of being able to recover minds from preserved brains (and would in fact choose to recover them).
Also the 1400 years thing seems ~arbitrary and like it was picked such that you end up with anything happening in those tables and plots. You could have picked a million or a billion years, which would give “launch immediately” for all non-extreme settings of parameters under this model.
The '1400' number was indeed picked somewhat arbitrarily, merely to illustrate the point that our life expectancy goes up even under quite high levels of AI risk. I thought this point was worth making because I've seen the opposite claim made or insinuated by some anti-AI advocates.
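To make the arithmetic behind that point concrete, here is a minimal sketch. The specific numbers (`p_doom`, `post_asi_years`, `baseline_years`) are my own illustrative assumptions, not values from the paper:

```python
# Toy sketch (illustrative numbers only): expected remaining life-years
# if superintelligence either causes doom with probability p_doom
# (yielding ~0 remaining years) or extends lifespans to post_asi_years.
p_doom = 0.5            # assumed probability of existential catastrophe
post_asi_years = 1400   # illustrative post-superintelligence lifespan
baseline_years = 40     # rough remaining life expectancy without AI

expected_with_ai = (1 - p_doom) * post_asi_years
print(expected_with_ai)  # 700.0 — well above the ~40-year baseline
```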
Later models in the paper take into account more factors—in particular, temporal discounting and diminishing marginal utility in QALYs. In these more full-fledged models, it is not the case that we get “launch immediately” for all non-extreme settings of the parameters even if we postulate that post-superintelligence lifespans could be a billion years.
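As a rough illustration of how those two factors change the picture, here is a hedged Python sketch. The functional forms (log utility, exponential discounting) and all parameter values are assumptions of mine for the sake of the example, not the paper's actual model:

```python
import math

# Hedged toy sketch — my own construction, not the paper's model.
# Compares the expected utility of launching superintelligence now
# vs. waiting, with exponential temporal discounting and logarithmic
# (diminishing-marginal) utility over lifespan in QALYs.

def lifespan_utility(years):
    # log1p gives diminishing marginal utility: a billion-year lifespan
    # is worth only ~3x a 1400-year one, not ~700,000x.
    return math.log1p(years)

def eu_launch_now(p_doom, post_asi_years):
    # Survive with probability (1 - p_doom); doom yields ~0 utility.
    return (1 - p_doom) * lifespan_utility(post_asi_years)

def eu_wait(p_doom, progress_per_year, wait_years, post_asi_years, delta=0.01):
    # Waiting buys safety progress but discounts the eventual payoff.
    reduced_doom = max(p_doom - progress_per_year * wait_years, 0.0)
    discount = math.exp(-delta * wait_years)
    return discount * (1 - reduced_doom) * lifespan_utility(post_asi_years)

# Even granting a billion-year post-ASI lifespan, waiting 30 years at
# 1%/year safety progress beats launching at 50% doom under these numbers:
print(eu_launch_now(0.5, 1e9))      # ≈ 10.4
print(eu_wait(0.5, 0.01, 30, 1e9))  # ≈ 12.3
```

The point of the sketch is only the qualitative pattern: once utility is concave in lifespan and the future is discounted, astronomically long lifespans stop dominating the calculation, so "launch immediately" no longer falls out for all non-extreme parameter settings.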
(In general, as I said in the preamble, I recommend not focussing overly on the exact numbers that pop out of these simple models. The formal models are mostly intended to serve as illustrations of various possible reasoning assumptions and the general patterns they imply.)