Nick Bostrom has a new paper called Optimal Timing for Superintelligence. He posted it to LessWrong as well.

Abstract:

Developing superintelligence is not like playing Russian roulette; it is more like undergoing risky surgery for a condition that will otherwise prove fatal. We examine optimal timing from a person-affecting stance (and set aside simulation hypotheses and other arcane considerations). Models incorporating safety progress, temporal discounting, quality-of-life differentials, and concave QALY utilities suggest that even high catastrophe probabilities are often worth accepting. Prioritarian weighting further shortens timelines. For many parameter settings, the optimal strategy would involve moving quickly to AGI capability, then pausing briefly before full deployment: swift to harbor, slow to berth. But poorly implemented pauses could do more harm than good. -- Optimal Timing for Superintelligence
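To see the shape of the trade-off the abstract describes, here is a minimal sketch of that kind of expected-utility comparison for a single person alive today. Everything below (the discount rate, lifespans, quality boost, square-root utility, and the probability numbers) is an illustrative assumption of mine, not the paper's actual model:

```python
# Illustrative sketch (not Bostrom's model): compare expected utility of
# "launch superintelligence now" vs. "delay for safety progress" for one
# person alive today. All parameter values are assumptions for illustration.
import math

DISCOUNT = 0.03        # assumed annual temporal discount rate
LIFE_EXPECTANCY = 40   # assumed remaining years under the status quo
BOOSTED_SPAN = 200     # assumed remaining years given aligned superintelligence
QUALITY_BOOST = 2.0    # assumed quality-of-life multiplier post-transition

def discounted_qalys(years: float, quality: float = 1.0) -> float:
    """Sum of annually discounted quality-adjusted life years."""
    return sum(quality * (1 - DISCOUNT) ** t for t in range(int(years)))

def utility(qalys: float) -> float:
    """Concave utility over lifetime QALYs (sqrt is one simple choice)."""
    return math.sqrt(qalys)

def expected_utility(p_catastrophe: float, delay_years: float) -> float:
    """Expected utility of launching after delay_years, given the catastrophe
    probability at launch time. Pre-launch years are lived at baseline
    quality; a catastrophe leaves the person with only those years."""
    pre = discounted_qalys(min(delay_years, LIFE_EXPECTANCY))
    if delay_years >= LIFE_EXPECTANCY:
        return utility(pre)  # the person does not survive to the launch
    post = discounted_qalys(BOOSTED_SPAN, QUALITY_BOOST) * (1 - DISCOUNT) ** delay_years
    success = utility(pre + post)
    failure = utility(pre)
    return (1 - p_catastrophe) * success + p_catastrophe * failure

# Assume safety work would cut catastrophe risk from 20% now to 10% in a decade.
now = expected_utility(p_catastrophe=0.20, delay_years=0)
later = expected_utility(p_catastrophe=0.10, delay_years=10)
print(f"launch now:    EU = {now:.2f}")
print(f"launch in 10y: EU = {later:.2f}")
```

With these toy numbers the ten-year delay comes out ahead; raising the discount rate (say, to 0.10) or shortening the remaining life expectancy flips the verdict toward launching now. That sensitivity to parameters is what makes the timing question substantive rather than obvious in either direction.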
Some friends and I were wondering why Bostrom seems to be emphasizing a person-affecting argument today (focused on the benefits to people alive now), whereas previously he emphasized a more impersonal argument (focused on future people who are not alive today). For example, the last sentence of Superintelligence (2014) is this:
In this book, we have attempted to discern a little more feature in what is otherwise still a relatively amorphous and negatively defined vision—one that presents as our principal moral priority (at least from an impersonal and secular perspective) the reduction of existential risk and the attainment of a civilizational trajectory that leads to a compassionate and jubilant use of humanity’s cosmic endowment. -- Superintelligence