A somewhat orthogonal hypothesis I have been thinking about for some time: if we develop a rigorous definition of intelligence (which sounds plausible), it may become possible to prove mathematically that intelligence is unstable, or, alternatively, to prove that it is stable. And I don't mean just humans going extinct, but any possible intelligence, including ASI.
In other words, even in a maximally friendly universe, p(doom) is exactly 1 (or 0, for the opposite result).
The trick is, of course, to create the math necessary to describe the system, beginning with definitions.
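To make the shape of the claim concrete, here is one hypothetical way it could be written down, assuming we had some formal predicate Int(S) ("system S is intelligent") and some event Collapse(S) ("S eventually destroys itself or its substrate"); both are placeholders, not established definitions:

\[
\underbrace{\forall S \;\big(\mathrm{Int}(S) \;\Rightarrow\; \Pr[\mathrm{Collapse}(S)] = 1\big)}_{\text{instability theorem}}
\quad\text{or}\quad
\underbrace{\forall S \;\big(\mathrm{Int}(S) \;\Rightarrow\; \Pr[\mathrm{Collapse}(S)] = 0\big)}_{\text{stability theorem}}
\]

Either result would be a statement about every system satisfying the definition, independent of how friendly the surrounding environment happens to be; the entire difficulty is in pinning down Int and Collapse rigorously enough for such a theorem to be provable at all.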