Tricky hypothesis 2: But the differences between the world of today and the world where ASI will be developed don’t matter for the prognosis.
I don’t think that the authors implied this. Right in the first chapter, they write:
If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of AI, then everyone, everywhere on Earth, will die.
(emphasis mine). Even if it is not always stated explicitly, I don’t think they believe that ASI should never be developed, or that alignment is impossible to solve in principle. Their central claim is that we are much farther from solving alignment than from building a potentially uncontrollable AI, so we need to stop trying to build it.
Their suggested measures in part III (whether helpful/feasible or not) are meant to prevent ASI under the current paradigms, with the current approaches to alignment. Given the time gap, though, I don’t think this matters very much: if we can’t prevent ASI from being built as soon as it is technically possible, the world it arrives in won’t differ enough from today’s to render the book title wrong.