She asks why the book doesn’t spend more time explaining why an intelligence explosion is likely to occur. The answer is that the book is explicitly arguing a conditional (what happens if it does occur) and acknowledges that it may or may not occur, and may not occur on any particular time frame.
Is it your claim here that the book is arguing the conditional: “If there’s an intelligence explosion, then everyone dies?” If so, then it seems completely valid to counterargue: “Well, an intelligence explosion is unlikely to occur, so who cares?”
Zvi’s summary here isn’t quite right: the book is arguing the conditional “if we end up with overwhelming superintelligence (via explosion or otherwise), then everyone dies.”
And yep, it is a fine counter-response to say “but there will not be overwhelming superintelligence.” (I mean, that seems false and I have no idea why you believe it, but, yep, if it were true, then people would not (necessarily) die for the reasons the book warns about.)