I was talking with some people yesterday whom I accused of competing to espouse middling p(doom)s. One of them was talking about Aaronson’s Faust parameter [ i.e. the p(doom), assuming “everything goes perfectly” if ¬doom, at which you press the button and release superintelligent AI right now ]. And they had what I think was a good question: In what year do we foresee longevity escape velocity, assuming the AInotkilleveryoneist agenda succeeds and superintelligence is forestalled for decades?
The appropriate countervailing challenge question is: What is one plausible story for how a by-chance friendly ASI invents immortality within two years or so of its creation, while staying harmless to humanity? What is the tech tree, how does it traverse this tree, and what are the guardrails keeping it from going off on some exciting [ what is, effectively to a human, a ] pathology-gain-of-function tangent along the way?