I strongly upvoted this post, not because I agree with the premises or conclusions, but because I think there is a despair that comes with inhabiting a community with some very vocal short-timeliners, and if you feel that despair, these are the sort of questions that you ask, as an ethical and intelligent person. But you have to keep on gaming it all the way down; you can’t let the despair stop you at the bird’s-eye view, although I wouldn’t blame a given person for letting it anyway.
There is some chance that your risk assessment is wrong, which your probabilistic world-model should include. It’s the same thing that should have occurred to the Harold Camping listeners.
The average AI researcher here would probably assign a small slice of probability mass to the set of nested conditions that would actually implicate the duty not to bring children into the world:
AGI is created during your kid’s life.
AGI isn’t safe.
AGI isn’t friendly, either.
AGI makes the world substantially worse for humans during your kid’s life. (Only here, in my opinion, do we have to start meaningfully engaging with the probabilities.)
AGI kills all humans during your kid’s life. (The more-lucid thinkers here see the AGI’s dominant strategy as: kill all humans ASAP while spending as few resources as possible. This militates for quick and unexpected, certainly quicker and more unexpected than COVID.)
AGI kills all humans violently during your kid’s life. (There’s no angle for the AGI here, so why would it do it? Do you meaningfully expect that this might happen?)
AGI kills all humans violently after torturing them during your kid’s life. (Same, barring a few infamous cognitohazards.)
From the individual perspective, getting nonviolently and quickly killed by an AGI doesn’t seem much better or worse to me than suddenly dying because, e.g., a baseball hit you in the sternum (except for the fleeting moment of sadness that humanity failed because we were the first Earth organism to invent AGI but were exactly as smart as we needed to be to do that and not one iota smarter). There are background risks to being alive.