yeah, I can try to clarify some of my assumptions, which probably won’t be fully satisfactory to you, but a bit:
I’m trying to envision here a best-possible scenario with AI, where we really get everything right in the AI design and application (so yes, utopian)
I’m assuming that the question “is AI conscious?” is fundamentally ill-posed, since we don’t have a good definition of consciousness—hence I’m imagining AI as merely correlation-seeking statistical models. With this, we also remove any notion of AI having our “interests at heart” or doing anything “deliberately”
and so yes, I’m suggesting that humans may be having too much fun to reproduce with other humans, and won’t feel much need to. It’s more a matter of a certain carelessness than deliberate suicide.