This seems to be a bizarre mangling of several different scenarios.
Yet in most of the world, humans will probably no longer be useful to anything or anyone – even to each other – and will peacefully and happily die off.
Many humans will want to avoid death as long as they can, and to have children. Most humans will not think "robots do all that boring factory work, therefore I'm useless, therefore I should kill myself now". If the robots also do nappy changing and the like, it might encourage more people to become parents. And there are some humans who want humanity to continue, and some who want to be immortal.
Having been trained to understand our human needs and human nature in minute detail, the AI we leave behind will be the sum total of all human values, desires, knowledge and aspiration.
I think that this is not necessarily true. There are designs of AI that don't have human values. It's possible for an AI to understand human values in great detail but still care about something else. This is one of the problems MIRI is trying to avoid.
At that point, or soon thereafter, in the perfect world we can imagine all humans being provided all the basic needs without needing to work.
There is some utopian assumption here. Presumably the AIs have a lot of power at this point. Why are they using this power to create the bargain-basement utopia you described? What stops an AI from indiscriminately slaughtering humans?
Also, in the last paragraphs, I feel you are assuming the AI is rather humanlike. Many AI designs will be seriously alien. They do not think like you. There is no reason to assume they would be anything recognisably conscious.
And since by then the AI-economy will have already had a long run of human-supervised self-sufficiency, there is no reason to fear that without our oversight the robots left behind will run the world any worse than we can.
A period of supervision doesn't prove much. There are designs of AI that behave when humans are watching and then misbehave when humans aren't watching. Maybe we have trained them to make good, responsible use of the tech that existed at training time, but if they invent new, different tech, they may use it in ways we wouldn't want.
It really isn't clear what is supposed to be happening here. Did we build an AI that genuinely had our best interests at heart, but it turned out immortality was too hard, and the humans were having too much fun to reproduce? (Even though reproducing is generally considered to be quite fun.) Or were these AIs deliberately trying to get rid of humanity? In which case, why didn't all humans drop dead the moment the AI got access to serious weaponry?
Yeah, I can try to clarify some of my assumptions, which probably won't be fully satisfactory to you, but here's a bit:
I'm trying to envision here a best-possible scenario with AI, where we really get everything right in the AI design and application (so yes, utopian).
I'm assuming that the question "is AI conscious?" is fundamentally ill-posed, as we don't have a good definition of consciousness; hence I'm imagining AI as merely correlation-seeking statistical models. With this, we also remove any notion of AI having "interests at heart" or doing anything "deliberately".
And so yes, I'm suggesting that humans may be having too much fun to reproduce with other humans, and won't feel much need to. It's more a matter of a certain carelessness than deliberate suicide.