In The Matrix, the role of humans was quite similar to the role of mitochondria. (Except that it does not actually make sense as an energy source.)
I imagine that at the beginning, humans could be useful to young AIs, which would excel at some skills but fail at others. (One important role would be providing a human “face” in interactions with humans who don’t like AIs.) However, that usefulness would only be temporary.
A eukaryotic cell cannot find a short-term replacement for mitochondria, and in evolution the long term does not happen without the short term. An intelligent designer—such as a self-improving AI—could, however, spend the time and resources to research a more efficient replacement for the functions the humans provide, if it would make sense in the long term.
On the other hand, if the AI is under so much pressure that it cannot afford to do research, it probably also cannot afford to provide luxuries to its humans. So the humans will become the equivalent of cage-bred chickens.
> An intelligent designer—such as a self-improving AI—could, however, spend the time and resources to research a more efficient replacement for the functions the humans provide, if it would make sense in the long term.
Or it could improve the humans themselves (selective breeding, genetic modification, etc.).
> On the other hand, if the AI is under so much pressure that it cannot afford to do research, it probably also cannot afford to provide luxuries to its humans.
I’m not clear on what scenario would give rise to this. Knowing that a meteor on its current course is headed for Earth and will have a very strong negative impact seems like it could induce a lot of pressure while not (immediately) constraining resources much (aside from time). Perhaps the AI could not afford to do R&D on new luxuries, or to maintain existing ones that depend on ongoing research.