The story has problems, and it’s not clear how it’s meant to be taken.
Way 1: we should believe the SAI, being a SAI, and so everyone will in fact be happier within a week. This creates cognitive dissonance, what with the scenario seeming flawed to us, and putting us in a position of rejecting a scenario that makes us happier.
Way 2: we should trust our reason, and evaluate the scenario on its own merits. This creates the cognitive dissonance of the SAI being really stupid. Yeah, being immortal and having a nice companion and good life support and protection is good, but it’s a failed utopia because it’s trivially improvable. The fridge logic is strong in this one, and much has been pointed out already: gays, opposite-sex friends, family. More specific than family: children. What happened to the five year olds in this scenario?
The AI was apparently programmed by a man who had no close female friends, no children, and was not close to his mother. Otherwise the idea that either catgirls or Belldandies should lead to a natural separation of the sexes would not occur. (Is the moral that such people should not be allowed to define gods? Duh.) If I had a catgirl/non-sentient sexbot, that would not make me spend less time with true female friends, or stop calling my mother (were she still alive). Catgirl doesn’t play Settlers of Catan or D&D or talk about politics. A Belldandy might, in the sense that finding a perfect mate often leads to spending less time with friends, but it still needn’t mean being happy with them being cut off, or being unreceptive to meeting new friends of either sex.
So yeah, it’s a pretty bad utopia, defensible only in the “hey, not dying or physically starving” way. But it’s implausibly bad, because it could be so much better by doing less work: immortalize people on Earth, angelnet Earth, give people the option of summoning an Idealized Companion. Your AI had to go to more effort for less result, and shouldn’t have followed this path if it had any consultation with remotely normal people. (Where are the children?)
I think Way 2 was what the author intended—it’s not actually meant to be a true utopia. Thus “failed utopia”.
But the story raises a couple of interesting questions that I don’t see answered.
How did the AI do all this, given the confines of human technology at the time it was set?
And if the AI could do it… what’s stopping a human from doing the same?
I envision someone having those precise thoughts on either Mars or Venus, and (either swiftly or gradually) discovering the methods needed to alter reality the same way the AI did. Soon, everything is set, if not “right”, at the very least back to “normal”.
… although perhaps the “perfect” mates are given their own distant world to live on, and grow without worry of human intervention anytime soon.
… it probably says something about me that I’d also, if I were this person, want to restore the AI to “life” just to trap it in a distant prison from which it can observe humanity, but not interact with anything… as a form of poetic justice for the distant prisons it tried to place humanity within.
Of course, then you’d just have lots of people throwing up on the sands of Earth, because setting everything “back to normal” involves separating them from mates with whom they have been extremely happy.
(Presumably you’d also have a lot of unhappy nonhumans on that distant world, for the same reasons. Assuming the mates really are nonhuman, which is to say the least not clear to me.)