This is another one of those AI impacts where something big is waiting to happen, and we are so unprepared that we don’t even have good terminology. (All I can add is that the male counterpart of a waifu is a “husbando” or “husbu”.)
One possible attitude is to say, the era of AI companions is just another transitory stage shortly before the arrival of the biggest AI impact of all, superintelligence, and so one may as well focus on that (e.g. by trying to solve “superalignment”). After superintelligence arrives, if humans and lesser AIs are still around, they will be living however it is that the super-AI thinks they should be living; and if the super-AI was successfully superaligned, all moral and other problems will have been resolved in a better way than any puny human intellect could have conceived.
That’s a possible attitude; if you believe in short timelines to superintelligence, it’s even a defensible attitude. But supposing we put that aside -
Another, bigger context for the issue of AI companions is the general phenomenon of AIs that can in some way function as people, and their impact on societies in which, until now, the only people have been humans. One possible impact is replacement: outright substitution of AIs for humans. There is overlap with the fear of losing your job to AI, though only some jobs require an AI that is “also a person”…
Actually, one way to think about the different forms of AI replacement of humans is just to think about the different roles and relationships that humans have in society. “Our new robot overlords”: that’s AIs replacing political roles. “AI took our jobs”: that’s AI replacing economic roles. AI art and AI science: that’s AI replacing cultural roles. And AI companions: that’s AI replacing emotional, sexual, familial, and friendship roles.
So one possible endpoint (from a human perspective) is 100% substitution. The institutions that evolved in human society actually outlive the human race, because all the roles are filled and maintained by AIs instead. Robin Hanson’s world of brain emulations is one version of this, and it seems clear to me that LLM-based agents are another way it could happen.
I’m not aware of any moral, legal, political, or philosophical framework that’s ready for this—either to provide normative advice, or even just good ontological guidance. Should human society allow there to be AIs that are also people? Can AIs even be people? If AI-people are allowed to exist, or will just inevitably exist, what are their rights, and what are their responsibilities? Are they excluded from certain parts of society, and if so, which parts and why? The questions come much more easily than the answers.
if you believe in short timelines to superintelligence
Due to the serial speed advantage of AIs, superintelligence is unnecessary for making humanity irrelevant within a few years of the first AGIs capable of autonomous, unbounded research. Conversely, without such AGI, the impact on society will remain bounded, not overturning everything.
Agreed with the first part of your comment (about superintelligence).
On the second part, I think immediately generalising the discussion to the role of “person” in society at large is premature. It’s an extremely important discussion to have, but it doesn’t bear on whether we should ban healthy under-30s from using AI partners today.
In general, I’m not a carbon chauvinist.
Let’s imagine a cyberpunk scenario: a very advanced AI partner (superhuman on emotional and intellectual levels) enters a relationship with a human and they decide to have a child, with the help of donor sperm (if the human in the couple is a woman) or with the help of a donor egg and gestation in an artificial womb (if the human in the couple is a man); the sexual orientation of the human or the AI in the couple doesn’t matter.
I probably wouldn’t be opposed to this. But we can discuss permitting these family arrangements after all the necessary technologies (AIs and artificial wombs) have matured sufficiently. So, I’m not against human–AI relationships in principle, but I think that the current wave of AI romance startups has nothing to do with crafting meaning and societal good.