For the past two-plus years I’ve been writing hard science fiction novellas and vignettes about social robots called Companions. They are set in this and the next two centuries.
After a thirty-year career in IT I am now retired; I write as a hobby and self-publish. I try to write 300-1k words per day and have written seven novellas and forty vignettes.
As a way to address my own social isolation at the time, about ten years ago I also researched and created a social activities group which I ran successfully for two years. Info is here…
https://socialwellness.wordpress.com/
I agree with your views and here are my own responses.
Most people will not ignore real relationships. We are “wetware”, not “software”, and there are significant elements missing from relationships with AI. My fictional Companions are embodied, physically barely distinguishable from humans, and are artificial general intelligences (AGI) or are fully conscious. Due to their technology they are phenomenally skilled at interpersonal communication. I expect some people would prefer this kind of relationship, but not everybody. As I suggest in my stories, it would be just another color on the spectrum of human relationships.
Also, I think Dr. Kate Darling’s view of things is important to keep in mind. Humans have had all kinds of pets and animals as companions or co-workers for millennia. As you point out, we also have all kinds of relationships with other people, but each of these relationships, be it with animals or humans, is distinct.
http://www.katedarling.org/speakingpress
I think negative views of chatbots underestimate both the future ability of AI and human nature. I believe chatbots have the potential to become “real” in their intentions and behavior. With advanced sensors for things like vocal changes and facial micro-expressions, plus detailed data about our behavior, AI will know us better than we know ourselves. People anthropomorphize in endless ways, and many will benefit from “on-screen” relationships, whether the avatars are perceived as friends, romantic partners or therapeutic counselors.
Most concerns seem to arise from chatbots as they are now, but they will evolve significantly in the coming years and decades. Certainly they can be exploited like any technology, but those issues will be addressed over time, just as the rest of our legal and ethical framework addressed every other technology. Human nature is always a two-sided coin.
In my stories, many of which focus on social issues or ethics and justice, most of the concerns regarding “chatbots” have long since been addressed by law, and AI is now an anti-corruption layer of government dealing with all public and private organizations. Screen-based, holographic or embodied companions are as common as cell phones. Contrary to what is popular, my stories contain no sex or violence and very little conflict other than internal. In my vignettes (short stories of around 1k words) I mostly focus on some issue that might arise in a world where AI has become much more social than it currently is: an AI working as an HR manager, a doctor or a detective; an implanted or external AI helping neurodiverse individuals; AI as friends, therapists or romantic partners.
If you think they may be of interest to you, they are found here...
https://acompanionanthology.wordpress.com/
The longer novellas focus on larger issues and the AIs are simply characters, so those may not be of interest to you.
I have written poems, songs and stories since childhood, so I can vouch for most of what you say about writers and characters. There are, in general, two kinds of writers, however, and I think that may affect the “independent agency” issue. Some writers plan their stories, but others, including famous authors like Stephen King, are “discovery writers”. Discovery writers do not plan their stories; instead they create their characters and the situation and let the characters decide and dictate everything. I imagine, although I don’t know for sure, that planners would be less inclined to the “independent agency” effect. As a discovery writer myself, I can tell you that I depend entirely upon it. Characters do or say things not because of any plan I have but because it is what they would do as independent agents. I just write it down.
Not sure I’ve added anything to your argument other than support but hopefully I’ve added some food for thought on the subject.
Humans have had all kinds of pets and animals as companions or co-workers for millennia.
Crucial disanalogies between AI partners and pets or animal companions (as well as porn, addictive junk food, gambling, casual dating/hookups, simping on OnlyFans, etc.) are:
1) People who have pets and animal companions (and even love them!) still usually seek romantic relationships with other humans. People who fall in love with AI partners, and have virtual and even physical sex with them (e.g., with a sex doll and a VR headset that projects the visual features of the AI girlfriend onto the doll), usually won’t seek real human relationships.
2) People who are in relationships with AIs will spend the cognitive and emotional effort that usually goes towards communicating with human partners, and towards forming and spreading the memes that build the fabric of society, on communicating with AI partners. This will be “wasted” effort from the societal and cultural points of view, unless AIs are full members of the society themselves, as I pointed out in another comment. But current AI partners are not there. For an AI to be a full member of society that learns alongside people and participates in forming and disseminating memes in an intelligent way, the AI should probably already be an AGI, and have legal rights similar to those of humans.
People who fall in love with AI partners, and have virtual and even physical sex with them (e.g., with a sex doll and a VR headset that projects the visual features of the AI girlfriend onto the doll), usually won’t seek real human relationships.
I don’t think we yet have enough data to say anything about what the usual case is like (I haven’t seen any representative studies of the relationship behavior of people falling in love with AI partners).
Disclaimer: I’m not a professional psychologist and am mostly not familiar with the literature, but the following propositions seem at least highly likely to me:
In human psychology, there is a “romantic love” type of emotional relationship, which is exclusive in most people (serial monogamy), with only a minority of people finding themselves truly in love with two other people simultaneously.
AI girlfriends who look like the hottest girls in the world won’t occupy any other niche in men’s psychology than “romantic love”. It’s not “pet love” or platonic love towards friends and family members. Human psychology won’t “spin up” a new, distinct type of love relationship because there is no driving force for this, except for knowing that the AI partner is “not real”, which I think is a rather weak deterrent (moreover, this fact could even be seriously questioned soon).
There is a simple proof for the above: people fall into genuine romantic love very easily and very quickly from chat and (video) calls alone; a “flesh and blood” meeting is not required. For most people, having only a chat and a few photographs of the person is enough to fall in love; even phone calls or videos are not required. For some people, chat alone (or, in the old times, exchanging written letters), without even a single photograph, is enough to fall in love with a person and to dream of nothing except meeting that person.
Thus, falling in love as in the movie “Her” is not just “hypothetical”, or applicable only to a tiny slice of weirdos; it’s rather plausible from the historical perspective, when falling in love upon exchanging texts alone was at least relatively common. Note that with AI partners, this will soon be exacerbated by the fact that they will be completely unique in terms of their personality (character.ai), looks (simps.ai), and voice, generated specifically for the user. This will add a feeling of exclusivity and will make falling in love with these AIs psychologically much more “justifiable” for people (as people will justify it for themselves in their mind).
People can be in love and be deeply troubled by that. In previous times (and still in some parts of the world), this would often be interclass love (Titanic-style). Or it could be a clash over critical life decisions: country of living, having or not having children, acceptable risk in the partner (e.g., the partner does extreme sports or fighting), etc. True, this does lead to breakups, but they are at the least extremely painful, or even traumatic, for people. And many people can never overcome this, keeping love towards those they were forced to leave for the rest of their lives, even after they find new love. This experience may sound beautiful and dramatic, but it’s literally zero fun and people would prefer not to go through it. So it’s likely that, at least for a sizeable part of the AI partner userbase, attempts to “abandon” it and find a human partner instead will be like that. Effectively, the reason is similar to what often happens in human pairs: a child-free person falls in love with someone who wants kids, but they can’t “convince” each other. Or one of the partners can’t have kids for medical reasons.
Which of the above points seem less than highly likely to you?
I think these are generally reasonable, but that the prevalence of polygamous societies is an indication that the first point is significantly culturally influenced, e.g. Wikipedia:
Worldwide, different societies variously encourage, accept or outlaw polygamy. In societies which allow or tolerate polygamy, polygyny is the accepted form in the vast majority of cases. According to the Ethnographic Atlas Codebook, of 1,231 societies noted from 1960 to 1980, 588 had frequent polygyny, 453 had occasional polygyny, 186 were monogamous, and 4 had polyandry[5] – although more recent research found some form of polyandry in 53 communities, which is more common than previously thought.[6] In cultures which practice polygamy, its prevalence among that population often correlates with social class and socioeconomic status.[7] Polygamy (taking the form of polygyny) is most common in a region known as the “polygamy belt” in West Africa and Central Africa, with the countries estimated to have the highest polygamy prevalence in the world being Burkina Faso, Mali, Gambia, Niger and Nigeria.[8]
1) People who have pets and animal companions (and even love them!) still usually seek romantic relationships with other humans
Do they?
I mean, of course pet lovers still usually seek intimate relationships with other humans. But I think there’s pretty strong evidence that loving your pet too much will distract you a lot from having children. Also, it’s not uncommon to break up with your partner because your partner does not love pets as much as you do (don’t tell me you’ve never heard of the “it’s me or the dog” ultimatum).
I think there’s pretty strong evidence that loving your pet too much will distract you a lot from having children.
Maybe? That article only seemed to say that many people own pets and don’t have children, but that doesn’t show that those people would have children if they couldn’t have a pet. After all, there are also many people who have neither children nor pets.
I’ve linked the first article I found after a 3-second search, since I assume basically everyone already has a lot of anecdotal evidence about people spending insane amounts of time taking care of a pet (usually a dog). For example, in recent years I’ve several times seen people walking their dog in a stroller in such a way that, from a distance, you’d probably assume there’s a human baby inside. If that doesn’t scream “I’m using a dog as a substitute for a child”, I don’t know what does.
For example, in recent years I’ve several times seen people walking their dog in a stroller in such a way that, from a distance, you’d probably assume there’s a human baby inside.
I guess this is partly a cultural thing; I don’t recall ever witnessing that in Finland.
Of course, it’s all a matter of degree: some people channel their love to pets alone, some to partners and pets but not children, etc. I was simplifying.
I don’t think this affects the high-level points I’m making: widespread AI partners will have a rather catastrophic effect on society, unless we bet on a relatively quick transformation into even weirder societal states, with AGIs as full members of society (including as romantic partners), BCI, mind uploads, Chalmers’ experience machines, etc.
However, AI partners don’t appear to be net positive without assuming all these downstream changes, and there would be no problem with introducing AI partners only once these downstream advances become available. (There is a counterargument that there is some benefit to letting society “adjust” to new arrangements, but it doesn’t make sense in this context, given the expected net negativity of this adjustment and maybe even a “nuclear energy effect” from bad first experiences.) Therefore, introducing future civilisational transformations into the argument doesn’t bail out AI partners as permissible businesses as of 2023.