I think you raise a very valid point, and I would suggest that it will need to be addressed on multiple levels. Do not expect any technical details here, as I am not an academic but a retired person who writes hard science fiction about social robots as a hobby.
With regard to your statement, “We don’t have enough data to be sure this should be regulated,” I assume you are referring to the technical aspects of AI. When it comes to human behavior, however, we have more than enough data: humans will pursue the potential of AI to exploit relationships in every way they can, and they will do so forever, just as they do with everything else.
We ourselves are a kind of AI, built on, among other things, a general biological rule you might call “Return On Calories Invested”: investing the fewest calories for the greatest return is one of biology’s most important evolutionary forces. Humans, of course, are the masters of this rule, science being the prime example, but crime is also a good example of our relentless pursuit of it.
Will emotional bonds with language models cause more harm than good? I think we are back at the old question, “Do guns kill people, or do people kill people?” AI will need to be dealt with in the same way, with laws. However, those laws will also evolve in the same way: some constitutional and regulatory law will be set down, as is happening in Europe, and then case law will follow to address each new form of crime that is invented. It is the old game of keeping up with the bad guys.
I agree with you that emotional attachment is certain to increase. Some of us become attached to a character in a novel, movie, or game and miss them afterwards, and there is the waifu phenomenon in Japan. The movie “Her” is, I think, a thought-provoking speculation. For an academic treatment, Kate Darling explores this in depth in her book “The New Breed”, or you can just watch one of her videos: http://www.katedarling.org/speakingpress
Since I write hard science fiction about social robots, much of it is about ethics and justice. Although those themes are mostly behind the scenes and implied, that is not always the case. By way of example, I’ll direct you to two of my stories. The first is just an excerpt from a much longer story; it describes the thesis of a young woman enrolled in a Masters of Ethics and Justice in AI program at a fictional institution. I use the term “incarnate” to mean an AI that is legally a citizen, with all the associated rights and responsibilities. Here is the excerpt…
[BEGIN]
Lyra’s thesis, Beyond Companions: Self-Aware Artificial Intelligence and Personal Influence, detailed a hypothetical legal case in which, in the early days of fully self-aware third-generation Companions (non-self-aware artificial general intelligence Companions being the second generation), the Union of West African States had sued the smallest of the big five manufacturers for including behavior that would encourage micro-transactions. The case argued that the company’s products exploited their ability to perceive human emotions and character to a much greater degree than people could. It was not a claim based on programming code, as it was not possible to make a simple connection between the emergent self of 3G models and their dynamic underlying code. 3G models had to be dealt with by the legal system the same way people were: based on behavior, law, arguments, and reasoning.
In Lyra’s thesis, the manufacturer argued that its products were incarnate and that the company was therefore not legally responsible for their behavior. The U.W.A.S. argued that if the company could not be held responsible for the possible harm caused by its products, it should not be allowed to manufacture them. Involving regulatory, consumer, privacy, and other areas of law, it was a landmark case that would impact the entire industry.
Both sides presented a wide spectrum of legal, ethical, and other arguments; however, the court’s final decision favored the Union. Lyra’s oral defense was largely centered on the ‘reasons for judgment’ portion of her hypothetical case. She was awarded her Master’s degree.
[END]
The excerpt is from https://solveforn.wordpress.com/
I think this excerpt echoes a real-world problem that will arrive very soon: AI writing its own code, and the question of who is responsible for what that code does.
Another issue is considered in my short story (1,500 words) “Liminal Life”, about a person who forms an attachment to their Companion but then can no longer afford the lease payments. No crime is involved, but you can easily see how this situation, like a drug dependency, could be exploited.
https://acompanionanthology.wordpress.com/liminal-life/
Please note that my stories are not intended as escapism or entertainment; they are reflections on issues and future possibilities. As such, a few of them consider how AI might be used as a medical device. For example, Socialware considers how an implant might address Social Communication Disorder, and The Great Pretender explores an external version of the same idea. Other stories, such as Reminiscing, about dementia, and Convergence, about a neurodiverse individual, consider other mental health issues. You can find them here:
https://acompanionanthology.wordpress.com/table-of-contents-volume-three/
In these stories I speculate on how AI might play a positive role in mental health, so I am interested in your future post about the mental health issues that such AIs might cause.
Thank you for your comment and everything you mentioned in it. I am a psychologist entering the field of AI policy-making, and I am starving for content like this.