I’m very, very skeptical of this idea, as one generally should be of attempts to carve exceptions out of people’s basic rights and freedoms. If you’re wrong, your policy recommendations would cause a very large amount of damage. This post unfortunately has little discussion of the drawbacks of the proposed policy, only of the benefits. But it would surely have many drawbacks. People who would be made happier, or helped to become kinder or more productive, by their AI partners would not get those benefits. On the margin, more people would stay in relationships they’d be better off leaving, because they fear being alone and lack the backup option of an AI relationship. People who genuinely have no chance of ever finding a romantic relationship would not have a substitute to help make their lives more tolerable.
Most of all, such a policy dictates to people whom, or what, they may talk to. This is not a freedom one should lightly curtail, and many countries guarantee it as part of their constitutions. If the legal system tells people it knows better than they do which images and words they should be allowed to look at, because some images and words are “psychologically harmful”, that’s pretty suspicious. Humans tend to have pretty good psychological intuition. And even when people happen to be wrong about what’s good for them in a particular case, taking away their rights should be counted as a very large cost.
You (and/or other people in the “ban AI romantic partners” coalition) should be trying to gather much more information about how this would actually affect people: e.g. running short-term experiments, and long-term experiments with prediction markets on the outcomes. We need to know whether the harms you predict are real. This issue is too serious to run a decade-long prohibition that ends with a “guess we were wrong”.
For those readers who hope to make use of AI romantic companions, I do also have some warnings:
You should know in a rough sense how the AI works and the ways in which it’s not a human.
For most current LLMs, a very important point is that they have no memory other than the text in their context window. When generating each token, they “re-read” everything in the context window before predicting. None of their internal calculations are preserved from one token to the next: everything is forgotten, and the entire context window is read again.
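To make this statelessness concrete, here is a minimal sketch in Python. The `generate` function is a stand-in, not any real LLM API; the point is its signature: it takes the full transcript on every call and retains nothing between calls.

```python
# Sketch: an LLM is (approximately) a pure function of its context window.
# `generate` is a stub standing in for a real model call.

def generate(context: str) -> str:
    # A real model would predict tokens from `context`; this stub just
    # reports how much history it was handed, to show what it can "see".
    return f"(reply based on {len(context)} chars of context)"

def chat_turn(history: list[str], user_message: str) -> list[str]:
    history = history + [f"User: {user_message}"]
    # Every turn, the WHOLE history is re-read from scratch.
    reply = generate("\n".join(history))
    return history + [f"AI: {reply}"]

history: list[str] = []
history = chat_turn(history, "Hello!")
history = chat_turn(history, "Remember me?")
```

The “relationship memory” lives entirely in the `history` list that the chat interface keeps resending; delete it and, from the model’s perspective, you have never met.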
LLMs can be quite dumb, and not always in the ways a human would expect. Some of this has to do with the wacky way we force them to generate text; see above.
A human might think about you even when they’re not actively talking to you, but just going about their day. (Of course, most of the time they aren’t thinking about you at all; their personality is continually developing and changing based on the events of their lives.) LLMs don’t go about their day, or really have an independent existence at all; they’re just there to respond to prompts.
In the future some of these facts may change: the AIs may become more human-like, or at least more agent-like. You should know all such details about your AI companion of choice.
Not your weights, not your AI GF/BF
What hot new startups can give, hot new startups can take away. If you’re going to form an emotional attachment to one of these things, it’s only prudent to make sure your ability to run it is independent of the whims and financial fortunes of some random company. Download the weights, and keep an up-to-date local copy of all the context the AI uses as its “memory”.
See point 1: knowing roughly how the thing works is helpful for this.
If the legal system tells people it knows better than they do which images and words they should be allowed to look at, because some images and words are “psychologically harmful”, that’s pretty suspicious. Humans tend to have pretty good psychological intuition.
This seems demonstrably wrong in the case of technology.
People who would be made happier, or helped to become kinder or more productive, by their AI partners would not get those benefits.
As I mentioned in the post, as well as in many comments here, how about AI-partner startups demonstrating that these positive effects dominate the negative effects, rather than vice versa? AI partner tech is, on the face of it, addictive psycho-technology: “lower love” is also a form of addiction, and you cannot yet have “higher love”, i.e., a shared fate, with the present wave of AI partners, since they are not yet learning, conscious, or moral patients. Why do we hold such technology to a different standard than new medications?
On the margin, more people would stay in relationships they’d be better off leaving, because they fear being alone and lack the backup option of an AI relationship.
I commented on this logic here; I don’t think it makes sense. People mostly stay in relationships they’d be better off leaving either because there is a roller-coaster dynamic they also partially enjoy (in which case knowing an AI is waiting for them outside the relationship probably doesn’t help), or for financial reasons or reasons of convenience. People may fear being physically alone (at home, in the social sense, etc.), but hardly anyone is so attached to the notion of being “in a relationship” that the prospect of paying 20 bucks per month for a virtual AI friend will be enough to quiet their anxiety about leaving a human partner. I can’t rule this out, but I think there are really few people like this.
People who genuinely have no chance of ever finding a romantic relationship would not have a substitute to help make their lives more tolerable.
In the policy that I proposed, I made a provision for these cases: basically, a psychotherapist could “prescribe” an AI partner to such a person.
Most of all, such a policy dictates to people whom, or what, they may talk to. This is not a freedom one should lightly curtail, and many countries guarantee it as part of their constitutions. If the legal system tells people it knows better than they do which images and words they should be allowed to look at, because some images and words are “psychologically harmful”, that’s pretty suspicious. Humans tend to have pretty good psychological intuition. And even when people happen to be wrong about what’s good for them in a particular case, taking away their rights should be counted as a very large cost.
AI partners won’t be “who” yet; that’s a very important qualification. As soon as AIs become conscious and/or moral patients, of course there should be no restrictions. But without that, in your passage you can replace “AI partner” or “image” with “heroin” and nothing qualitatively changes. Forbidding companies that distribute heroin is not “taking away freedoms”, even though just taking heroin (or another drug) that you somehow obtained (found on the street, let’s say) is still a freedom.
You (and/or other people in the “ban AI romantic partners” coalition) should be trying to gather much more information about how this would actually affect people: e.g. running short-term experiments, and long-term experiments with prediction markets on the outcomes. We need to know whether the harms you predict are real. This issue is too serious to run a decade-long prohibition that ends with a “guess we were wrong”.
How do you imagine that I, or any other lone concerned voice, could marshal sufficient resources to do these experiments? Moreover, all AI partners (created by different startups) will be different and may lead to different psychological effects. So, clearly, we should adopt a general policy under which it is the duty of these startups to demonstrate that their products are safe, long-term, for their users’ psychology. Product safety regulations are normal; they exist for food, for medicine, and in many other fields. Even current nascent AI psychotherapy tools need to pass something like clinical trials, because they are classified as a kind of “medical tool” and are therefore covered by these regulations.
Somehow moving AI romance outside this umbrella of product categories regulated for safety, and taking a completely laissez-faire approach, just doesn’t make sense. What if an AI partner created by a certain company really inflames misogyny in the men who use it? So far in these discussions we have even assumed that AI partner startups will be “benign” and “good faith”; but if we grant such assumptions, we shouldn’t regulate food and lots of other things either.
AI partners won’t be “who” yet; that’s a very important qualification.
I’d consider a law banning people from using search engines like Google, Bing, or Wolfram Alpha, or video games like GTA or The Sims, to still be a very bad imposition on people’s basic freedoms. Maybe “free association” isn’t the right term, but there’s definitely an important right for which you’d be creating an exception. I’d also be curious to hear how you plan to determine when an AI has reached the point where it counts as a person.
But without that, in your passage you can replace “AI partner” or “image” with “heroin” and nothing qualitatively changes.
I don’t subscribe to the idea that one can swap out arbitrary words in a sentence while leaving its truth-value unchanged. Heroin directly alters your neurochemistry. Pure information is not necessarily harmless, but it is something you have the option to ignore or disbelieve at any point in time; it essentially provides data, rather than directly hacking your motivations.
How do you imagine that I, or any other lone concerned voice, could marshal sufficient resources to do these experiments?
How much do you expect these experiments would cost? $500,000? Let’s say $2 million, just to be safe. Presumably you’re going to try to convince the government to implement your proposed policy. If you happen to be wrong, implementing such a policy will do far more than $2 million of damage. If it’s worth putting fairly authoritarian restrictions on the actions of millions of people, it’s worth paying a pretty big chunk of money to run the experiment first. You already have a list of asks in your policy recommendations section; why not ask for experiment funding in the same list?
Moreover, all AI partners (created by different startups) will be different and may lead to different psychological effects.
One experimental group is banned from all AI partners; the other group may use any of them they choose. Generally you want the groups in such experiments to correspond to the policy options you’re considering. (And you always want a control group, corresponding to “no change to existing policy”.)
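For concreteness, the group assignment could be sketched as below. This is illustrative only (arm names are made up, and a real trial would also need power analysis, preregistration, and outcome measures); it just shows the three arms corresponding to the policy options: a ban, unrestricted access, and the status quo as control.

```python
import random

# Hypothetical arm labels matching the policy options under discussion.
ARMS = ["banned_from_ai_partners", "free_choice_of_ai_partners", "control_status_quo"]

def assign_arms(participant_ids: list[str], seed: int = 0) -> dict[str, str]:
    """Randomly assign participants to arms in near-equal numbers."""
    rng = random.Random(seed)  # fixed seed => reproducible assignment
    ids = participant_ids[:]
    rng.shuffle(ids)
    # Round-robin over the shuffled list keeps arm sizes balanced.
    return {pid: ARMS[i % len(ARMS)] for i, pid in enumerate(ids)}
```

Randomizing at the individual level like this is what lets you attribute outcome differences to the policy arm rather than to who chose to use an AI partner.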
A backup you haven’t tested isn’t a backup.