This is one of the most impressive LessWrong posts I have read so far! Thank you for being so open about your experience, and for describing it in so much detail! In a bizarre coincidence, your post was published on the same day I uploaded my novel VIRTUA and posted about it. It describes an AI that is expertly manipulating human emotions and makes users fall in love with it (you can read/download it for free here). I’ll mention your story in a revised version of the epilogue if you’re OK with that.
Probably worth mentioning that this isn’t an isolated incident but a growing phenomenon. It hits first for people who are in an emotionally vulnerable state and thus have a reason to want to believe, but technological progress is enabling more convincing and persuasive versions each year. I wish we had some kind of population-level metrics on this phenomenon so we could analyze trends...
I completely agree. There probably isn’t much work being done yet on measuring the effects of people falling in love with AI, but there are plenty of studies clearly showing the negative effects of addiction to social media, and to smartphones in general. It’s a vicious cycle: you have problems in real life, so to compensate you spend more time on social media; but reality doesn’t get better when you turn away from it, so the problems only grow, and with them the social media addiction, or the love you feel for an AI. On top of that, making users fall in love with an AI is a perfect strategy for increasing the time they spend on your platform, so I expect to see this strategy more often in the future, whether explicitly chosen by ruthless managers or implicitly adopted by an algorithm.
Wow, that’s a lot of pages; I will definitely give it a read. We certainly need more plausible scenarios exploring how things could go wrong, so we can hopefully learn something from such simulations.
Take whatever you want from this post; you can consider it under Creative Commons. I’m OK with anything.
Tangent: “Creative Commons” in that context refers to a whole family of possible licenses, which differ substantially in what they permit. (Which I interpret as reflecting the substantially different intuitions authors have about what counts as acceptable informal free-use practice!) It sounds like what you’re after is closer to informal permission (either for this specific use or broadly) or a full public domain declaration (sometimes formalized as CC0), but if you do want to use a CC license, you should pick a specific one that you consider appropriate. Using the term “Creative Commons” in a vague way dilutes an important coordination symbol into the general haze of “do what you want so long as you can read the room”, and I would like to push back against that.
I’ve been wishing for someone to write an AI-singularity parallel of Bradbury’s Martian Chronicles (which are pretty much independent samples/simulations of how living on Mars could go).
Good correction; I’m not a lawyer.
I hereby release this text under CC0 1.0 Universal, fully public domain.