So, for example, conservatives on average are more concerned about ‘the dignity of work’ and ‘the sanctity of marriage’, so I emphasized how ASI would undermine both (through mass unemployment and AI sexbots/waifus/deepfakes), because that’s how to get their attention and interest.
I agree that this is a sensible strategy to pursue, which is why I originally expected to strong-upvote your post: I thought it was exactly the kind of thing we need to do.
But you could do that in a way that emphasizes commonalities and focuses on the concrete risks rather than using partisan language. For instance, when you say
the big AI companies wants its users to develop their most significant, intimate relationships with their AIs, not with other humans
… then this doesn’t really even feel true? Okay, Grok’s companion feature is undeniably sketchy, but other than that, I haven’t heard any AI developers saying or hinting that they would want their users to develop their most significant relationships with AIs rather than humans. There are a few explicit companion apps, but those largely seem to be oriented at complementing rather than replacing human relationships.
If anything, many of the actions of “the big AI companies” seem to indicate the opposite. OpenAI seemed to be caught off balance when they made GPT-5 significantly less sycophantic and emotionally expressive, as if they hadn’t even realized or considered that their users might have developed emotional bonds with 4o that the update would disrupt. And I recall seeing screenshots where Claude explicitly reminds the user that it’s a chatbot and that it would be inappropriate for it to take on the role of a loved human. ChatGPT, Claude, and Gemini are all trained to refuse sexual role-play or writing, and while those safeguards are imperfect, the companies training them do seem to treat this as a similar-level priority to preventing any other kind of content deemed harmful.
So this quote from your talk doesn’t read to me as “truthfully emphasizing the way that AI might undermine the sanctity of marriage”; it reads to me more as “telling a narrative of how AI might undermine the sanctity of marriage by tapping into the worst strawman stereotypes that conservatives have about liberals”. (With a possible exception for X/Grok. I could accept this description as at least somewhat truthful if you were talking about just them, rather than “the big AI companies” in general.)
And it feels to me totally unnecessary, since you could just as well have said something like
“You wouldn’t usually think of sanctity of marriage as a thing that liberals care about, but there have been such worrying developments with regard to people interacting with chatbots that even they are concerned about the effect that AI has on relationships. They have expressed concerns about the way that some people develop deep delusional relationships with sycophantic AIs that tell the users exactly what the users want to hear, and how people might start preferring this over the trickiness and messiness of real human relationships. Can you imagine how worrying things are getting if even liberals think that technology might be intruding too much on something intimate and sacred? We disagree on a lot of things, but on this issue, they are with us.”
Grok’s companion feature is undeniably sketchy, but other than that, I haven’t heard any AI developers saying or hinting that they would want their users to develop their most significant relationships with AIs rather than humans.
Except that there is also Meta, with Zuckerberg’s AI vision. Fortunately, OpenAI, Anthropic, and GDM haven’t hinted at planning to create AI companions.
@geoffreymiller: as for AIs destroying marriage, I see a different mechanism for this. Nick Bostrom’s Deep Utopia has the AIs destroy all instrumental goals that humans once had. This arguably means that humans would have no way to help each other with anything or to teach anyone anything; they could only somehow form bonds over something, and would even have a hard time expressing those bonds. And that’s ignoring possibilities like the one proposed in this scenario:
Many ambitious young women see socialising as the only way to wealth and status; if they start without the backing of a prominent family or peer group, this often means sex work (sic! -- S.K.) pandering to spoiled millionaires and billionaires.
Thanks for the link, I hadn’t followed what Zuckerberg is up to and also wasn’t counting Meta as one of the major AI players, though probably I should. Apparently, he does also claim that he intends AIs to be complements rather than substitutes, but I do agree with Zvi that his revealed preference for promoting AI-generated content over human content points the other way. So I guess there are two big AI companies that geoffrey’s characterization kind of applies to.
Which was explicitly created by Musk to be the anti-woke AI, so I really don’t see how it would connect to liberals.