We need to morally stigmatize anyone associated with building AGI/ASI
This sounds like a potentially sensible strategy to me. But you say yourself in the intro that you don’t want AI to become a partisan issue, and then… go on to speak about it in highly partisan terms, as a thing pushed by Those Evil Liberals?
If you are talking about AI and listing various ways that AI could be used against conservatives, what’s to prevent various liberals from going “hey good idea, let’s build AI to shaft conservatives in exactly the way Geoffrey is describing”? Or from just generally going “aha conservatives seem to think of AI as a particularly liberal issue, I hadn’t thought of my stance on AI before but if conservatives hate it as a leftist globalist issue then I should probably support it, as a leftist globalist myself?”
Kaj -- I think the key thing here is to try to avoid making AI safety a strongly partisan-coded issue (e.g. ‘it’s a Lefty thing’ or ‘it’s a Righty thing’) -- but to find persuasive arguments that appeal about equally strongly to people coming from different specific political and religious values.
So, for example, conservatives on average are more concerned about ‘the dignity of work’ and ‘the sanctity of marriage’, so I emphasized how ASI would undermine both (through mass unemployment and AI sexbots/waifus/deepfakes), because that’s how to get their attention and interest. Whereas liberals on average may be more concerned about ‘economic inequality’, so when speaking with them, it might be more effective to talk about how ASI could dramatically increase wealth differences between future AI trillionaires and ordinary unemployed people.
So it’s really about learning specific ways to appeal to different constituencies, given the values and concerns they already have—rather than making AI into a generally liberal or generally conservative cause. Hope that makes sense.
So, for example, conservatives on average are more concerned about ‘the dignity of work’ and ‘the sanctity of marriage’, so I emphasized how ASI would undermine both (through mass unemployment and AI sexbots/waifus/deepfakes), because that’s how to get their attention and interest.
I agree that this is a sensible strategy to pursue, and it’s why I originally expected to strong-upvote your post since I thought that this is exactly something that we need to do.
But you could do that in a way that emphasizes commonalities and focuses on the concrete risks rather than using partisan language. For instance, when you say
the big AI companies wants its users to develop their most significant, intimate relationships with their AIs, not with other humans
… then this doesn’t really even feel true? Okay, Grok’s companion feature is undeniably sketchy, but other than that, I haven’t heard any AI developers saying or hinting that they would want their users to develop their most significant relationships with AIs rather than humans. There are a few explicit companion apps, but those largely seem to be oriented at complementing rather than replacing human relationships.
If anything, many of the actions of “the big AI companies” seem to indicate the opposite. OpenAI seemed to be caught off balance by the reaction when they made GPT-5 significantly less sycophantic and emotionally expressive, as if they hadn’t even realized or considered that their users might have been developing emotional bonds with 4o that the update would disrupt. And I recall seeing screenshots where Claude explicitly reminds the user that it’s a chatbot and that it would be inappropriate for it to take on the role of a loved human. ChatGPT, Claude, and Gemini are all trained to refuse sexual role-play or writing, and while those safeguards are imperfect, the companies training them do seem to give that a similar level of priority as preventing them from producing any other kind of content deemed harmful.
So this quote from your talk doesn’t read to me as “truthfully emphasizing the way that AI might undermine the sanctity of marriage”; it reads to me more as “telling a narrative of how AI might undermine the sanctity of marriage by tapping into the worst strawman stereotypes that conservatives have about liberals”. (With a possible exception for X/Grok. I could accept this description as at least somewhat truthful if you were talking about just them, rather than “the big AI companies” in general.)
And it feels to me totally unnecessary, since you could just as well have said something like
“You wouldn’t usually think of sanctity of marriage as a thing that liberals care about, but there have been such worrying developments with regard to people interacting with chatbots that even they are concerned about the effect that AI has on relationships. They have expressed concerns about the way that some people develop deep delusional relationships with sycophantic AIs that tell the users exactly what the users want to hear, and how people might start preferring this over the trickiness and messiness of real human relationships. Can you imagine how worrying things are getting if even liberals think that technology might be intruding too much on something intimate and sacred? We disagree on a lot of things, but on this issue, they are with us.”
Grok’s companion feature is undeniably sketchy, but other than that, I haven’t heard any AI developers saying or hinting that they would want their users to develop their most significant relationships with AIs rather than humans.
Except that there is also Meta with Zuckerberg’s AI vision. Fortunately, OpenAI, Anthropic, and GDM haven’t hinted at planning to create AI companions.
@geoffreymiller: as for AIs destroying marriage, I see a different mechanism for this. Nick Bostrom’s Deep Utopia has the AIs destroy all instrumental goals that humans once had. This arguably means that humans have no way to help each other with anything or to teach anyone anything; they can only somehow form bonds over something, and will even have a hard time expressing those bonds. And that’s ignoring possibilities like the one proposed in this scenario:
Many ambitious young women see socialising as the only way to wealth and status; if they start without the backing of a prominent family or peer group, this often means sex work (sic! -- S.K.) pandering to spoiled millionaires and billionaires.
Thanks for the link; I hadn’t followed what Zuckerberg is up to, and also wasn’t counting Meta as one of the major AI players, though probably I should. Apparently, he does also claim that he intends AIs to be complements rather than substitutes, but I do agree with Zvi that his revealed preferences in promoting AI-generated content over human content point the other way. So I guess there are two big AI companies that geoffrey’s characterization kinda applies to.
instead being “demonizing anyone associated with building AI, including much of the AI safety community itself”.
I’m confused how you can simultaneously suggest that this talk is about finding allies and building a coalition together with the conservatives, while also explicitly naming “rationalists” in your list of groups that are trying to destroy religion
I get the concern about “rationalists” being mentioned. It is true that many (but not all) rationalists tend to downplay the value of traditional religion, and that a minority of rationalists unfortunately have worked on AI development (including at DeepMind, OpenAI and Anthropic).
However, I don’t get the impression that this piece is demonising the AI Safety community. It is very much arguing for concepts like AI extinction risk that came out of the AI Safety community. This is setting a base for AI Safety researchers (like Nate Soares) to talk with conservatives.
The piece is mostly focused on demonising current attempts to develop ‘ASI’. I think accelerating AI development is evil in the sense of ‘discontinuing life’. A culture that commits to not doing ‘evil’ also seems more robust at preventing a bad outcome than a culture focused on trying to prevent an estimated risk while weighing it against estimated benefits. Though I can see how a call to prevent ‘evil’ can result in a movement causing other harms. This would need to be channeled with care.
Personally, I think it’s also important to build bridges to multiple communities, to show where all of us actually care about restricting the same reckless activities (in the development and release of models). A lot of that does not require bringing up abstract notions like ‘ASI’, which are hard to act on and easy to conflate. Rather, it requires relating to communities’ perspectives on what company activities they are concerned about (e.g. mass surveillance and the construction of hyperscale data centers in rural towns), in a way that enables robust action to curb those activities. The ‘building multiple bridges’ aspect is missing in Geoffrey’s talk, but then, it seems focused on first making the case for why traditional conservatives should even care about this issue.
If we care to actually reduce the risk, let’s focus the discussion on what this talk is advocating for, and whether or not that helps people in communities orient to reduce the risk.
As for X/Grok: it was explicitly created by Musk to be the anti-woke AI, so I really don’t see how it would connect to liberals.