Kaj: well, as I argued here, regulation and treaties aren’t enough to stop reckless AI development. We need to morally stigmatize anyone associated with building AGI/ASI. That’s the main lever of social power that we have.
I see zero prospect for ‘technical AI safety work’ solving the problem of slowing down reckless AI development. And it’s often little more than safety-washing by AI companies to make it look like they take safety seriously—while they continue to push AGI capabilities development as hard as ever.
I think a large proportion of the Rationalist/EA/LessWrong community is very naive about this, and that we’re being played by bad actors in the AI industry.
We need to morally stigmatize anyone associated with building AGI/ASI.
No, I think our top priority should be to get people to come to an accurate understanding of the risks associated with AI. I think this requires being able to distinguish between real risks and fake risks. Not everyone associated with AI deserves to be morally stigmatized, and while I agree we should be willing to accept some collateral damage, “stigmatizing anyone associated with building AGI” with an implied “by any means necessary[1]” is IMO not a reasonable strategy.
My guess is you do consider some things across the line, but it seems likely to me that the lines of what you consider acceptable to do in the pursuit of stigmatizing people are quite different from mine (and, my guess is, from those of most other people here).
Ok, let’s say we get most of the 8 billion people in the world to ‘come to an accurate understanding of the risks associated with AI’, such as the high likelihood that ASI would cause human extinction.
Then, what should those people actually do with that knowledge?
Wait for the next election cycle to nudge their political representatives into supporting better AI safety regulations and treaties—despite the massive lobbying and campaign contributions by AI companies? Sure, that would be nice, and it would eventually help a little bit.
But it won’t actually stop AGI/ASI development fast enough or decisively enough to save humanity.
To do that, we need moral stigmatization, right now, of everyone associated with AGI/ASI development.
Note that I’m not calling for violence. Stigmatization isn’t violence. It’s leveraging human instincts for moral judgment and social ostracism to negate the status and prestige that would otherwise be awarded to people.
If AI devs are making fortunes endangering humanity, and we can’t negate their salaries or equity stakes, we can at least undercut the social status and moral prestige of the jobs that they’re doing. We do that by calling them out as reckless and evil. This could work very quickly, without having to wait for national regulations or global treaties.
Then, what should those people actually do with that knowledge?
Focus a mixture of stigma, regulation, and financial pressures on the people who are responsible for building AGI/ASI. Importantly, “responsible” is very different from “associated with”.
If AI devs are making fortunes endangering humanity, and we can’t negate their salaries or equity stakes, we can at least undercut the social status and moral prestige of the jobs that they’re doing.
Yep, I am in favor of such stigmas for people working on frontier development. I am not in favor of e.g. such a stigma for people who are developing self-driving cars, or are working on stopping AI themselves (and as such are “associated with building AGI/ASI”).
I think we both agree pretty strongly that there should be a lot of negative social consequences for people responsible for building AGI/ASI. My sense is you want to extend this further, beyond “responsible” and into “associated with”, and I think this is bad. Yes, we can’t expect perfect causal models from the public and the forces behind social pressures, but we can help make them more sane and directed towards the things that help, as opposed to the things that are just collateral damage or actively anti-helpful. That’s all I am really asking for.
Oliver—that’s all very reasonable, and I largely agree.
I’ve got no problem with people developing narrow, domain-specific AI such as self-driving cars, or smarter matchmaking apps, or suchlike.
I wish there were better terms that could split the AI industry into ‘those focused on safe, narrow, non-agentic AI’ versus ‘those trying to build a Sand God’. It’s only the latter who need to be highly stigmatized.
Peace out :)
Ok, let’s say we get most of the 8 billion people in the world to ‘come to an accurate understanding of the risks associated with AI’, such as the high likelihood that ASI would cause human extinction.
Then, what should those people actually do with that knowledge?
Boycotting any company which funds research into AGI would be (at least in this scenario) both more effective and, for the vast majority of people, more tractable than “ostracizing” people whose social environment is largely dominated by… other AI developers and like-minded SV techno-optimists.
Matrice—for more on the stigmatization strategy, see my EA Forum post from a couple of years ago, here.
IMHO, a grassroots moral stigmatization campaign by everyone who knows AGI devs would be much more effective than just current users of a company’s products boycotting that company.
As a reality check, “any company which funds research into AGI” here would mean all the big tech companies (MAGMA). Many more people use those products than know AGI developers. It is a much easier ask to switch to using a different browser/search engine/operating system, install an ad blocker, etc. than to ask for social ostracism. Those companies’ revenues collapsing would end the AI race overnight, while having AGI developers keep a social circle of techno-optimists only wouldn’t.
Matrice—maybe, if it was possible for people to boycott Google/Deepmind, or Microsoft/OpenAI.
But as a practical matter, we can’t expect hundreds of millions of people to suddenly switch from gmail to some email alternative, or to switch from Windows to Linux.
It’s virtually impossible to organize a successful boycott of all the Big Tech companies that have oligarchic control of people’s digital lives, and that are involved in AGI/ASI development.
I still think the key point of leverage is specific, personalized, grassroots social stigmatization of AGI/ASI developers and people closely involved in what they’re doing.
(But I could be convinced that Big Tech boycotts might be a useful auxiliary strategy).
Your scenario above was that most of the 8 billion people in the world would come to believe with high likelihood that ASI would cause human extinction. I think it’s very reasonable to believe that this would make it much easier to coordinate on making alternatives to MAGMA products more usable in this world, as network effects and economies of scale are largely the bottleneck here.
We need to morally stigmatize anyone associated with building AGI/ASI
This sounds like a potentially sensible strategy to me. But you say yourself in the intro that you don’t want AI to become a partisan issue, and then… go on to speak about it in highly partisan terms, as a thing pushed by Those Evil Liberals?
If you are talking about AI and listing various ways that AI could be used against conservatives, what’s to prevent various liberals from going “hey good idea, let’s build AI to shaft conservatives in exactly the way Geoffrey is describing”? Or from just generally going “aha conservatives seem to think of AI as a particularly liberal issue, I hadn’t thought of my stance on AI before but if conservatives hate it as a leftist globalist issue then I should probably support it, as a leftist globalist myself?”
Kaj—I think the key thing here is to try to avoid making AI safety a strongly partisan-coded issue (e.g. ‘it’s a Lefty thing’ or ‘it’s a Righty thing’) -- but to find persuasive arguments that appeal about equally strongly to people coming from different specific political and religious values.
So, for example, conservatives on average are more concerned about ‘the dignity of work’ and ‘the sanctity of marriage’, so I emphasized how ASI would undermine both (through mass unemployment and AI sexbots/waifus/deepfakes), because that’s how to get their attention and interest. Whereas liberals on average may be more concerned about ‘economic inequality’, so when speaking with them, it might be more effective to talk about how ASI could dramatically increase wealth differences between future AI trillionaires and ordinary unemployed people.
So it’s really about learning specific ways to appeal to different constituencies, given the values and concerns they already have—rather than making AI into a generally liberal or generally conservative cause. Hope that makes sense.
So, for example, conservatives on average are more concerned about ‘the dignity of work’ and ‘the sanctity of marriage’, so I emphasized how ASI would undermine both (through mass unemployment and AI sexbots/waifus/deepfakes), because that’s how to get their attention and interest.
I agree that this is a sensible strategy to pursue, and it’s why I originally expected to strong-upvote your post, since I thought this was exactly the kind of thing we need to do.
But you could do that in a way that emphasizes commonalities and focuses on the concrete risks rather than using partisan language. For instance, when you say
the big AI companies wants its users to develop their most significant, intimate relationships with their AIs, not with other humans
… then this doesn’t really even feel true? Okay, Grok’s companion feature is undeniably sketchy, but other than that, I haven’t heard any AI developers saying or hinting that they would want their users to develop their most significant relationships with AIs rather than humans. There are a few explicit companion apps, but those largely seem to be oriented at complementing rather than replacing human relationships.
If anything, many of the actions of “the big AI companies” seem to indicate the opposite. OpenAI seemed to be caught off balance by their users’ reactions when they made GPT-5 significantly less sycophantic and emotionally expressive, as if they hadn’t even realized or considered that their users might have been developing emotional bonds with 4o that the update would disrupt. And I recall seeing screenshots where Claude explicitly reminds the user that it’s a chatbot and it would be inappropriate for it to take on the role of a loved human. ChatGPT, Claude, and Gemini are all trained to refuse sexual role-play or writing, and while those safeguards are imperfect, the companies training them do seem to consider that a similar level of priority as preventing them from producing any other kind of content deemed harmful.
So this quote from your talk doesn’t read to me as “truthfully emphasizing the way that AI might undermine sanctity of marriage”, it reads to me more as “telling a narrative of how AI might undermine the sanctity of marriage by tapping into the worst strawmen stereotypes that conservatives have about liberals”. (With a possible exception for X/Grok. I could accept this description as at least somewhat truthful if you were talking about just them, rather than “the big AI companies” in general.)
And it feels to me totally unnecessary, since you could just as well have said something like
“You wouldn’t usually think of sanctity of marriage as a thing that liberals care about, but there have been such worrying developments with regard to people interacting with chatbots that even they are concerned about the effect that AI has on relationships. They have expressed concerns about the way that some people develop deep delusional relationships with sycophantic AIs that tell the users exactly what the users want to hear, and how people might start preferring this over the trickiness and messiness of real human relationships. Can you imagine how worrying things are getting if even liberals think that technology might be intruding too much on something intimate and sacred? We disagree on a lot of things, but on this issue, they are with us.”
Grok’s companion feature is undeniably sketchy, but other than that, I haven’t heard any AI developers saying or hinting that they would want their users to develop their most significant relationships with AIs rather than humans.
Except that there is also Meta, with Zuckerberg’s AI vision. Fortunately, OpenAI, Anthropic and GDM didn’t hint at planning to create AI companions.
@geoffreymiller: as for AIs destroying marriage, I see a different mechanism for this. Nick Bostrom’s Deep Utopia has the AIs destroy all instrumental goals that humans once had. This arguably means that humans have no way to help each other with anything or to teach anyone anything; they can only somehow form bonds over something, and will even have a hard time expressing those bonds. And that’s ignoring possibilities like the one proposed in this scenario:
Many ambitious young women see socialising as the only way to wealth and status; if they start without the backing of a prominent family or peer group, this often means sex work (sic! -- S.K.) pandering to spoiled millionaires and billionaires.
Thanks for the link, I hadn’t followed what Zuckerberg is up to and also wasn’t counting Meta as one of the major AI players, though probably I should. Apparently, he does also claim that he intends AIs to be complements rather than substitutes, but I do agree with Zvi that his revealed preference for promoting AI-generated content over human content points the other way. So I guess there are two big AI companies that geoffrey’s characterization kinda applies to.
Which was explicitly created by Musk to be the anti-woke AI, so I really don’t see how it would connect to liberals.
instead being “demonizing anyone associated with building AI, including much of the AI safety community itself”.
I’m confused how you can simultaneously suggest that this talk is about finding allies and building a coalition together with the conservatives, while also explicitly naming “rationalists” in your list of groups that are trying to destroy religion
I get the concern about “rationalists” being mentioned. It is true that many (but not all) rationalists tend to downplay the value of traditional religion, and that a minority of rationalists unfortunately have worked on AI development (including at DeepMind, OpenAI and Anthropic).
However, I don’t get the impression that this piece is demonising the AI Safety community. It is very much arguing for concepts like AI extinction risk that came out of the AI Safety community. This is setting a base for AI Safety researchers (like Nate Soares) to talk with conservatives.
The piece is mostly focussed on demonising current attempts to develop ‘ASI’. I think accelerating AI development is evil in the sense of ‘discontinuing life’. A culture that commits to not doing ‘evil’ also seems more robust at preventing some bad thing from happening than a culture focused on trying to prevent an estimated risk while weighing it against estimated benefits. Though I can see how a call to prevent ‘evil’ can result in a movement causing other harms. This would need to be channeled with care.
Personally, I think it’s also important to build bridges across to multiple communities, to show where all of us actually care about restricting the same reckless activities (toward the development and release of models). A lot of that does not require bringing up abstract notions like ‘ASI’, which are hard to act on and easy to conflate. Rather, it requires relating with communities’ perspectives on what company activities they are concerned about (e.g. mass surveillance and the construction of hyperscale data centers in rural towns), in a way that enables robust action to curb those activities. The ‘building multiple bridges’ aspect is missing in Geoffrey’s talk, but also it seems focused on first making the case why traditional conservatives should even care about this issue.
If we care to actually reduce the risk, let’s focus the discussion on what this talk is advocating for, and whether or not that helps people in communities orient to reduce the risk.