Ok, let’s say we get most of the 8 billion people in the world to ‘come to an accurate understanding of the risks associated with AI’, such as the high likelihood that ASI would cause human extinction.
Then, what should those people actually do with that knowledge?
Wait for the next election cycle to nudge their political representatives into supporting better AI safety regulations and treaties—despite the massive lobbying and campaign contributions by AI companies? Sure, that would be nice, and it would eventually help a little bit.
But it won’t actually stop AGI/ASI development fast enough or decisively enough to save humanity.
To do that, we need moral stigmatization, right now, of everyone associated with AGI/ASI development.
Note that I’m not calling for violence. Stigmatization isn’t violence. It’s leveraging human instincts for moral judgment and social ostracism to negate the status and prestige that would otherwise be awarded to AI developers.
If AI devs are making fortunes endangering humanity, and we can’t negate their salaries or equity stakes, we can at least undercut the social status and moral prestige of the jobs that they’re doing. We do that by calling them out as reckless and evil. This could work very quickly, without having to wait for national regulations or global treaties.
Focus a mixture of stigma, regulation, and financial pressures on the people who are responsible for building AGI/ASI. Importantly, “responsible” is very different from “associated with”.
Yep, I am in favor of such stigmas for people working on frontier AI development. I am not in favor of e.g. such a stigma for people who are developing self-driving cars, or are working on stopping AI themselves (and as such are “associated with building AGI/ASI”).
I think we both agree pretty strongly that there should be a lot of negative social consequences for people responsible for building AGI/ASI. My sense is you want to extend this further beyond “responsible” and into “associated with”, and I think this is bad. Yes, we can’t expect perfect causal models from the public and the forces behind social pressures, but we can help make them more sane and directed towards the things that help, as opposed to the things that are just collateral damage or actively anti-helpful. That’s all I am really asking for.
Oliver—that’s all very reasonable, and I largely agree.
I’ve got no problem with people developing narrow, domain-specific AI such as self-driving cars, or smarter matchmaking apps, or suchlike.
I wish there were better terms that could split the AI industry into ‘those focused on safe, narrow, non-agentic AI’ versus ‘those trying to build a Sand God’. It’s only the latter who need to be highly stigmatized.
Peace out :)
Boycotting any company which funds research into AGI would be (at least in this scenario) both more effective and, for the vast majority of people, more tractable than “ostracizing” people whose social environment is largely dominated by… other AI developers and like-minded SV techno-optimists.
Matrice—for more on the stigmatization strategy, see my EA Forum post from a couple of years ago, here.
IMHO, a grassroots moral stigmatization campaign by everyone who knows AGI devs would be much more effective than just current users of a company’s products boycotting that company.
As a reality check, “any company which funds research into AGI” here would mean all the big tech companies (MAGMA). Many more people use those products than know AGI developers. It is a much easier ask to switch to using a different browser/search engine/operating system, install an ad blocker, etc. than to ask for social ostracism. Those companies’ revenues collapsing would end the AI race overnight, while having AGI developers keep a social circle of only techno-optimists wouldn’t.
Matrice—maybe, if it was possible for people to boycott Google/Deepmind, or Microsoft/OpenAI.
But as a practical matter, we can’t expect hundreds of millions of people to suddenly switch from Gmail to some email alternative, or to switch from Windows to Linux.
It’s virtually impossible to organize a successful boycott of all the Big Tech companies that have oligarchic control of people’s digital lives, and that are involved in AGI/ASI development.
I still think the key point of leverage is specific, personalized, grassroots social stigmatization of AGI/ASI developers and people closely involved in what they’re doing.
(But I could be convinced that Big Tech boycotts might be a useful auxiliary strategy).
Your scenario above was that most of the 8 billion people in the world would come to believe with high likelihood that ASI would cause human extinction. I think it’s very reasonable to believe that this would make it much easier to coordinate to make alternatives to MAGMA products more usable in this world, as network effects and economies of scale are largely the bottleneck here.