Psychology professor at University of New Mexico. BA Columbia, PhD Stanford. Works on evolutionary psychology, Effective Altruism, AI alignment, X risk. Worked on neural networks, genetic algorithms, evolutionary robotics, & autonomous agents back in the 90s.
geoffreymiller
As the guy most quoted in this Verge article, it’s amusing to see so many LessWrong folks—who normally pride themselves on their epistemic integrity and open-mindedness—commenting with such overconfidence on a talk they haven’t actually read or seen, at a conference they’ve never been to, grounded in a set of conservative values and traditionalist worldviews that they know less than nothing about.
I’ll post the actual text of my talk in due course, once the NatCon video is released and I can link to it. (My talk covered AI X-risk and the game theory of the US/China arms race in some detail.)
For the moment, I’ll just say this: if we want to fight the pro-accelerationist guys who have a big influence on Trump at the moment but show total contempt for AI safety (e.g. David Sacks, Marc Andreessen), then we can do it effectively through the conservative influencers who are advocating for AI safety, an AI pause, AI regulation, and AI treaties.
The NatCons have substantial influence in Washington at the moment. If we actually care about AI safety more than we care about partisan politics or leftist virtue-signaling, it might be a good idea to engage with NatCons, learn about their views (with actual epistemic humility and curiosity), and find whatever common ground we can to fight against the reckless e/accs.
habryka—‘If you don’t care about future people’—but why would any sane person not care at all about future people?
You offer a bunch of speculative math about longevity vs extinction risk.
OK, why not run some actual analysis on which is more likely to deliver longevity gains: direct biomedical research on longevity, or indirect AI research on AGI, in the hope that it somehow, speculatively, solves longevity?
The AI industry is currently spending something on the order of $200 billion a year on research. Biomedical research on longevity, by contrast, currently gets far less than $10 billion a year.
If we spent the $200 billion a year on longevity, instead of on AI, do you seriously think that we’d do worse on solving longevity? That’s what I would advocate. And it would involve virtually no extinction risk.
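Here’s the kind of back-of-envelope comparison I have in mind. It’s a minimal sketch in which every probability and payoff is an illustrative placeholder, not an estimate from this thread; the point is only that once you write the assumptions down, the comparison is trivial to run:

```python
# Back-of-envelope comparison: direct longevity R&D vs. hoping AGI solves longevity.
# Every number below is an illustrative assumption, not a figure from this exchange.

p_direct_success = 0.10        # assumed chance direct biomedical funding yields major longevity gains
p_direct_extinction = 0.00     # assumed extinction risk added by biomedical research

p_agi_solves_longevity = 0.15  # assumed chance AGI both arrives and solves longevity
p_agi_extinction = 0.20        # assumed extinction risk added by the AGI route

life_years_if_solved = 1e10        # assumed aggregate life-years gained if longevity is solved
life_years_if_extinct = 8e9 * 40   # assumed life-years lost if humanity goes extinct

ev_direct = p_direct_success * life_years_if_solved - p_direct_extinction * life_years_if_extinct
ev_agi = p_agi_solves_longevity * life_years_if_solved - p_agi_extinction * life_years_if_extinct

print(f"Direct longevity route, expected life-years: {ev_direct:+.2e}")
print(f"AGI route, expected life-years:              {ev_agi:+.2e}")
```

With these placeholder numbers the extinction-risk term dominates the AGI route; substitute your own estimates and see what survives.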
Excited to see this!
Well done to the ‘AI in Context’ team.
I’ll share the video on X.
boazbarak—I don’t understand your implication that my position is ‘radical’.
I have exactly the same view on the magnitude of ASI extinction risk that every leader of a major AI company does—that it’s a significant risk.
The main difference between them and me is that they are willing to push ahead with ASI development despite the significant risk of human extinction, and I think they are utterly evil for doing so, because they’re endangering all of our kids.
In my view, risking extinction for some vague promise of an ASI utopia is the radical position. Protecting us from extinction is a mainstream, commonsense, utterly normal human position.
TsviBT—thanks for a thoughtful comment.
I understand your point about labelling industries, actions, and goals as evil, but being cautious about labelling individuals as evil.
But I don’t think it’s compelling.
You wrote ‘You’re closing off lines of communication and gradual change. You’re polarizing things.’
Yes, I am. We’ve had open lines of communication between AI devs and AI safety experts for a decade. We’ve had pleas for gradual change. Mutual respect, and all that. Trying to use normal channels of moral persuasion. Well-intentioned EAs going to work inside the AI companies to try to nudge them in safer directions.
None of that has worked. AI capabilities development is outstripping AI safety work at an ever-increasing rate. The financial temptations to stay working inside AI companies keep increasing, even as the X risks keep increasing. Timelines are getting shorter.
The right time to ‘polarize things’ is when we still have some moral and social leverage to stop reckless ASI development. The wrong time is after it’s too late.
Altman, Amodei, Hassabis, and Wang are buying people’s souls—paying them hundreds of thousands or millions of dollars a year to work on ASI development, even though most of the workers they supervise know that they’re likely to be increasing extinction risk.
This isn’t just a case of ‘collective evil’ being done by otherwise good people. This is a case of paying people so much that they ignore their ethical qualms about what they’re doing. That makes the evil very individual, and very specific. And I think that’s worth pointing out.
Sure. But if an AI company grows an ASI that extinguishes humanity, who is left to sue them? Who is left to prosecute them?
The threat of legal action for criminal negligence is not an effective deterrent if there is no criminal justice system left, because there is no human species left.
Drake—this seems like special pleading from an AI industry insider.
You wrote ‘I think working at an AI lab requires less failure of moral character than, say, working at a tobacco company, for all that the former can have much worse effects on the world.’
That doesn’t make sense to me. Tobacco kills about 8 million people a year globally. ASI could kill about 8 billion. The main reason that AI lab workers think that their moral character is better than that of tobacco industry workers is that the tobacco industry has already been morally stigmatized over the last several decades—whereas the AI industry has not yet been morally stigmatized in proportion to its likely harms.
Of course, ordinary workers in any harm-imposing industry can always make the argument that they’re good (or at least ethically mediocre) people, that they’re just following orders, trying to feed their families, weren’t aware of the harms, etc.
But that argument does not apply to smart people working in the AI industry—who have mostly already been exposed to the many arguments that AGI/ASI is a uniquely dangerous technology. And their own CEOs have already acknowledged these risks. And yet people continue to work in this industry.
Maybe a few workers at a few AI companies might be having a net positive impact in reducing AI X-risk. Maybe you’re one of the lucky few. Maybe.
Richard—I think you’re just factually wrong that ‘people are split on whether AGI/ASI is an existential threat’.
Thousands of people signed the 2023 CAIS statement on AI risk, including almost every leading AI scientist, AI company CEO, AI researcher, AI safety expert, etc.
There are a few exceptions, such as Yann LeCun. And there are a few AI CEOs, such as Sam Altman, who had previously acknowledged the existential risks, but now downplay them.
But if all the leading figures in the industry—including Altman, Amodei, Hassabis, etc—have publicly and repeatedly acknowledged the existential risks, why would you claim ‘people are split’?
Knight—thanks again for the constructive engagement.
I take your point that if a group is a tiny and obscure minority, and they’re calling the majority view ‘evil’, and trying to stigmatize their behavior, that can backfire.
However, the surveys and polls I’ve seen indicate that the majority of humans already have serious concerns about AI risks, and in some sense are already on board with ‘AI Notkilleveryoneism’. Many people are under-informed or misinformed about AI in various ways, but convincing the majority of humanity that the AI industry is acting recklessly looks not just feasible but arguably already accomplished.
I think the real challenge here is raising public awareness of how many people are already on team ‘AI Notkilleveryoneism’ rather than team ‘AI accelerationist’. This is a ‘common knowledge’ problem from game theory—the majority needs to know that it is in the majority in order to successfully morally stigmatize the minority (in this case, the AI developers).
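To spell out that game-theoretic point, here’s a toy simulation. Every number in it is an illustrative assumption rather than real polling data; it just shows how the same concerned majority produces either silence or visible pressure, depending on whether it knows it’s the majority:

```python
# Toy model of the 'common knowledge' problem: people who privately agree stay quiet
# if they believe they're a minority. All numbers are illustrative assumptions, not polling data.
import random

random.seed(0)

N = 1000              # population size (assumed)
true_share = 0.7      # assumed fraction privately concerned about reckless AI development
act_threshold = 0.5   # people speak out only if they believe a majority agrees with them

def count_speakers(perception_bias):
    """Count concerned people who speak out, given how much they misjudge
    the size of the concerned majority (negative = underestimate)."""
    speakers = 0
    for _ in range(N):
        is_concerned = random.random() < true_share
        perceived_share = true_share + perception_bias + random.gauss(0, 0.05)
        if is_concerned and perceived_share > act_threshold:
            speakers += 1
    return speakers

# Pluralistic ignorance: the concerned majority, believing itself a minority, stays silent.
print("Believing they're a minority:", count_speakers(perception_bias=-0.3))
# Common knowledge: once the majority knows it's the majority, stigmatization becomes viable.
print("Knowing they're the majority:", count_speakers(perception_bias=0.0))
```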
Ben—your subtext here seems to be that only lower-class violent criminals are truly ‘evil’, whereas very few middle/upper-class white-collar people are truly evil (with a few notable exceptions such as SBF or Voldemort)—with the implication that the majority of ASI devs can’t possibly be evil in the ways I’ve argued.
I think that doesn’t fit the psychological and criminological research on the substantial overlap between psychopathy and sociopathy, and between violent and non-violent crime.
It also doesn’t fit the standard EA point that a lot of ‘non-evil’ people can get swept up in doing evil collective acts as part of collectively evil industries, such as slave-trading, factory farming, Big Tobacco, the private prison system, etc.—but that often, the best way to fight such industries is to use moral stigmatization.
Hi Knight, thanks for the thoughtful reply.
I’m curious whether you read the longer piece about moral stigmatization that I linked to at EA Forum? It’s here, and it addresses several of your points.
I have a much more positive view about the effectiveness of moral stigmatization, which I think has been at the heart of almost every successful moral progress movement in history. The anti-slavery movement stigmatized slavery. The anti-vivisection movement stigmatized torturing animals for ‘experiments’. The women’s rights movement stigmatized misogyny. The gay rights movement stigmatized homophobia.
After the world wars, biological and chemical weapons were not just regulated, but morally stigmatized. The anti-landmine campaign stigmatized landmines.
Even in the case of nuclear weapons, the anti-nukes peace movement stigmatized the use and spread of nukes, and was important in nuclear non-proliferation, and IMHO played a role in the heroic individual decisions by Arkhipov and others not to use nukes when they could have.
Regulation and treaties aimed at reducing the development, spread, and use of Bad Thing X, without moral stigmatization of Bad Thing X, don’t usually work very well. Formal law and informal social norms must typically reinforce each other.
I see no prospect for effective, strongly enforced regulation of ASI development without moral stigmatization of ASI development. This is because, ultimately, ‘regulation’ relies on the coercive power of the state—which relies on agents of the state (e.g. police, military, SWAT teams, special ops teams) being willing to enforce regulations even against people with very strong incentives not to comply. And these agents of the state simply won’t be willing to use government force against ASI devs violating regulations unless these agents already believe that the regulations are righteous and morally compelling.
Yes, it takes courage to call people out as evil, because you might be wrong, you might unjustly ruin their lives, you might have mistakenly turned them into scapegoats, etc. Moral stigmatization carries these risks. Always has.
And people understand this. Which is why, if we’re not willing to call the AGI industry leaders and devs evil, then people will see us failing to have the courage of our convictions. They will rightly see that we’re not actually confident enough in our judgments about AI X-risk to take the bold step of pointing fingers and saying ‘WRONG!’.
So, we can hedge our social bets, and try to play nice with the AGI industry, and worry about making such mistakes. Or, we can save humanity.
Ben—so, we’re saying the same things, but you’re using gentler euphemisms.
I say ‘evil’; you say ‘deontologically prohibited’.
Given the urgency of communicating ASI extinction risks to the public, why is this the time for gentle euphemisms?
A plea for having the courage to morally stigmatize the people working in the AGI industry:
I agree with Nate Soares that we need to show much more courage in publicly sharing our technical judgments about AI risks—based on our understanding of AI, the difficulties of AI alignment, the nature of corporate & geopolitical arms races, the challenges of new technology regulation & treaties, etc.
But we also need to show much more courage in publicly sharing our social and moral judgments about the evils of the real-life, flesh-and-blood people who are driving these AI risks—specifically, the people leading the AI industry, working in it, funding it, lobbying for it, and defending it on social media.
Sharing our technical concerns about these abstract risks isn’t enough. We also have to morally stigmatize the specific groups of people imposing these risks on all of us.
We need the moral courage to label other people evil when they’re doing evil things.
If we don’t do this, we look like hypocrites who don’t really believe that AGI/ASI would be dangerous.
Moral psychology teaches us that moral judgments are typically attached not just to specific actions, or to emergent social forces (e.g. the ‘Moloch’ of runaway competition), or to sad Pareto-inferior outcomes of game-theoretic dilemmas, but to people. We judge people. As moral agents. Yes, even including AI researchers and devs.
If we want to make credible claims that ‘the AGI industry is recklessly imposing extinction risks on all of our kids’, and we’re not willing to take the next step of saying ‘and also, the people working on AGI are reckless and evil and wrong and should be criticized, stigmatized, ostracized, and punished’, then nobody will take us seriously.
As any parent knows, if some Bad Guy threatens your kids, you defend your kids and you denounce the Bad Guy. Your natural instinct is to rally social support to punish them. This is basic social primate parenting, of the sort that’s been protecting kids in social groups for tens of millions of years.
If you don’t bother to rally morally outraged support against those who threaten kids, then the threat wasn’t real. This is how normal people think. And rightfully so.
So why don’t we have the guts to vilify the AGI leaders, devs, investors, and apologists, if we’re so concerned about AGI risk?
Because too many rationalists, EAs, tech enthusiasts, LessWrong people, etc still see those AI guys as ‘in our tribe’, based on sharing certain traits we hold dear—high IQ, high openness, high decoupling, Aspy systematizing, Bay Area Rationalist-adjacent, etc. You might know some of the people working at OpenAI, Anthropic, DeepMind, etc.—they might be your friends, housemates, neighbors, relatives, old school chums, etc.
But if you take seriously their determination to build AGI/ASI—or even to work in ‘AI safety’ at those companies, doing their performative safety-washing and PR—then they are not the good guys.
We have to denounce them as the Bad Guys. As traitors to our species. And then, later, once they’ve experienced the most intense moral shame they’ve ever felt, and gone through a few months of the worst existential despair they’ve ever felt, and they’ve suffered the worst social ostracism they’ve ever experienced, we need to offer them a path towards redemption—by blowing the whistle on their former employers, telling the public what they’ve seen on the inside of the AI industry, and joining the fight against ASI.
This isn’t ‘playing dirty’ or ‘giving in to our worst social instincts’. On the contrary. Moral stigmatization and ostracism of evil-doers is how social primate groups have enforced cooperation norms for millions of years. It’s what keeps the peace, and supports good social norms, and protects the group. If we’re not willing to use the moral adaptations that evolved specifically to protect our social groups from internal and external threats, then we’re not really taking those threats seriously.
PS I outlined this ‘moral backlash’ strategy for slowing reckless AI development in this EA Forum post.
Here’s the thing, just_browsing.
Some people want to stop human extinction from unaligned Artificial Superintelligence developed by young men consumed by reckless, misanthropic hubris, and they want to do it using whatever persuasion and influence techniques actually work on most people.
Other people want to police ‘vibes’ and ‘cringe’ on social media, and feel morally superior to effective communicators.
Kat Woods is the former.
This is really good, and it’ll be required reading for my new ‘Psychology and AI’ class that I’ll teach next year.
Students are likely to ask ‘If the blob can figure out so much about the world, and modify its strategies so radically, why does it still want sugar? Why not just decide to desire something more useful, like money, power, and influence?’
Shutting down OpenAI entirely would be a good ‘high level change’, at this point.
Well, I’m seeing no signs whatsoever that OpenAI would ever seriously consider slowing, pausing, or stopping its quest for AGI, no matter what safety concerns get raised. Sam Altman seems determined to develop AGI at all costs, despite all risks, ASAP. I see OpenAI as betraying virtually all of its founding principles, especially since the strategic alliance with Microsoft, and with the prospect of colossal wealth for its leaders and employees.
At this point, I’d rather spend $5-7 trillion on a Butlerian Jihad to stop OpenAI’s reckless hubris.
Thanks for the tag. I’ve just started to read the comments here, and wrote an initial reply.