Psychology professor at University of New Mexico. BA Columbia, PhD Stanford. Works on evolutionary psychology, Effective Altruism, AI alignment, X risk. Worked on neural networks, genetic algorithms, evolutionary robotics, & autonomous agents back in the 90s.
geoffreymiller
Russell—I take your point that in most alternative timelines we would already be dead, decades ago, from nuclear war. I often make that argument when discussing AI risk, to hammer home that humanity has no magical ‘character armor’ protecting us from extinction, and that nobody is coming to save us if we’re dumb enough to develop AGI/ASI.
However, I disagree with the claim that ‘our current situation is not one to preserve’. I know people in the military/intelligence communities who work full time on nuclear safety, nuclear non-proliferation, counter-terrorism, etc. There are tens of thousands of smart people in dozens of agencies across many countries who spend their entire lives reducing the risks of nuclear war. They’re not just activists making noise from outside the centers of power. They’re inside the government, with high security clearances, respected expertise, and real influence. I’m not saying the risk of nuclear war has gone to zero, but it is taken very seriously by all the major world governments.
By contrast, AI safety remains something of a fringe issue, with virtually no representation inside governments, corporations, media, academia, or any other power centers. That’s the thing that needs to change.
We don’t need a ‘hail Mary’ where we develop AGI/ASI and then hope that it can reduce nuclear risk more than it increases all other risks.
I didn’t say that all Rationalists are evil. I do consider myself a Rationalist in many ways, and I’ve been an active member of LessWrong and EA for years, and have taught several college courses on EA that include Rationalist readings.
What I did say, in relation to my claim that ‘they’ve created a trendy millenarian cult that expects ASIs will fill all their material, social, and spiritual needs’, is that ‘This is the common denominator among millions of tech bros, AI devs, VCs, Rationalists, and effective accelerationists’.
The ‘common denominator’ language implies overlap, not total agreement.
And I think there is substantial overlap among these communities—socially, financially, ethically, geographically.
Many Rationalists have been absolutely central to analyzing AI risks, advocating for AI safety, and fighting the good fight. But many others have gone to work for AI companies, often in ‘AI safety’ roles that do not actually slow down AI capabilities development. And many have become e/accs or transhumanists who see humanity as a disposable stepping-stone to something better.
Yes. And way too much ‘AI safety work’ boils down to ‘getting paid huge amounts by AI companies to do safety-washing & public relations, to kinda sorta help save humanity, but without upsetting my Bay Area roommates & friends & lovers who work on AI capabilities development’.
Ok, let’s say we get most of the 8 billion people in the world to ‘come to an accurate understanding of the risks associated with AI’, such as the high likelihood that ASI would cause human extinction.
Then, what should those people actually do with that knowledge?
Wait for the next election cycle to nudge their political representatives into supporting better AI safety regulations and treaties—despite the massive lobbying and campaign contributions by AI companies? Sure, that would be nice, and it would eventually help a little bit.
But it won’t actually stop AGI/ASI development fast enough or decisively enough to save humanity.
To do that, we need moral stigmatization, right now, of everyone associated with AGI/ASI development.
Note that I’m not calling for violence. Stigmatization isn’t violence. It’s leveraging human instincts for moral judgment and social ostracism to negate the status and prestige that would otherwise be awarded to people.
If AI devs are making fortunes endangering humanity, and we can’t negate their salaries or equity stakes, we can at least undercut the social status and moral prestige of the jobs that they’re doing. We do that by calling them out as reckless and evil. This could work very quickly, without having to wait for national regulations or global treaties.
Well, my toddler pronounces it ‘pee doom’.
Kaj—I think the key thing here is to avoid making AI safety a strongly partisan-coded issue (e.g. ‘it’s a Lefty thing’ or ‘it’s a Righty thing’), and instead to find persuasive arguments that appeal about equally strongly to people coming from different specific political and religious values.
So, for example, conservatives on average are more concerned about ‘the dignity of work’ and ‘the sanctity of marriage’, so I emphasized how ASI would undermine both (through mass unemployment and AI sexbots/waifus/deepfakes), because that’s how to get their attention and interest. Liberals, on the other hand, are on average more concerned about ‘economic inequality’, so when speaking with them it might be more effective to talk about how ASI could dramatically increase wealth differences between future AI trillionaires and ordinary unemployed people.
So it’s really about learning specific ways to appeal to different constituencies, given the values and concerns they already have—rather than making AI into a generally liberal or generally conservative cause. Hope that makes sense.
Kaj: well, as I argued here, regulation and treaties aren’t enough to stop reckless AI development. We need to morally stigmatize anyone associated with building AGI/ASI. That’s the main lever of social power that we have.
I see zero prospect for ‘technical AI safety work’ solving the problem of slowing down reckless AI development. And it’s often little more than safety-washing by AI companies to make it look like they take safety seriously—while they continue to push AGI capabilities development as hard as ever.
I think a large proportion of the Rationalist/EA/LessWrong community is very naive about this, and that we’re being played by bad actors in the AI industry.
PS I’ve just posted the full text of my NatCon talk on AI risk here, along with a lot of introductory context. It might help guide a more constructive discussion here, insofar as people can see what I actually said.
habryka—regarding what ‘aggression’ is, I’m coming to this from the perspective of having taught courses on animal behavior and human evolution for 35 years.
When biological scientists speak of ‘aggression’, we are referring to actual physical violence, e.g. hitting, biting, dismembering, killing, eating, within or between species. We are not referring to vocalizations, or animal signals, or their modern digital equivalents.
When modern partisan humans refer to ‘aggression’ metaphorically, this collapses the distinction between speech and violence. Which is, of course, what censors want, in order to portray speech that they don’t like as if it’s aggravated assault. This has become a standard chant on the Left: ‘speech = violence’.
I strongly disagree with that framing, because it is almost always an excuse for censorship, deplatforming, and ostracism of political rivals.
I think to maintain the epistemic norms of the Rationality community, we must be very careful not to equate ‘verbal signals we don’t like’ with ‘acts of aggression’.
dr_s: How many MAGA supporters have you actually talked with, about AI safety issues?
It sounds like you have a lot of views on what they may or may not believe. I’m not sure how well-calibrated your views are.
Do you have a decent sample size for making your generalizations based on real interactions with real people, or are your impressions based mostly on mainstream news portrayals of MAGA supporters?
Richard—it’s true that not many people in the AI safety community are MAGA supporters, and that not many MAGA supporters are in the AI safety community.
The question is, why? Many on the Left, especially those involved in tech, have the stereotype that MAGA supporters are simply too stupid to understand AI safety issues. As a result, they simply haven’t bothered to reach out to the Right—and they socially ostracize and exclude anyone who seems to be on the Right.
Would Anthropic be excited to hire an overt MAGA supporter to join their AI safety team—however smart, competent, and committed they were? I doubt it.
You accused me of being ‘overly aggressive’. I was pointing out that tweets aren’t acts of aggression. Shooting people in the neck is.
As far as I can remember, I’ve never called for violence, on any topic, in any of the 80,000 posts I’ve shared on Twitter/X, to my 150,000 followers. So, I think your claim that my posts are ‘overly aggressive’ is poorly calibrated in relation to what actual aggression looks like.
That’s the relevance of the assassination of Charlie Kirk. A reminder that in this LessWrong bubble of ever-so-cautious, ever-so-rational, ever-so-epistemically-pure discourse, people can get very disconnected from the reality of high-stakes political debate and ideologically-driven terrorism.
Thanks for sharing. The archived version wasn’t up yet when I replied.
But I’m still uneasy using the Internet Archive to circumvent copyright.
‘Overly aggressive’ is what the shooter who just assassinated conservative Charlie Kirk was being.
Posting hot takes on X is not being ‘aggressive’.
This is not a day when I will tolerate any conflation of posting strong words on social media with committing actual aggressive violence.
This is not the day for that.
X (formerly known as Twitter) isn’t for ‘reasonable discourse’ according to the very specific and high epistemic standards of LessWrong.
X is for influence, persuasion, and impact. Which is exactly what AI safety advocates need, if we’re to have any influence, persuasion, or impact.
I’m comfortable using different styles, modes of discourse, and forms of outreach on X versus podcasts versus LessWrong versus my academic writing.
In related news, the Financial Times ran an article yesterday about tensions within the conservative movement concerning AI safety, as manifested at the National Conservatism conference last week: https://www.ft.com/content/d6aac7f1-b955-4c76-a144-1fe8d909f70b
It’s paywalled, and (unlike the AI industry) I don’t want to violate their copyright by reposting the text, but the title is:
‘Maga vs AI: Donald Trump’s Big Tech courtship risks a backlash
Silicon Valley’s sway in the White House is alarming populists in the president’s base’
Thanks for the tag. I’ve just started to read the comments here, and wrote an initial reply.
As the guy most quoted in this Verge article, it’s amusing to see so many LessWrong folks—who normally pride themselves on their epistemic integrity and open-mindedness—commenting with such overconfidence about a talk they haven’t actually read or seen, given at a conference they’ve never been to, and grounded in a set of conservative values and traditionalist worldviews that they know less than nothing about.
I’ll post the actual text of my talk in due course, once the NatCon video is released and I can link to it. (My talk covered AI X-risk and the game theory of the US/China arms race in some detail.)
For the moment, I’ll just say this: if we want to fight the pro-accelerationist guys who have a big influence on Trump at the moment, but who show total contempt for AI safety (e.g. David Sacks, Marc Andreessen), then we can do it effectively through the conservative influencers who are advocating for AI safety, an AI pause, AI regulation, and AI treaties.
The NatCons have substantial influence in Washington at the moment. If we actually care about AI safety more than we care about partisan politics or leftist virtue-signaling, it might be a good idea to engage with NatCons, learn about their views (with actual epistemic humility and curiosity), and find whatever common ground we can to fight against the reckless e/accs.
habryka—‘If you don’t care about future people’—but why would any sane person not care at all about future people?
You offer a bunch of speculative math about longevity vs extinction risk.
OK, why not run some actual analysis on which is more likely to deliver progress on longevity: direct biomedical research on longevity, or indirect research on AGI in the hope that it somehow, speculatively, solves longevity?
The AI industry is currently spending something on the order of $200 billion a year on research. Biomedical research on longevity, by contrast, currently receives far less than $10 billion a year.
If we spent the $200 billion a year on longevity, instead of on AI, do you seriously think that we’d do worse on solving longevity? That’s what I would advocate. And it would involve virtually no extinction risk.
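As a rough back-of-envelope check on that comparison, here is a minimal sketch in Python using only the round figures above; both numbers are order-of-magnitude estimates, not precise budget data:

```python
# Toy Fermi comparison using the rough figures cited above
# (order-of-magnitude assumptions, not precise budget data).
ai_research_spend_per_year = 200e9   # ~$200 billion/year on AI research (figure cited above)
longevity_spend_per_year = 10e9      # generous upper bound; the actual figure is "far less" than this

ratio = ai_research_spend_per_year / longevity_spend_per_year
print(f"AI research outspends direct longevity research by at least ~{ratio:.0f}x per year")
```

Even on the generous $10 billion upper bound, redirecting the AI research budget would mean at least a twentyfold increase in direct longevity funding.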
Seth—thanks for sharing that link; I hadn’t seen it, and I’ll read it.
I agree that we should avoid making AI safety either liberal-coded or conservative-coded.
But, we should not hesitate to use different messaging, emphasis, talking points, and verbal styles when addressing liberal or conservative audiences. That’s just good persuasion strategy, and it can be done with epistemic and ethical integrity.