Psychology professor at University of New Mexico. BA Columbia, PhD Stanford. Works on evolutionary psychology, Effective Altruism, AI alignment, X risk. Worked on neural networks, genetic algorithms, evolutionary robotics, & autonomous agents back in the 90s.
geoffreymiller
Kaj—I think the key thing here is to try to avoid making AI safety a strongly partisan-coded issue (e.g. ‘it’s a Lefty thing’ or ‘it’s a Righty thing’), and instead to find persuasive arguments that appeal about equally strongly to people with different political and religious values.
So, for example, conservatives on average are more concerned about ‘the dignity of work’ and ‘the sanctity of marriage’, so I emphasized how ASI would undermine both (through mass unemployment and AI sexbots/waifus/deepfakes), because that’s how to get their attention and interest. Liberals, on average, may be more concerned about ‘economic inequality’, so when speaking with them it might be more effective to talk about how ASI could dramatically increase the wealth gap between future AI trillionaires and ordinary unemployed people.
So it’s really about learning specific ways to appeal to different constituencies, given the values and concerns they already have—rather than making AI into a generally liberal or generally conservative cause. Hope that makes sense.
Kaj: well, as I argued here, regulation and treaties aren’t enough to stop reckless AI development. We need to morally stigmatize anyone associated with building AGI/ASI. That’s the main lever of social power that we have.
I see zero prospect of ‘technical AI safety work’ solving the problem of slowing down reckless AI development. And it’s often little more than safety-washing by AI companies to make it look like they take safety seriously—while they continue to push AGI capabilities development as hard as ever.
I think a large proportion of the Rationalist/EA/LessWrong community is very naive about this, and that we’re being played by bad actors in the AI industry.
PS I’ve just posted the full text of my NatCon talk on AI risk here, along with a lot of introductory context. It might help guide a more constructive discussion here, insofar as people can see what I actually said.
My talk on AI risks at the National Conservatism conference last week
habryka—regarding what ‘aggression’ is, I’m coming to this from the perspective of having taught courses on animal behavior and human evolution for 35 years.
When biological scientists speak of ‘aggression’, we are referring to actual physical violence, e.g. hitting, biting, dismembering, killing, eating, within or between species. We are not referring to vocalizations, or animal signals, or their modern digital equivalents.
When modern partisan humans use ‘aggression’ metaphorically, they collapse the distinction between speech and violence. Which is, of course, what censors want, in order to portray speech that they don’t like as if it were aggravated assault. This has become a standard chant on the Left: ‘speech = violence’.
I strongly disagree with that framing, because it is almost always an excuse for censorship, deplatforming, and ostracizing of political rivals.
I think to maintain the epistemic norms of the Rationality community, we must be very careful not to equate ‘verbal signals we don’t like’ with ‘acts of aggression’.
dr_s: How many MAGA supporters have you actually talked with, about AI safety issues?
It sounds like you have a lot of views on what they may or may not believe. I’m not sure how well-calibrated your views are.
Do you have a decent sample size for making your generalizations based on real interactions with real people, or are your impressions based mostly on mainstream news portrayals of MAGA supporters?
Richard—it’s true that not many people in the AI safety community are MAGA supporters, and that not many MAGA supporters are in the AI safety community.
The question is, why? Many on the Left, especially those involved in tech, hold the stereotype that MAGA supporters are simply too stupid to understand AI safety issues. As a result, they haven’t bothered to reach out to the Right, and they socially ostracize and exclude anyone who seems to lean that way.
Would Anthropic be excited to hire an overt MAGA supporter to join their AI safety team—however smart, competent, and committed they were? I doubt it.
You accused me of being ‘overly aggressive’. I was pointing out that tweets aren’t acts of aggression. Shooting people in the neck is.
As far as I can remember, I’ve never called for violence, on any topic, in any of the 80,000 posts I’ve shared on Twitter/X, to my 150,000 followers. So, I think your claim that my posts are ‘overly aggressive’ is poorly calibrated in relation to what actual aggression looks like.
That’s the relevance of the assassination of Charlie Kirk. A reminder that in this LessWrong bubble of ever-so-cautious, ever-so-rational, ever-so-epistemically-pure discourse, people can get very disconnected from the reality of high-stakes political debate and ideologically-driven terrorism.
Thanks for sharing. The archived version wasn’t up yet when I replied.
But I’m still uneasy about using the Internet Archive to circumvent copyright.
‘Overly aggressive’ is what the shooter who just assassinated conservative Charlie Kirk was being.
Posting hot takes on X is not being ‘aggressive’.
This is not a day when I will tolerate any conflation of posting strong words on social media with committing actual aggressive violence.
This is not the day for that.
X (formerly known as Twitter) isn’t for ‘reasonable discourse’ according to the very specific and high epistemic standards of LessWrong.
X is for influence, persuasion, and impact. Which is exactly what AI safety advocates need, if we’re to have any influence, persuasion, or impact.
I’m comfortable using different styles, modes of discourse, and forms of outreach on X versus podcasts versus LessWrong versus my academic writing.
In related news, the Financial Times ran an article yesterday about tensions within the conservative movement over AI safety, as manifested at the National Conservatism conference last week: https://www.ft.com/content/d6aac7f1-b955-4c76-a144-1fe8d909f70b
It’s paywalled, and (unlike the AI industry) I don’t want to violate their copyright by reposting the text, but the title is:
‘Maga vs AI: Donald Trump’s Big Tech courtship risks a backlash
Silicon Valley’s sway in the White House is alarming populists in the president’s base’
Thanks for the tag. I’ve just started to read the comments here, and wrote an initial reply.
As the guy most quoted in this Verge article, I find it amusing to see so many LessWrong folks—who normally pride themselves on their epistemic integrity and open-mindedness—commenting with such overconfidence about a talk they haven’t actually read or seen, given at a conference they’ve never been to, and grounded in a set of conservative values and traditionalist world-views that they know less than nothing about.
I’ll post the actual text of my talk in due course, once the NatCon video is released and I can link to it. (The talk covered AI X-risk and the game theory of the US/China arms race in some detail.)
For the moment, I’ll just say this: if we want to fight the pro-accelerationist guys who have a big influence on Trump at the moment, but who show total contempt for AI safety (e.g. David Sacks, Marc Andreessen), then we can do it effectively through the conservative influencers who are advocating for AI safety, an AI pause, AI regulation, and AI treaties.
The NatCons have substantial influence in Washington at the moment. If we actually care about AI safety more than we care about partisan politics or leftist virtue-signaling, it might be a good idea to engage with NatCons, learn about their views (with actual epistemic humility and curiosity), and find whatever common ground we can to fight against the reckless e/accs.
habryka—‘If you don’t care about future people’—but why would any sane person not care at all about future people?
You offer a bunch of speculative math about longevity vs extinction risk.
OK, why not run some actual analysis on which is more likely to deliver progress on longevity: direct biomedical research on longevity, or indirect AI research on AGI, in the hope that it somehow, speculatively, solves longevity?
The AI industry is currently spending something on the order of $200 billion a year on research. Biomedical research on longevity, by contrast, currently receives far less than $10 billion a year.
If we spent that $200 billion a year on longevity instead of on AI, do you seriously think we’d do worse at solving longevity? That’s what I would advocate. And it would involve virtually no extinction risk.
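For a rough sense of scale, here is a minimal back-of-the-envelope sketch (purely illustrative) using only the order-of-magnitude figures above; both numbers are approximations, not audited budget totals:

```python
# Order-of-magnitude comparison of annual research spending,
# using the rough figures cited above (approximations, not audited data).
ai_research_spend_usd = 200e9   # ~$200 billion/year on AI research
longevity_spend_usd = 10e9      # generous upper bound; the actual figure is "far less"

ratio = ai_research_spend_usd / longevity_spend_usd
print(f"AI research outspends longevity research by at least {ratio:.0f}x")
```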
Excited to see this!
Well done to the ‘AI in Context’ team.
I’ll share the video on X.
boazbarak—I don’t understand your implication that my position is ‘radical’.
I have exactly the same view on the magnitude of ASI extinction risk that every leader of a major AI company does—that it’s a significant risk.
The main difference between them and me is that they are willing to push ahead with ASI development despite the significant risk of human extinction, and I think they are utterly evil for doing so, because they’re endangering all of our kids.
In my view, risking extinction for some vague promise of an ASI utopia is the radical position. Protecting us from extinction is a mainstream, commonsense, utterly normal human position.
TsviBT—thanks for a thoughtful comment.
I understand your point about labelling industries, actions, and goals as evil, but being cautious about labelling individuals as evil.
But I don’t think it’s compelling.
You wrote ‘You’re closing off lines of communication and gradual change. You’re polarizing things.’
Yes, I am. We’ve had open lines of communication between AI devs and AI safety experts for a decade. We’ve had pleas for gradual change. Mutual respect, and all that. Trying to use normal channels of moral persuasion. Well-intentioned EAs going to work inside the AI companies to try to nudge them in safer directions.
None of that has worked. AI capabilities development is outstripping AI safety progress at an ever-increasing rate. The financial temptations to stay working inside AI companies keep increasing, even as the X risks keep increasing. Timelines are getting shorter.
The right time to ‘polarize things’ is when we still have some moral and social leverage to stop reckless ASI development. The wrong time is after it’s too late.
Altman, Amodei, Hassabis, and Wang are buying people’s souls—paying them hundreds of thousands or millions of dollars a year to work on ASI development, even though most of the workers they supervise know that they’re likely to be increasing extinction risk.
This isn’t just a case of ‘collective evil’ being done by otherwise good people. This is a case of paying people so much that they ignore their ethical qualms about what they’re doing. That makes the evil very individual, and very specific. And I think that’s worth pointing out.
Sure. But if an AI company grows an ASI that extinguishes humanity, who is left to sue them? Who is left to prosecute them?
The threat of legal action for criminal negligence is not an effective deterrent if there is no criminal justice system left, because there is no human species left.
Well, my toddler pronounces it ‘pee doom’