As the guy most quoted in this Verge article, it’s amusing to see so many LessWrong folks—who normally pride themselves on their epistemic integrity and open-mindedness—commenting with such overconfidence about a talk they haven’t actually read or seen, given at a conference they’ve never been to, and grounded in a set of conservative values and traditionalist world-views that they know less than nothing about.
I’ll post the actual text of my talk in due course, once the NatCon video is released and I can link to it. (My talk covered AI X-risk and the game theory of the US/China arms race in some detail.)
For the moment, I’ll just say this: if we want to fight the pro-accelerationist guys who have a big influence on Trump at the moment, but who show total contempt for AI safety (e.g. David Sacks, Marc Andreessen), then we can do it effectively through the conservative influencers who are advocating for AI safety, an AI pause, AI regulation, and AI treaties.
The NatCons have substantial influence in Washington at the moment. If we actually care about AI safety more than we care about partisan politics or leftist virtue-signaling, it might be a good idea to engage with NatCons, learn about their views (with actual epistemic humility and curiosity), and find whatever common ground we can to fight against the reckless e/accs.
Well, given the extent of what’s going on in the US, I think signaling matters to people for a reason—it’s not just about being seen as virtuous; it’s about whether or not one is complicit in several things that are objectively illegal and allowed right now only because the upper tiers of power are themselves ignoring the rule of law.
But putting that aside, on the grounds that even so survival is at stake and more important even than liberal democracy, I would worry about the consistency of these kinds of alliances. A lot of it seems grounded in purely ideological pet peeves, like the AIs being aligned with “woke” values; Grok is already an attempt to ditch that. Would any of this fervour survive the emergence of a single major AI that bends the knee and preaches reactionary gospel instead? Would Sam Altman not do exactly that if it benefited the survival of his company? I doubt both. I think the main outcome would simply be that we get a MAGA AI, at which point most of these voices will be satisfied, and the ones who are left won’t carry enough weight, even assuming they would now.
dr_s: How many MAGA supporters have you actually talked with about AI safety issues?
It sounds like you have a lot of views on what they may or may not believe. I’m not sure how well-calibrated those views are.
Do you have a decent sample size for making your generalizations based on real interactions with real people, or are your impressions based mostly on mainstream news portrayals of MAGA supporters?
How many MAGA supporters have you actually talked with about AI safety issues?
I’m basing my impressions on the specific quotes reported in this very piece, and on MAGA’s general behaviour on other issues in the past. The complaints are very specific, and I agree with only a relative minority of them; many seem to boil down to “the AI is bad because it does not agree with me politically”. That is easily changed, and has nothing to do with the deeper issues at the root of the problem, which makes it possible to appease a large swath of the dissent with interventions that have nothing to do with AI safety (not unlike how the liberal crowd can be appeased by making sure the AI is politically correct, something equally irrelevant to the bigger goals we’re talking about).
I’m sure there are individuals who are more principled and would stick to their guns. But that’s not very useful when discussing a political alliance with a movement at large. And the movement at large has proven again and again that it is driven by personal loyalty to Donald Trump rather than by any specific hard ideological commitment. That gives it a single point of failure: if Donald Trump were to switch, for whatever reason, to “AI good”, a huge chunk of those allies would suddenly evaporate.
This is not a matter of “I would never ally on AI safety with anyone whom I dislike politically”. As I mentioned elsewhere, I would be ok allying with groups whose main definitional ideology is religious. I would definitely ally with the Catholic Church over it, for example—them I trust to be fairly coherent on the issue. I would also be ok allying with US Christian groups if being Christian were their main driver, and to be sure there is some overlap here. But if we consider MAGA as a unit, then no: even putting aside the obvious non-AI issues I mentioned before, which at this point are large enough to make an alliance potentially distasteful to anyone who doesn’t have a fairly high P(doom) and is thus proportionately desperate, they have quite simply not shown themselves to be a reliable bunch with consistent views.
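To spell out the expected-value intuition behind that last point, here is a minimal toy sketch; it assumes (purely hypothetically) that allying shaves off a fixed fraction $k$ of one’s doom risk, and none of these symbols come from the discussion itself:

$$ k \cdot P(\mathrm{doom}) \cdot V \;>\; C $$

where $V$ is the value placed on survival and $C$ is the political or reputational cost of the alliance. Under this toy model, the cost one is willing to tolerate scales with one’s P(doom), which is exactly the “proportionately desperate” point above.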