Taking government positions to influence AI is being politically involved. The political impact it has is likely a lot higher than that of a small donation.
What’s the political impact of millions of rats/EAs refraining from political activity so that maybe a couple of them can obtain a government policy position they wouldn’t have otherwise?
Note that this post does not encourage people to withhold being politically active, or to totally refrain from making political donations.
There are not millions of rats/EAs.
Generally, if you want to make a political impact by being politically active, it makes sense to have a theory of change and pick actions based on that theory of change. If you just copy the kinds of things other people who want political change do, like going to protests and making small donations, the impact is likely not going to be very large.
This reminds me of a discussion I had with someone from Eastern Europe who was going to a protest against her government. I asked her what she was protesting, and she said that the situation is complex and there’s no good English-language source that describes it. Writing an article (or talking someone else into writing an article) for the Guardian’s Comment is Free to explain the situation would have done much more to effect political change than adding another body to the protest, yet she did the easy thing of going to the protest instead of doing the EA thing of getting the article written.
So I don’t know who’s advising Trump and Vance on AI today. Is it an EA? If not, could an EA reasonably have replicated that person’s path? Could EA-compliant advice get through to Trump or Vance? If the answer to the third question is no, then the OP’s theory of change isn’t sound anyway.
Basically, you are saying that you don’t know what you are talking about. On the other hand, the person who started this post does know what they are talking about from talking with people in the AI governance space.
For effective political action, it’s useful to take insider information about how the process works seriously.
That’s not “knows what they’re talking about”; that’s “has talked to people who sound like they know what they’re talking about.” The epistemic status is clear about this, so I’m not knocking the OP: “Epistemic status: thing people have told me that seems right.” This is actually what taking hearsay from insiders seriously looks like.
There were roughly 2000 respondents to the 2024 EA survey; if we assume that’s undercounting by a factor of 100, that would still only give us 200,000 EAs (and I expect that it’s really more like 10x, for 20,000).
This is with regard to small donations specifically, of under $100; taking $50 as the average small donation and assuming every EA makes political donations, $50 times 200,000 would equal $10 million of campaign contributions ($1 million if we assume there are only 10x as many EAs as answered the survey).
Even the lower figure is enough to fully cover a small campaign or two, but it’s not clear to me whether, spread over many candidates as would happen in real life, even the higher number would make much of a difference to any of their races.
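The back-of-envelope estimate above can be sketched out explicitly (the respondent count, undercount multipliers, and $50 average donation are the comment’s own assumptions, not hard data):

```python
# Rough estimate of aggregate EA small-dollar political donations.
# Assumptions from the comment: ~2,000 EA survey respondents, an
# undercount factor of 10x or 100x, and a $50 average small donation.
survey_respondents = 2_000
avg_small_donation = 50  # dollars

for multiplier in (10, 100):
    total_eas = survey_respondents * multiplier
    total_donated = total_eas * avg_small_donation
    print(f"{multiplier}x undercount: {total_eas:,} EAs -> ${total_donated:,}")
# -> 10x undercount: 20,000 EAs -> $1,000,000
# -> 100x undercount: 200,000 EAs -> $10,000,000
```

Even under the generous 100x assumption, the total is on the order of a single competitive House race, spread across many candidates.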
I would recommend EAs become more politically active, not less. We can just rebrand it as “working to influence AI policy by influencing election outcomes upstream of AI policy decisions” to respect the rule that all decisions must reduce to working on alignment.