In this post, I mostly conflated “being a moderate” with “working with people at AI companies”. You could in principle be a moderate and work to impose extremely moderate regulations, or push for minor changes to the behavior of governments.
There’s also a “moderates vs radicals” distinction when it comes to attitudes, certainty in one’s assumptions, and epistemics, rather than (currently-)favored policies. While some of the benefits you list are hard to get for people who are putting their weight behind interventions to bring about radical change, a lot of the listed benefits fit the theme of “keeping good incentives for your epistemics,” and so they might apply more broadly. E.g., we can imagine someone who is “moderate” in their attitudes, certainty in their assumptions, etc., but might still (if pressed) think that radical change is probably warranted.
For illustration, imagine I donate to Pause AI (or join one of their protests with one of the more uncontroversial protest signs), but I still care a lot about what the informed people who are convinced of Anthropic’s strategy have to say. Imagine I don’t think they’re obviously unreasonable, I try to pass their ideological Turing test, I care about whether they consider me well-informed, etc. If those conditions are met, then I might still retain some of the benefits you list.
I did this conflation mostly because I think that for small and inexpensive actions, you’re usually better off trying to make them happen by talking to companies or other actors directly (e.g. starting a non-profit to do the project) rather than trying to persuade uninformed people to make them happen. And cases where you push for minor changes to the behavior of governments have many of the advantages I described here: you’re doing work that substantially involves understanding a topic (e.g. the inner workings of the USG) that your interlocutors also understand well, and you spend a lot of your time responding to well-informed objections about the costs and benefits of some intervention.
What about the converse, the strategy for bringing about large and expensive changes? Your not discussing that part makes it seem like you might agree with a picture where the way to attempt large and expensive changes is always to appeal to a mass audience (who will be comparatively uninformed). However, I think it’s at least worth considering that promising paths toward pausing the AI race (or some other types of large-scale change) could go through convincing Anthropic’s leadership of problems in their strategy (or, more generally, through convincing some other powerful group of subject matter experts). To summarize: whether radical change goes through mass advocacy and virality or through convincing specific highly-informed groups and experts seems like somewhat of an open question and might depend on the specifics.