It sounds like your view is that (say) a House with 5 legislators who are amazing on AI X-risk, 15 who seem like they’re kinda pretty good, and 415 others is actively worse than one with 5 amazing legislators and 430 others?
I think it’s quite possible that 1 great / 15 maybes is worse than 1 great / 0 maybes, depending on how you define “seem like kinda pretty good”. Or put another way, I don’t trust the ecosystem to distinguish kinda pretty good from mildly-to-moderately bad. Here are some ways someone who was nominally an AI safety advocate could end up being net harmful:
- Suck up resources better spent on other people: money, airtime, staff...
- Be off-putting in a way that ends up tarring AI safety (I’m pretty worried that Scott Wiener’s woke reputation will pass on to AI safety).
- Make coordination harder. If you have 5 very smart people whose top priority is AI, you can pivot pretty quickly. If you have those 5 people plus 15 pretty smart people who are invested enough to feel offended if not included, but not invested enough to put in the necessary time, pivoting is much harder.
- Pass mediocre or counterproductive legislation/regulation that eats up the public’s appetite for AI safety work. I’m especially worried about regulatory capture masquerading as safety.
This is pretty sensitive to current conditions. If donors are inexhaustible, I care less about suboptimal distribution of money. Once you have a core that’s working productively (5 might be enough), you can support a second ring where the pretty-good people can go without risk of them trying to steer.
On the other hand, we might want a policy of automatically supporting anyone opposing someone the pro-AI PACs support, since the counterfactual is worse.