[Epistemic status: puzzling something out, very uncertain, optimizing for epistemic legibility so I’m easy to argue with. All specific numbers are ass pulls]
In my ideal world, the anti-X-risk political ecosystem has an abundance of obviously high quality candidates. This doesn’t appear to be on the table.
Adjusting for realism, my favorite is probably “have a wide top of funnel for AI safety candidates, narrow sharply as they have time to demonstrate traits we can judge them on”. But this depends on having an ecosystem with good judgement, and I’m not sure that that’s actually realistic.
I think it’s probably pretty easy to identify the top political candidates. These are the people like Alex Bores, who have a track record of getting hard legislation through their legislature and leave public evidence of strongly understanding the issue. If I had to put a number on it I’d count us as indescribably lucky to have 5% of candidates be this obviously good.
It’s also pretty easy to identify the worst candidates, by track record or by the financial support of pro-AI PACs. Let’s optimistically put this at the bottom 50% of candidates.
I don’t expect getting the obviously great 5% elected/appointed to be enough to win on AI safety issues. I also low-confidence expect supporting the middle 45% uniformly to be worse than doing nothing. And not just because some of them are lying, but because useful legislation is such a narrow target, and lots of people mean well without having the skill to actually be helpful. There are also the negative externalities of crowding out better candidates. Say we need the 80th–95th percentile candidates to win. A candidate who competes for the same support, who is well-meaning but less competent than the 80th percentile, is actively taking away from better candidates.*
[*You can come up with math where this is okay if resources are abundant and the lesser candidates are merely less good rather than actively bad, but I expect resources to be tight and doing more harm than good to be easy. You can also solve a lot of this problem with web-of-trust, but we need something that will scale]
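The footnote’s “you can come up with math where this is okay” can be made concrete with a toy sketch. Everything below (tier values, costs, candidate counts) is invented for illustration, in the same ass-pull spirit as the post’s own numbers:

```python
# Toy model of the footnote's claim. All numbers are invented.
V_TOP = 1.0   # value of electing an 80th-95th percentile candidate
V_MID = 0.1   # value of electing a well-meaning 50th-80th percentile one
              # (the footnote notes this can even be negative: actively bad)
COST = 1.0    # support needed to make one candidate viable

def portfolio_value(budget, n_top, n_mid_funded_first):
    """Fund `n_mid_funded_first` mid-tier candidates before any top-tier
    ones, then spend whatever remains on top-tier candidates."""
    mid = min(n_mid_funded_first, int(budget // COST))
    remaining = budget - mid * COST
    top = min(n_top, int(remaining // COST))
    return top * V_TOP + mid * V_MID

# Abundant resources: mid-tier candidates displace nobody, add a little.
abundant = portfolio_value(budget=100, n_top=5, n_mid_funded_first=15)
# Tight resources: each mid-tier candidate funded displaces a top-tier one.
tight = portfolio_value(budget=5, n_top=5, n_mid_funded_first=3)
tight_none = portfolio_value(budget=5, n_top=5, n_mid_funded_first=0)
```

Under an abundant budget the mid-tier candidates are mildly positive; under a tight budget, funding them makes the portfolio strictly worse than funding none of them, which is the crowding-out worry in miniature.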
In this world, it becomes critical to distinguish 80-95th percentile candidates from 50th-80th. Let’s optimistically assume this can happen, even if it hasn’t yet. In that exact state, how should I feel about assisting a 0-track record candidate? Or maybe, how should someone with no budget constraints think about it?
Logically, my guess is that starting to lay down fertilizer so the judges have something to judge once they’re in place is net helpful. Intuitively, I feel the opposite. If forced to justify this I will say things like “having a bunch of mid people with things to lose makes it harder to create the skillful judging system”, but these might be rationalizations.
Some cruxes in this model:
Being net beneficial on AI safety via government work is a very narrow target.
It is easy to do net harm to your sincerely held goals.
If there is a money firehose, people will mouth the words needed to access it, regardless of their actual beliefs or intentions.
^ has significant costs beyond the mere loss of money.
Judging the impact of politicians and appointees is extremely difficult even when you have all the information.
Most of the relevant information will be hidden.
Bringing on too many candidates too quickly, before a judgement apparatus is set up, will harm the ability to set up the judgement apparatus.
The value of having 1 legislator who Gets It is vastly higher than the value of having 0 legislators who Get It, because that legislator can introduce bills. Alex Bores and Scott Wiener are the perfect examples of this; it didn’t take a majority of their state legislatures to Get It in order to get some kind of AI regulation bill passed, but there would not have been a comparably good bill on the table without them.
Yeah, it’s possible all you need is a few high-powered people who Get It and a good ecosystem for lobbying everyone else, but then you have to evaluate the lobbyists.
Did you have a different vision for how to get really good AI X-risk legislation passed?
I’d interpreted your post as already implicitly sharing something like orthonormal’s view, since I took you to be arguing that we should prioritize getting a small number of legislators who really Get It.
I don’t think I understand how legislation is crafted and passed well enough to form a vision, and don’t have anyone to defer to either.
I don’t think we used high-powered lobbyists in NY or CA (someone correct me if I’m wrong); their legislators already wanted to regulate the big AI companies, and they (and their staffers) are smart enough to distinguish it from the sloppy AI bills they usually see.
At the federal level, both Dems and the GOP want to go after the big AI companies, and I believe there’s a bill with teeth that almost all of Congress would privately agree with. The problem is that the anti-regulation lobby already has Trump in their pocket, so they just need to buy one-third of the Senate to stop a veto override. MAGA senators are the most obvious targets, because Republicans don’t remain in office for long if they feud with Trump.
This got me thinking: what’s the marginal return on placing or educating staffers, as opposed to electing a believer?
It sounds like your view is that (say) a House with 5 legislators who are amazing on AI X-risk, 15 who seem like they’re kinda pretty good, and 415 others is actively worse than one with 5 amazing legislators and 430 others?
I’m not sure why you think this. I’d think that most of the ways in which the pretty good legislators could be disappointing would make them more similar to the 415 others, or less influential, rather than actively worse. And often it would still be somewhat helpful to have them in Congress, e.g. they’d generally be more likely than random legislators to vote for a good AI bill that has a chance at becoming law.
One big way it could backfire to have a pretty-good-seeming legislator in the house is if they become a leading voice on AI while having misguided views on AI. But the concern about candidates who have a combination of prioritizing AI, being very competent, and having misguided views on AI feels different than just having extremely high standards for amazingness on AI X-risk.
I think it’s quite possible 1 great / 15 maybes is worse than 1/0, depending on how you define “seem like kinda pretty good”. Or put another way, I don’t trust the ecosystem to distinguish kinda pretty good from mildly-to-moderately bad. Here are some ways someone who was nominally an AI safety advocate could end up being net harmful:
Suck up resources better spent on other people: money, airtime, staff...
Be off-putting in a way that ends up tarring AI safety (I’m pretty worried that Scott Wiener’s woke reputation will pass on to AI safety).
Make the coordination harder. If you have 5 very smart people whose top priority is AI, you can pivot pretty quickly. If you have those 5 people, plus 15 pretty smart people who are invested enough to feel offended if not included but not enough to put in the necessary time, pivoting is much harder.
Pass mediocre or counterproductive legislation/regulation that eats up the public’s appetite for AI safety work.
I’m especially worried about regulatory capture masquerading as safety.
This is pretty sensitive to current conditions. If donors are inexhaustible, I care less about suboptimal distribution of money. Once you have a core that’s working productively (5 might be enough) you can support a second ring where the pretty good people can go without risk of them trying to steer.
On the other hand, we might want a policy of automatically supporting anyone opposing someone the pro-AI PACs support, since the counterfactual is worse.
I’m not quite sure which skills you were referring to here (“lots of people mean well without having the skill to actually be helpful”), but here are some thoughts:
I don’t expect most (good) senators to really be skilled at crafting policy that helps with x-risk; it’s not really their job. What they need to be good at is knowing who to defer to.
One thing I think they need is to know about Legible vs. Illegible AI Safety Problems, and to track that there are going to be illegible problems that are not easy to articulate and that they themselves might not understand. (But, somehow, also not be vulnerable to any random impressive-sounding guy with an illegible problem he assures them is important.)
Realistically, the way I expect to deal with illegible problems is to convert them into legible problems, so maybe this doesn’t matter that much.
For politicians in particular I mean skills like “knowing who to defer to” and “horse trading to get the bill passed without losing critical parts”.
But thinking about the ecosystem as a whole, writing and nursing good legislation is also a skill I don’t particularly know how to evaluate, which means I couldn’t evaluate PACs or lobbyists even if I had perfect knowledge of their actions and thoughts.
I found this quite helpful, thank you!
I’m actually a bit surprised these frames were new for you-in-particular, curious which bits were helpful?
I’m confused about this model. You need A) leaders to suggest/champion good legislation, and then B) enough legislators to actually pass said legislation, no? So what’s the point of having A without B? I suppose in your model, bad B’s worsen the good legislation suggested by the A’s, but I don’t see an in-principle way to resolve that problem in a majoritarian legislature rather than a one-person dictatorship. How do you go from having great A’s but no B’s, to getting useful legislation signed into law?
AIUI, Bores and Wiener were the sole champions of their bills. Other people voted for the bills, obviously, but they didn’t advocate for them.
Sure, but I meant that you still need the votes of those other people, too. And the fewer votes you have, the more compromises make it into the final bill.
A link (for the claim about Bores’s track record) would actually be nice.
D’oh. I’m looking for it.
Edit: https://nystateassembly.granicus.com/player/clip/9124?meta_id=260666