There’s actually very little shooing! It can be a challenge to get on the calendar of the highest-ranking staffers, because they’re very busy, but once you’re in the room they’re almost always happy to chat. AI is a sexy topic, even in DC, and if you’re coming in with a public safety concern then that’s something interesting that they haven’t heard 100 times already.
I’ll talk more about this in the next post, but there’s a difference between “researching to identify a miracle strategy for more efficient advocacy” and “researching because it’s of solid academic interest.” The first option mostly involves learning a lot about politics! If you’re not intensely studying the preferences of particular Senators, their donors, their chiefs of staff, and so on, then how can you hope to identify a strategy that will be an order of magnitude more politically effective than the alternatives?
So, I would never criticize anyone for saying “Gosh, it doesn’t seem like existing strategies can succeed in the time available; let me look for a much better strategy.” However, that doesn’t seem to be the main purpose or effect of most AI governance research. Researchers aren’t explicitly evaluating the effectiveness of different strategies or proposing new political strategies; they’re just discussing the risks and benefits of various policies in a general way—and it’s not the quality of our policies that’s the bottleneck.
I’m personally still trying out different ways to potentially make the future better. I’m starting to think the best use of my skills is to find an idea unrelated to AI, get lucky and make some money (haha), and use that to help.
Thanks, that makes a lot of sense.