Appreciate your take here, Habryka.
I agree it’s clear the AI Action Plan doesn’t reflect most of your priorities. A naive observer might say that’s because the accelerationists spent hundreds of millions of dollars building relationships with the group of people in charge of Congress and the White House, while the safety crowd spent their money on Biden and failed computer science research (i.e. aligning AI to “human values”).
From this position of approximately no political capital, one interpretation of the AI Action Plan is that it’s fantastic that safety concerns got any concessions at all.
Here’s what a cynical observer might say: the government is basically incompetent at doing most things. So the majority of the accelerationist priorities in the plan will not be competently implemented. It’s not like the people who understand what’s going on with AI would choose slogging through the interagency process over making tons of money in industry. And even if they did, who’s going to be more effective at accelerating capabilities? Researchers making $100 million at Meta (who would be doing this work regardless of the Action Plan), or government employees (well known for being able to deliver straightforward priorities like broadband access to rural America)?
The cynical observer might go on to say: and that’s why you should also be pessimistic about the more interesting priorities being implemented effectively.
But here is where optimistic do-gooders can step in: if people who understand AI spend their precious free time thinking as hard as possible about how to do important things—like making sure the US has an excellent system to forecast risks from AI—then there’s a possibility that good ideas generated by think tanks/civil society will be implemented by the US Government. (Hey, maybe these smart altruistic AI people could even work for the government!) I really think this is a place people might be able to make a difference in US policy, quite quickly.