Anton Leicht has a good extension of this model: if neither voters nor donors prioritize an issue, policy can be shaped by technocratic experts. Right after ChatGPT’s release, AI policy briefly looked like that. People wanted some kind of policy response, and the only people with detailed proposals were AI safety experts, so they set the agenda for a while (e.g. the UK AI Safety Summit, the international AISI network, voluntary corporate commitments to mitigate catastrophic risk, and the Biden EO). But that window was short. Once businesses saw that AI policy proposals could hurt their interests (e.g. during the SB 1047 debate, or in discussions of restrictions on open source), they stepped in. Now anti-regulation donors generally have the upper hand over the AI policy wonks, and it’ll take real pressure from voters or pro-safety donors to pass policies that business interests oppose.