Nice. I’m tentatively excited about this… are there any backfire risks? My impression was that the AI governance people didn’t know what to push for because of massive strategic uncertainty. But this seems like a good candidate for something they can do that is fairly likely to be non-negative. Or maybe the idea is that if we think more we’ll find even better interventions, and political capital should be conserved until then?