[Question] What are the best arguments and/or plans for doing work in “AI policy”?

I’m looking to get oriented in the space of “AI policy”: interventions that involve world governments (particularly the US government) and existential risk from strong AI.

When I hear people talk about “AI policy”, my initial reaction is skepticism, because (so far) I can think of very few actions governments could take that seem to help with the core problems of AI x-risk. However, I haven’t read much about this area, and I don’t know what actual policy recommendations people have in mind.

So what should I read to start? Can people link to plans and proposals in AI policy space?

Research papers, general-interest web pages, and your own models are all admissible.

Thanks.