@Zach Stein-Perlman, great work on this. I would be interested in you brainstorming some questions that have to do with the labs' stances toward (government) AI policy interventions.
After a quick 5-minute brainstorm, here are some examples of things that seem relevant:
I remember hearing that OpenAI lobbied against the EU AI Act – what's up with that?
I heard a rumor that Congresspeople and their teams reached out to Sam/OpenAI after his testimony. They allegedly asked for OpenAI’s help to craft legislation around licensing, and then OpenAI refused. Is that true?
Sam said we might need an IAEA for AI at some point – what did he mean by this? At what point would he see that as valuable?
In general, what do labs think the US government should be doing? What proposals would they actively support or even help bring about? (Flagging ofc that there are concerns about actual and perceived regulatory capture, but there are also major advantages to having industry players support & contribute to meaningful regulation).
Senator Cory Booker recently asked Jack Clark something along the lines of “what is your top policy priority right now//what would you do if you were a Senator.” Jack responded with something along the lines of “I would make sure the government can deploy AI successfully. We need a testing regime to better understand risks, but the main risk is that we don’t use AI enough, and we need to make sure we stay at the cutting edge.” What’s up with that?
Why haven’t Dario and Jack made public statements about specific government interventions? Do they believe that there are some circumstances under which a moratorium would need to be implemented, labs would need to be nationalized (or internationalized), or something else would need to occur to curb race dynamics? (This could be asked of any of the lab CEOs/policy team leads – I don’t mean to be picking on Anthropic, though I think Sam/OpenAI have had more public statements here, and I think the other labs are scoring more poorly across the board//don’t fully buy into the risks in the first place.)
Big tech is spending a lot of money on AI lobbying. How much is each lab spending (this is something you can estimate with publicly available data; see the rough sketch below), and what are they actually lobbying for/against?
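On the spending question specifically: here's a minimal sketch of how one might tally a company's reported federal lobbying spend, assuming the Senate Lobbying Disclosure Act (LDA) filings API works roughly the way I remember. The endpoint, query parameters, and field names are my best guess and would need to be checked against the API docs; the client names are just examples.

```python
# Rough sketch: sum a client's reported federal lobbying spend from the
# Senate LDA filings API. Endpoint/parameter/field names are assumptions
# and should be verified against the API documentation.
import requests

BASE = "https://lda.senate.gov/api/v1/filings/"

def total_lobbying_spend(client_name: str, year: int) -> float:
    """Sum the dollar amounts reported across all filings for one client in one year."""
    total = 0.0
    url = f"{BASE}?client_name={client_name}&filing_year={year}"
    while url:
        page = requests.get(url, timeout=30).json()
        for filing in page.get("results", []):
            # Organizations lobbying on their own behalf report "expenses";
            # hired lobbying firms report "income" received from the client.
            amount = filing.get("expenses") or filing.get("income") or 0
            total += float(amount)
        url = page.get("next")  # follow pagination if the API provides a "next" link
    return total

if __name__ == "__main__":
    for name in ["OpenAI", "Anthropic", "Google"]:  # example client names
        print(name, total_lobbying_spend(name, 2023))
```

Even a correct version of this only gives topline numbers (and will double-count amended filings unless you filter them); the filings list general issue areas, but what the money is actually pushing for or against is much harder to see from the disclosures alone.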
I imagine there’s a lot more in this general category of “labs and how they are interacting with governments and how they are contributing to broader AI policy efforts”, and I’d be excited to see AI Lab Watch (or just you) dive into this more.
Thanks. Briefly:
I’m not sure what the theory of change for listing such questions is.
In the context of policy advocacy, I think it’s sometimes fine/good for labs to say somewhat different things publicly vs privately. Like, if I were in charge of a lab and believed (1) the EU AI Act will almost certainly pass and (2) it has some major bugs that make my life harder without safety benefits, I might publicly say “I support (the goals of) the EU AI Act” and privately put some effort into removing those bugs, which is technically lobbying to weaken the Act.
(^I’m not claiming that particular labs did ~this rather than actually lobby against the Act. I just think it’s messy and regulation isn’t a one-dimensional thing that you’re for or against.)
Edit: this comment was misleading and partially replied to a strawman. I agree it would be good for the labs and their leaders to publicly say some things about recommended regulation (beyond what they already do) and their lobbying. I’m nervous about trying to litigate rumors for reasons I haven’t explained.
Edit 2: based on https://corporateeurope.org/en/2023/11/byte-byte, https://time.com/6288245/openai-eu-lobbying-ai-act/, and background information, I believe that OpenAI, Microsoft, Google, and Meta privately lobbied to make the EU AI Act worse—especially by lobbying against rules for foundation models—and that this is inconsistent with OpenAI’s and Altman’s public statements.
Right now, I think one of the most credible ways for a lab to show its commitment to safety is through its engagement with governments.
I didn’t mean to imply that a lab should automatically be considered “bad” if its public advocacy and its private advocacy differ.
However, when assessing how “responsible” various actors are, I think investigating questions relating to their public comms, engagement with government, policy proposals, lobbying efforts, etc. would be valuable.
If Lab A had slightly better internal governance but Lab B had better effects on “government governance”, I would say that Lab B is more “responsible” on net.