Brief notes on the Senate hearing on AI oversight

On May 16th, 2023, Sam Altman of OpenAI; Gary Marcus, professor at New York University; and Christina Montgomery, chief privacy and trust officer at IBM, testified before Congress on topics related to AI regulation. A link to the hearing can be found here: YouTube: CNBC Senate hearing on AI oversight.

Through the lens of AI alignment, the substance of the conversation focused on near-term effects such as job loss, bias, harmful content, targeted advertising, privacy implications, election interference, IP and copyright issues, and other similar topics. Sam Altman has spoken about hard AI risks before, but he was not explicit about them in the hearing. Gary Marcus communicated that his estimate for AGI is 50 years out, placing his timelines far in the future. There was an interesting moment where Gary Marcus pressed Sam Altman to explicitly state his worst fears, but Sam did not say anything explicit about x-risk and gave a broad, vague answer: Twitter link.

A proposed mechanism for safety was the concept of a “Nutrition Label” or a “Data Sheet” summarizing what a model has been trained on. This seems like a misguided exercise given the vast amount of data LLMs are trained on. Summarizing that volume of data is a difficult task, and most orgs keep their data sets private for competitive reasons and potential forward copyright risk. I also find flawed the premise that a summary of the training set would be predictive of, or informative about, the capabilities, truth approximation, and biases of large text models.
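To make the critique concrete, here is a minimal sketch of what such a “Data Sheet” might look like in code. This is purely illustrative: the hearing did not specify a schema, and every field name here is a hypothetical assumption. Note how little even a fully populated record like this would actually tell you about a model’s capabilities or biases.

```python
from dataclasses import dataclass, field

# Hypothetical "Nutrition Label" / "Data Sheet" for a trained model.
# Every field below is an illustrative assumption, not a schema proposed
# in the hearing.
@dataclass
class ModelDataSheet:
    model_name: str
    training_data_sources: list[str]  # e.g. web crawls, encyclopedias, books
    approximate_token_count: int      # total tokens seen during training
    data_cutoff_date: str             # latest date covered by the data
    known_licensing_issues: list[str] = field(default_factory=list)
    content_filters_applied: list[str] = field(default_factory=list)

# Example usage with made-up values:
sheet = ModelDataSheet(
    model_name="example-llm",
    training_data_sources=["web crawl", "encyclopedia dump", "licensed books"],
    approximate_token_count=1_000_000_000_000,  # ~1T tokens
    data_cutoff_date="2021-09",
)
print(sheet)
```

Even with all of these fields filled in honestly, nothing in the record predicts whether the resulting model hallucinates, encodes particular biases, or exhibits dangerous capabilities, which is the core weakness of the proposal.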

Sam Altman, Gary Marcus, and Christina Montgomery all asked for more regulation, with Sam Altman and Prof. Marcus calling for a new regulatory agency. There were some allusions in the hearing to previous private conversations between the speakers and members of Congress, so it seems likely that some very substantive lobbying for regulation is happening in closed-door settings. For example, Section 230 was brought up multiple times, from copyright and privacy perspectives. Requiring a license to work on this technology was raised as another alternative.

Sen. John Kennedy explicitly called out an existential threat, “… a berserk wing of the artificial intelligence community that intentionally or unintentionally could use AI to kill all of us and hurt us the entire time we’re dying”, and asked the three witnesses to propose policies to prevent such risk. Prof. Marcus explicitly called out longer-term risk and asked for more funding for AI safety, noting the mixed uses of the term. Sam Altman mentioned regulation, licensing, and tests for exfiltration and self-replication.

Gary Marcus, like Sam Altman, seemed quite familiar with the general scope of existential threats, for example mentioning self-improvement capabilities. His timelines are very long, however, so whatever P(doom) he holds does not appear to be concentrated in the near term.

Generally, the trend in the hearing was toward regulating and preventing short-term risks, potentially by licensing and regulating the development of models, with very little discussion of existential-style risks from AGI. I hope that more questions like Sen. Kennedy’s come up and that a broader discussion of existential risk enters the public discourse at the congressional level.