Tl;dr: Alex Bores gave a 90-minute Q+A defending[1] the RAISE bill in the New York State Assembly. I watched/read the entire thing and wrote up some highlights below.
Context: I feel very confused about how to even evaluate politicians, given how much of the real work goes on in private and the incentives to lie. A fundraiser recommended I check out this video as something public that demonstrated Bores's deep understanding of AI safety. This is still performance art (my understanding is that everyone has made up their mind on how they will vote before the cameras start rolling), but he indeed seemed to be incredibly fluid, to have reasonably deep models of the threat, and to be knowledgeable about the regulatory landscape across the US.
Highlights below. Unless in quote marks, statements are punched up for readability and entertainment value. For calibration, or for especially surprising statements, I’ve included footnotes with verbatim quotes from the transcript:
Bores has a vibe of being in control and having a deep understanding of the situation, which the transcript doesn’t capture.
Bores repeatedly addressed concerns about regulatory burden by citing an opposition memo [see correction in comments] that said this bill would add 1 full-time employee, and so wasn’t that burdensome.
I’d be surprised if this were true, and very surprised if it’s what frontier developers said even if it were true, given their incentives. I’m waiting to hear back on the memo.
Given that the memo was written by some guy, I think he treats it with too much authority.
Bores repeatedly cited developers’ own cries for regulation as evidence his bill was necessary. To the extent that changed anyone’s mind, it would make those cries useful, even if the people who made them never intended to act on them.
Jacob Blumencranz: so this legislation doesn’t care if AI kills 87 people? [the bill’s threshold for caring is 100 people]
Bores: this bill is about prospective harms; harms that have already happened are covered by existing laws.
Blumencranz: but isn’t 100 people arbitrary?
Bores: yes, that’s how numbers in legislation work[2]
Brian Maher, clearly reading questions someone else gave him: would this bill penalize running a spreadsheet to predict something based on name, given that name may be correlated with ethnicity?
Bores: on so many different levels, no[3]
Daniel Norber: Doesn’t regulation increase the public’s fear of a Terminator 2 scenario?[4] Can’t we rely on federal regulation by existing agencies?
Bores: “great question for whichever party is currently controlling these agencies.”
Steven Otis: This bill is good and I like it
Mary Beth Walsh: surely we’d prefer a federal solution?
Bores: yup, that’d be nice
Walsh: does this bill help or hinder AI development in NYS?
Bores: help, because it’s saving them from themselves
Lester Chang: won’t someone please think of the fashion industry?[5]
Bores: yeah, we are more concerned about the bioterrorism
Chang: “the only thing I can see that can endanger us [from AI] is scamming and stealing our secrets and money”
Bores: I’m really excited for the cybersecurity defense AI will enable
Michael Novakhov: what problems is this bill trying to prevent?
Bores: Bioterrorism. Also, did you know these things will sometimes refuse to be shut down? To the point of blackmailing developers? That seems pretty bad.
[1] Starting around 8:36, or skip to the first mention of Bores in the index.

[2] MR. BLUMENCRANZ: So, I mean, almost so much so that I—I was curious in some situations because you do include critical harm means death or serious injury of 100 or more people or at least $1 billion in damages to rights, money or property, et cetera. If a system that is—would qualify under this as a large-scale Frontier model were to commit a horrible crime like take down a plane. There were 87 passengers. They would not—that would not be considered a critical harm, and thus, they—you know, how would that affect them versus maybe a plane that went down with 110 people?
MR. BORES: So—so common law jurisprudence, right, would already handle any questions on after-the-fact liability. What this is saying is trying to be specific to the most extreme cases are those the ones that developers need to plan for, or to plan against, I should say, and develop tests in order to prevent. We had to draw the line somewhere. What we are trying to make clear is that we’re really talking about the very extreme versions. We’re not saying they need to plan for every potential use of their models.
MR. BLUMENCRANZ: So it’s not arbitrary, but it’s a little arbitrary.
MR. BORES: Well, any time you choose any number, right, you’re making a choice. So yes, we’ve made a choice on a specific number here, but it is meant to point to the extreme cases.
[3] MR. MAHER: Okay. A couple things. Let’s start with machine learning—machine system analysis. So just to give you a hypothetical—and I—I know you talked about a couple of different larger businesses and only certain companies being subject to this bill. But when we talk about things like algorithms and, you know, general machine learning, I think of just regular computer software. Like, let’s say an Excel spreadsheet. So I know you have liabilities in here and penalties in here. So let’s say I’m using an Excel spreadsheet and I ask it to compute, in alphabetical order, a list of 500 companies, and by name—let’s say it’s by last name. It goes from X to Z. Well, theoretically, you could be discriminating against an Asian population or a certain type of population that has those letters that start with that alphabet. Would they have punitive liability?
MR. BORES: No. First of all, it’s—there’s—again, there’s no new PRA as part of this. Second of all, Excel wouldn’t meet the general definition of artificial intelligence to sort algorithms. It’s not something that would meet that. Third of all, it’s certainly not a Frontier model. It’s not 10^26 FLOPs and spending 100 million all on its training. And fourth, there’s no bias or discrimination clauses anywhere in this bill. This is just focused—the reason it’s focused on the largest models and the reason it’s focused on the largest companies is that we are really pinpointing the extreme potentially bad outcomes from artificial intelligence development. That is all this bill is focused on. Not bias or discrimination or any of those other problems which are real problems and we should tackle, but are the subject of other legislation.
[4] MR. NORBER: Because we know since the infancy of AI that there’s been a lot of debate about what will be the final outcome of this. Will this be a Terminator 2 scenario or will this be the best thing that ever happened to humanity? So I know that we should take into concern that we don’t wanna scare anybody with this bill. We’re saying here, now we have New York State admitting that we need to protect ourselves from biological warfare or whatnot. So are there other states that are agreeing to this type of regulation or legislation?
[5] Because from my training from what—what I understand in AI and cyber, because critical infrastructure is based on information. And as is AI machine-learning itself and Frontier is the most advanced model currently right now, unless they change another—another—another label. But all I can see from large institution doing—probably not doing anything nefarious, they want to sell to consumers or to businesses to hopefully to enhance their—their profit. I mean, I can see AI into market. I can see into a fashion industry. I can see into audio-visual because they can manipulate animation. Okay? But I don’t see deaths.
Bores repeatedly addressed concerns about regulatory burden by saying that frontier AI developers’ own memos said this bill would add 1 full time employee, and so wasn’t that burdensome.
I’d be surprised if this was true and very surprised if it’s what frontier developers said even if it was true, given their incentives. I’m waiting to hear back on the memo
As far as I can tell, Bores never said that frontier AI developers’ own memos said this; rather, it was that an opposition memo said this. Bores mentions this memo a few times during the 90 minutes; here’s a typical quote:
I’ll note that what came in as an opposition memo said that they estimated that this would require one full-time employee to comply with.
I believe that this is the memo that Bores was talking about. It was written by Will Rinehart of the American Enterprise Institute, which opposed the bill.
Thanks. It was ambiguous who Bores meant by “they” in the Q+A, but now that I’ve seen the memo I think you’re right. I had a request in to Bores’s office but didn’t hear back until after publication. AFAICT this is written by some guy, in which case it seems like Bores gives it too much weight.
Unless in quote marks, statements are punched up for readability and entertainment value
Your editorial discretion was pretty funny. I liked it, and also appreciated that you called it out explicitly so that I could better track the rhetorical influence on me.