I don’t know Bores personally. I looked through some of his communications and social media, and most of it seemed reasonable (I noticed his Twitter has an unusually small amount of mud-slinging). I did see one thread with some troubling comments:
This bill [SB 53] recognizes that in order to win the AI race, our AI needs to be both safe and trustworthy.
In this case, pro-safety is the pro-innovation position.
[...]
As a New Yorker, I have to point out that SB53 includes a cloud compute cluster & @GavinNewsom
said in his signing memo “The future happens [in CA] first”
...but @KathyHochul established EmpireAI in April 2024. So, thanks to our Gov’s vision, the future actually happens in NY 😉
Why I find this troubling:
Bores seems to want to race to build AI. Racing shortens timelines and decreases safety.
“pro-safety is the pro-innovation position” seems false? If AI companies maximize profit by being safe, then they’d do it without regulation, so why would we need regulation? If they don’t maximize profit by being safe, then pro-safety is not (maximally) pro-innovation.
I think our best hope for survival is that governments become sufficiently aware of the danger of AI that they agree to ban frontier AI development until we can figure out how to make it safe. If Bores is indeed pro-innovation on AI, then he would presumably oppose such a ban. My guess is the average Democrat would be basically fine with banning frontier AI if the political winds shifted that way, but Bores would have a more strongly-held stance, in which case he would be worse than the average Democrat (but still probably better than the average Republican).
He calls out New York directing funding to a new AI research lab as if that’s a good thing, which I don’t think it is. (I don’t actually know what EmpireAI is doing; I looked at their website, but it doesn’t really say anything. It says they only fund “responsible” research, but I really don’t trust them to know what qualifies as responsible.)
Politicians are often pressured to say those sorts of things, so perhaps he would still support an AI pause if it became politically feasible. So these comments aren’t overwhelmingly troubling. But they’re troubling.
If those quotes accurately reflect his stance on AI innovation and arms races, then he might still be better than the average Democrat if the increased chance of getting weak-to-moderate AI safety regulations outweighs the decreased chance of getting strong regulations, but it’s unclear to me.
I will note that this was the only worrying comment I saw from Bores, although I didn’t find many comments on AI safety.
“pro-safety is the pro-innovation position” seems false? If AI companies maximize profit by being safe, then they’d do it without regulation, so why would we need regulation? If they don’t maximize profit by being safe, then pro-safety is not (maximally) pro-innovation.
Companies are not perfectly efficient rational actors, and innovation is not the same thing as profit, so I disagree here. For example, it is easy for companies to get caught in a race to the bottom, where each risks a major disaster that causes public backlash and destroys the industry. That would be terrible for innovation, but the expected cost to each company is outweighed by the benefit of racing (the toy payoff matrix below makes this concrete). E.g., Chernobyl was terrible for innovation.
Or there can be coordination problems. Sometimes companies want regulation, or are happy with it, but don’t want to act unilaterally, because that would impose costs on just them, in the same way that a billionaire can want higher taxes without unilaterally donating to the government.
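To make the race-to-the-bottom structure concrete, here is a minimal sketch in Python with made-up payoff numbers (nothing here is from the thread; the numbers just encode “racing exploits a cautious rival, but mutual safety beats mutual racing”):

```python
# Toy two-firm payoff matrix for the race to the bottom described above.
# Numbers are invented for illustration; payoffs[my_move][their_move]
# gives my expected payoff.
payoffs = {
    "race": {"race": 1, "safe": 4},  # racing wins share off a cautious rival
    "safe": {"race": 0, "safe": 3},  # mutual safety beats mutual racing
}

for their_move in ("race", "safe"):
    best = max(("race", "safe"), key=lambda m: payoffs[m][their_move])
    print(f"If my rival plays {their_move!r}, my best reply is {best!r}")
```

Either way the rival plays, racing is the better reply (a dominant strategy), yet mutual racing pays each firm 1 while mutual safety pays each 3. That gap is what regulation, by binding everyone at once, can close.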