If it weren’t for this Futurama clip (1999) I would have completely forgotten how everyone considered it a huge problem that American political parties were so similar.
I think that by far the most important thing in this space is for a Democrat to win the 2028 presidential election
how can we distinguish people who:
arrive at this statement through strong evidence and reasoning
sincerely believe it, but followed shitty practices to get to it such that their opinion carries no weight
are knowingly saying this to advance some other goal
why the change?
“[thing that clearly exists] doesn’t exist” is established code for “is not the crisp category you think it is” (e.g. Why Fish Don’t Exist).
Additionally, the definition of murder might vary from place to place, but if there isn’t a crisp definition for a given jurisdiction, we say that place lacks law and order. And since treaties are definitionally between multiple parties, those parties need to share a definition of anything material, in a way they don’t need to share a definition of murder that occurs on their own soil.
I’m with you. My friend got the recipe by asking Romeo for it and we’ve made them ourselves, although it’s complicated.
Treaties Don’t Exist
When I think of international regulation, I think treaty. But treaty doesn’t have a universal definition; whether an international agreement is called a treaty, agreement, or accord is a vibes-based decision. Beyond that, there are forms of international cooperation that are definitely not treaties. It’s not obvious what hard choices or soft vibes will be best for AI policy, so I assembled a short list of options.
For a post about international treaties, this one takes a pretty US-centric approach. Sorry about that. If anyone can speak to the internal process for international cooperation in another country, please share.
Treaties
Psych: treaty has an extremely strict definition within the US. Treaties are things ratified by a ⅔ majority in the Senate using the procedure the Constitution lays out for international treaties. Anything else is, from the perspective of the US, a sparkling international agreement.
From 1946 to 1999, 94% of the international agreements the US completed were sparkling, rather than treaties. My impression is the share is even higher now.
The most recently ratified treaty I (and by “I” I mean Claude) found was a tax treaty with Chile, ratified in 2023. It was originally signed in 2010, which might give us a hint as to why this procedure isn’t used very often.
Sparkling International Agreements
If the president doesn’t want to try for the impossible ⅔ majority required by a proper treaty, he has a few options. He can declare US agreement by fiat via executive order (called an Executive Agreement), or can put the agreement to Congress and use the procedure for passing laws with a simple majority (called a Congressional-Executive Agreement). Either has the same standing in international law as a ratified treaty.
NAFTA was passed by Congressional-Executive Agreement.
Interagency Networks
Federal agencies can choose to cooperate with agencies in other countries without any of the three branches of government giving them explicit instructions to do so. These cross-agency collaborations are generally not binding and have no enforcement mechanisms, but are aimed at problems where everyone wants to work together and benefits from doing so.
For example, the Basel Committee on Banking Supervision provides “guidelines and standards” on banking best practices but is powerless to make member banks follow them, and adherence is mixed.
Mutual recognition arrangements
Two agencies in different countries agree to voluntarily accept the other’s authority on some matter, without involving politicians. For example, the US Securities and Exchange Commission (SEC) has bilateral “memoranda of understanding” with many countries enabling the exchange of information.
Standards Bodies
These are non-governmental bodies that government agencies may listen to if they feel like it, e.g. standards from the International Organization for Standardization (ISO). Or a small country may judge the WHO to be pretty on the ball, and choose to trust the WHO’s recommendations instead of developing its own.
Informal coordination
Sometimes people do stuff together. The US and South Korea conduct joint military training exercises. Pre-Basel-Committee banking governors shared tips with each other. Bureaucrats from around the world attend conferences together and later text their new friends asking for thoughts.
Soft “law”
Law is a misnomer here because soft means “lacking even the pretense of enforcement.” These are your UN resolutions not backed by the US threatening to bomb someone, your G7 aspirational press releases, your Paris Accords that allow each country to set their own CO2 target.
Eric S. Raymond describes these as “wordcel bullshit”, and my inner engineer is quick to agree, but as I’ll say more about in a future take, I think the truth might be more complicated.
product endorsement: these masks don’t fog up my glasses and are generally easier to breathe through.
I definitely took the original post to be describing a terminal value rather than a manifestation of something deeper; if you meant instrumental, that handles a lot of my objections.
But while I’m here, another data point: I’ve heard multiple young women say they won’t make explicit requests in bed, because what they mean is “weight this action moderately higher among your list of options” but the dude hears “keep doing this on rote until I give you another instruction”. I haven’t heard this from anyone over 30; hopefully that means someone learned something.
What are the theories of change behind SB53 and the RAISE act? What else do they need to see those theories through?
These laws both aim to regulate frontier AI, and their success has been used to argue for support for their respective political owners (Scott Wiener and Alex Bores[1]). But neither law is going to do much on its own. That’s not necessarily a problem- starting small and working your way up is a classic political strategy. But I’d like to understand whether these laws are essentially symbolic and coalitional, or good on the margin (the way changing health insurance rules got real gay people health insurance that paid for real health care, which would be beneficial even if it didn’t advance gay marriage an iota), or groundwork-laying (the way a law against murder is not useful without cops or prisons, but is still a necessary component).
1. ^
which isn’t necessarily wrong even if the bills are themselves useless- maybe the best way to get optimal AI legislation is to support people who support any AI legislation, at least for now.
A big one would be that…
When I look at the women I know who actually ask guys out, they are consistently the ones landing especially desirable guys. For women, explicitly asking a guy out buys an absolutely enormous amount of value; it completely dwarfs any other change a typical woman can consider in terms of dating impact
...isn’t the experience of me or women I know. Asking men out leads to boyfriends who are generally passive and offload a bunch of work onto you (even when they’re BDSM tops). But I found myself not wanting to comment with this initially because I couldn’t immediately explain CNC fantasies with it.
Here are some other models that explain part of your data. None of them explains all of it, but each explains something additional that deep nonconsent preferences don’t. Also, as I typed them up I realized they explained more than I thought- but again, your initial strong frame had pushed that knowledge out.
women are scared men will get angry if they go from “yes” to “no”, in a way they won’t if the woman goes from “----” to “no”, so women delay being explicit until they have all the information
“I would like to submit a formal request to touch your boob” is the lowest possible skill way to ask for consent. High skill ways are appreciated, in part because the space of boob-touching is large and you will never manage to convey sufficient detail verbally.
A partner who is good at reading you is valuable in lots of ways- perhaps valuable enough to give up a bunch of guys who would have been merely okay.
people are scared of rejection at every stage and women don’t have the same push to get over it
Gen Z men seem to have more fear/less push to get over it and indeed, AIUI aren’t asking women out very much.
this can also explain CNC fantasies- there’s no risk of owning a desire and seeing it rejected
women are disincentivized to express desire
(all models are possibilities and generalities, people are complicated, etc)
What I don’t want is people being like “this is problematic and missing important things” without actually saying a single thing that it’s wrong about or presenting any alternative model.
Reading this gave me the same feeling I used to get reading Brent’s stuff: you’re pointing at a real phenomenon but also pushing a bunch of data out of frame so it’s really hard to challenge the model. If you want counterarguments, I think you need to relax that, and especially not insist on a single explanation for every member of a set that was selected for being best explained by your favorite explanation.
Thanks. It was ambiguous who Bores meant by “they” in the Q+A, but now that I’ve seen the memo I think you’re right. I had a request in to Bores’ office but didn’t hear back until after publication. AFAICT this is written by some guy, in which case it seems like Bores gives it too much weight.
Tl;dr: Alex Bores gave a 90-minute Q+A defending[1] the RAISE bill in the New York State Assembly. I watched/read the entire thing and wrote up some highlights below.
Context: I feel very confused about how to even evaluate politicians, given how much of the real work goes on in private and the incentives to lie. A fundraiser recommended I check out this video as something public that demonstrated Bores’ deep understanding of AI safety. This is still performance art- my understanding is everyone has made up their mind on how they will vote before the cameras start rolling- but he indeed seemed to be incredibly fluid, to have reasonably deep models of the threat, and to be knowledgeable about the regulatory landscape across the US.
Highlights below. Unless in quote marks, statements are punched up for readability and entertainment value. For calibration, and for especially surprising statements, I’ve included footnotes with verbatim quotes from the transcript:
Bores has a vibe of being in control and having a deep understanding of the situation, which the transcript doesn’t capture.
Bores repeatedly addressed concerns about regulatory burden by citing an opposition memo [see correction in comments] that said this bill would add one full-time employee, and so wasn’t that burdensome. Given that the memo was written by some guy, I think he treats it with too much authority.
Bores repeatedly cited developers’ own cries for regulation as evidence his bill was necessary. To the extent that changed anyone’s mind, it would make those cries useful, even if the people who made them never intended to act on them.
Jacob Blumencranz: so this legislation doesn’t care if AI kills 87 people? [the bill’s threshold for caring is 100 people]
Bores: this bill is about prospective harms; harms that have already happened are covered by existing laws.
Blumencranz: but isn’t 100 people arbitrary?
Bores: yes that’s how numbers in legislation work[2]
Brian Maher, clearly reading questions someone else gave him: would this bill penalize running a spreadsheet to predict something based on name, given that name may be correlated with ethnicity?
Bores: on so many different levels, no[3]
Daniel Norber: Doesn’t regulation increase the public’s fear of a Terminator 2 scenario?[4] Can’t we rely on federal regulation by existing agencies?
Bores: “great question for whichever party is currently controlling these agencies.”
Steven Otis: This bill is good and I like it
Mary Beth Walsh: surely we’d prefer a federal solution?
Bores: yup, that’d be nice
Walsh: does this bill help or hinder AI development in NYS?
Bores: help because it’s saving them from themselves
Lester Chang: won’t someone please think of the fashion industry?[5]
Bores: yeah we are more concerned about the bioterrorism
Chang: “the only thing I can see that can endanger us [from AI] is scamming and stealing our secrets and money”
Bores: I’m really excited for the cybersecurity defense AI will enable
Michael Novakhov: what problems is this bill trying to prevent?
Bores: Bioterrorism. Also, did you know these things will sometimes refuse to be shut down? To the point of blackmailing developers? That seems pretty bad.
1. ^
Starting around 8:36, or skip to the first mention of Bores in the index
2. ^
MR. BLUMENCRANZ: So, I mean, almost so much so that I—I was curious in some situations because you do include critical harm means death or serious injury of 100 or more people or at least $1 billion in damages to rights, money or property, et cetera. If a system that is—would qualify under this as a large-scale Frontier model were to commit a horrible crime like take down a plane. There were 87 passengers. They would not—that would not be considered a critical harm, and thus, they—you know, how would that affect them versus maybe a plane that went down with 110 people?
MR. BORES: So—so common law jurisprudence, right, would already handle any questions on after-the-fact liability. What this is saying is trying to be specific to the most extreme cases are those the ones that developers need to plan for, or to plan against, I should say, and develop tests in order to prevent. We had to draw the line somewhere. What we are trying to make clear is that we’re really talking about the very extreme versions. We’re not saying they need to plan for very potential use of their models.
MR. BLUMENCRANZ: So it’s not arbitrary, but it’s a little arbitrary.
MR. BORES: Well, any time you choose any number, right, you’re making a choice. So yes, we’ve made a choice on a specific number here, but it is meant to point to the extreme cases.
3. ^
MR. MAHER: Okay. A couple things. Let’s start with machine learning—machine system analysis. So just to give you a hypothetical—and I—I know you talked about a couple of different larger businesses and only certain companies being subject to this bill. But when we talk about things like algorithms and, you know, general machine learning, I think of just regular computer software. Like, let’s say an Excel spreadsheet. So I know you have liabilities in here and penalties in here. So let’s say I’m using an Excel spreadsheet and I ask it to compute, in alphabetical order, a list of 500 companies, and by name—let’s say it’s by last name. It goes from X to Z. Well, theoretically, you could be discriminating against an Asian population or a certain type of population that has those letters that start with that alphabet. Would they have punitive liability?
MR. BORES: No. First of all, it’s—there’s—again, there’s no new PRA as part of this. Second of all, Excel wouldn’t meet the general definition of artificial intelligence to sort algorithms. It’s not something that would meet that. Third of all, it’s certainly not a Frontier model. It’s not 10^26 FLOPs and spending 100 million all on its training. And fourth, there’s no bias or discrimination clauses anywhere in this bill. This is just focused—the reason it’s focused on the largest models and the reason it’s focused on the largest companies is that we are really pinpointing the extreme potentially bad outcomes from artificial intelligence development. That is all this bill is focused on. Not bias or discrimination or any of those other problems which are real problems and we should tackle, but are the subject of other legislation.
4. ^
MR. NORBER: Because we know since the infancy of AI that there’s been a lot of debate about what will be the final outcome of this. Will this be a Terminator 2 scenario or will this be the best thing that ever happened to humanity? So I know that we should take into concern that we don’t wanna scare anybody with this bill. We’re saying here, now we have New York State admitting that we need to protect ourselves from biological warfare or whatnot. So are there other states that are agreeing to this type of regulation or legislation?
5. ^
MR. CHANG: Because from my training from what—what I understand in AI and cyber, because critical infrastructure is based on information. And as is AI machine-learning itself and Frontier is the most advanced model currently right now, unless they change another—another—another label. But all I can see from large institution doing—probably not doing anything nefarious, they want to sell to consumers or to businesses to hopefully to enhance their—their profit. I mean, I can see AI into market. I can see into a fashion industry. I can see into audio-visual because they can manipulate animation. Okay? But I don’t see deaths.
AIUI, Bores and Wiener were the sole champions of their bills. Other people voted for the bills, obviously, but didn’t advocate for them.
Everyone knows there are too many video games. And everyone knows we’re subject to increasing social atomization and a meaning crisis. The Bottom Feeder, a longtime game dev, combines these facts: we have too many works of art because making art is a replacement for social connection and meaningful work. If we can’t have a bowling league or a job with tangible results, we can at least spend our time alone in our room making art. This rings true to me, although I think he’s wrong about the fixes.
I don’t think I understand how legislation is crafted and passed well enough to form a vision, and don’t have anyone to defer to either.
It sounds like your view is that (say) a House with 5 legislators who are amazing on AI X-risk, 15 who seem like they’re kinda pretty good, and 415 others is actively worse than one with 5 amazing legislators and 430 others?
I think it’s quite possible 1 great / 15 maybes is worse than 1/0, depending on how you define “seem like kinda pretty good”. Or put another way, I don’t trust the ecosystem to distinguish kinda pretty good from moderately bad. Here are some ways someone who was nominally an AI safety advocate could end up being net harmful:
Suck up resources better spent on other people. Money, airtime, staff...
Be off-putting in a way that ends up tarring AI safety (I’m pretty worried that Scott Wiener’s woke reputation will pass on to AI safety).
Make the coordination harder. If you have 5 very smart people whose top priority is AI, you can pivot pretty quickly. If you have those 5 people, plus 15 pretty smart people who are invested enough to feel offended if not included but not enough to put in the necessary time, pivoting is much harder.
Pass mediocre or counterproductive legislation/regulation that eats up the public’s appetite for AI safety work.
I’m especially worried about regulatory capture masquerading as safety.
This is pretty sensitive to current conditions. If donors are inexhaustible, I care less about suboptimal distribution of money. Once you have a core that’s working productively (5 might be enough) you can support a second ring where the pretty good people can go without risk of them trying to steer.
On the other hand, we might want a policy of automatically supporting anyone opposing someone the pro-AI PACs support, since the counterfactual is worse.
agreed- I’d be especially interested in expansion or links on how the loss of local media affects things.
seems like a lost opportunity to have this here and not on Eric’s actual post.
Can you say more about what being agentic would have looked like?