🫵YOU🫵 get to help the AGI Safety Act in Congress! This is real!
At around 9 AM on June 25, at a committee hearing titled "Authoritarians and Algorithms: Why U.S. AI Must Lead" (at the 11-minute, 50-second mark in the video), Congressman Raja Krishnamoorthi, a Democrat representing Illinois's 8th congressional district, announced to the committee room: "I'm working on a new bill [he hasn't introduced it yet], 'the AGI Safety Act,' that will require AGI to be aligned with human values and require it to comply with laws that apply to humans. This is just common sense."
The hearing continued with substantive discussion, from members of Congress of both parties(!), of AI safety and the need for policy to prevent misaligned AI.
This is a rare, high-leverage juncture: a member of Congress is actively writing a bill that could (potentially) fully stop the risk of unaligned AGI from US labs. If successful, in just a few months, you might not have to worry about the alignment problem as much, and we can help him with this bill.
Namely, after way too long, I (and others) finally finished a full write-up explaining the AGI Safety Act:
& here's the explanation of the 8 ways folks can be a part of it!
1. Mail
2. Talking to Congress
3. Mail, but to over a thousand congress folk, and in only 5 minutes
4. Talking to Congress, part 2: how to meet with congress folk and talk to them literally in person
5. Come up with ideas that might be put in the official AGI Safety Act
6. Getting AI labs legally required to test whether their AI is risky & to tell everyone if it turns out to be
7. And most importantly, Parallel Projects!
I was talking to a friend of mine about animal welfare, and I thought, "Oh! Ya know, I'm getting a buncha folks to send letters, make calls, meet with congress folk in person, and come up with ideas for a bill, but none of that needs to be AI-specific. I could do all of that at the same time with other issues, like animal welfare!" So if y'all have any ideas for such a bill, do all of the above stuff, just replace "AI risks" with "animal welfare" or whatever else. You can come up with ideas for geopolitics stuff, pandemic prevention stuff, civilizational resilience stuff, animal welfare stuff, and everything else on the community brainstorming spots at
https://app.gather.town/app/Yhi4XYj0zFNWuUNv/EA%20coworking%20and%20lounge?spawnToken=peMqGxRBRjq98J7xTCIK
8. Wanna do something else? Ask me, and there's a good chance I'll tell ya how!
This seems obviously false to me. Just because we have a law in place to restrict the behavior of frontier labs doesn't mean we get to stop worrying about alignment. It instead means that we stop having to worry quite so much that AI labs that fall under US jurisdiction will keep pressing forward in maximally dangerous ways, assuming there are good enforcement mechanisms, the bill doesn't get watered down, China doesn't take the lead and produce more dangerous models first, etc.
I'm not saying that such a law wouldn't be good in theory (I have no idea whether it would actually be good, because we don't yet have the text of the bill), just that this is a bit more excitement than I think would be warranted even if there were such a law.
Yeah, agreed, but either way, it's a motivating sentence, so I reckon it's good to pretend it's true (except for when doing so would lead you to make a misinformed, worse decision. Then, know that it's only maybe true if we try hard/smart enough).
Good point, I changed it!
It would, if wildly more successful than any law in human history has ever been, stop a very small fraction of the risk.
Thanks for this! Are you able to offer a text summary for folks who are busy and don't want to watch a bunch of videos?
Also suggest posting to EA Forum if you haven't already.
Hmm, yeah I guess I could.
The first video just explains what the AGI Safety Act is and the stuff we can do about it, which I reckon this article does fairly well (unless it doesn't; please tell me if there's a way I can make this article better explain "what the AGI Safety Act is and the stuff we can do about it").
For the 3 videos after that, I could make a written version, but I'd guesstimate that my text summary would take longer to read through than the videos. Maybe watch the videos at 1.5x speed? I talk pretty slow in them relative to how fast people listen, so I reckon that would work fine.
the 3 things after that are conveniently not videos; they're written Google Docs.
the 2 things after that are just paragraphs, because they only need a paragraph to describe. Like, if I had to do a text summary of them, I'd just copy/paste them.
So, in summary: videos 2, 3, and 4 can be watched at 1.5x speed. If you want, feel free to have a go at making a text summary of them (I'd probably make a text summary longer than the videos themselves, but maybe you or someone reading this can write shorter), and the rest are already text summaries/have text summaries.
(Sorry if this sounds like I'm rambling. I'm sort of tired, and I sort of was rambling, which would explain why it sounds that way. Sorry!)