🫵YOU🫵 get to help pass the AGI Safety Act in Congress! This is real!

At around 9 AM on June 25, at a committee hearing titled "Authoritarians and Algorithms: Why U.S. AI Must Lead" (at the 11-minute, 50-second mark in the video), Congressman Raja Krishnamoorthi, a Democrat representing Illinois's 8th congressional district, announced to the committee room: "I'm working on a new bill [he hasn't introduced it yet], 'the AGI Safety Act,' that will require AGI to be aligned with human values and require it to comply with laws that apply to humans. This is just common sense."

The hearing continued with substantive discussion, from members of Congress of both parties(!), of AI safety and the need for policy to prevent misaligned AI.

This is a rare, high-leverage juncture: a member of Congress is actively writing a bill that could (potentially) stop the risk of unaligned AGI from US labs. If it succeeds, in just a few months you might not have to worry about the alignment problem as much, and we can help him with this bill.

Namely, after way too long, I (and others) have finally finished a full write-up explaining the AGI Safety Act:

& here's the explanation of the 8 ways folks can be a part of it!

1. Mail

2. Talking to Congress

3. Mail, but to over a thousand congress folk, and in only 5 minutes

4. Talking to Congress, part 2: how to literally meet with congress folk and talk to them in person

5. Come up with ideas that might be put in the official AGI Safety Act

6. Getting AI labs to be required, by law, to test whether their AI is risky, and to tell everyone if it turns out to be risky

7. And most importantly, parallel projects!

I was talking to a friend of mine about animal welfare, and I thought, "Oh! Ya know, I'm getting a buncha folks to send letters, make calls, meet with congress folk in person, and come up with ideas for a bill, but none of that needs to be AI-specific. I could do all of that at the same time with other issues, like animal welfare!" So if y'all have any ideas for such a bill, do all of the above stuff, just replace "AI risks" with "animal welfare," or whatever else. You can come up with ideas for geopolitics stuff, pandemic-prevention stuff, civilizational-resilience stuff, animal-welfare stuff, and everything else at the community brainstorming spots on

https://app.gather.town/app/Yhi4XYj0zFNWuUNv/EA%20coworking%20and%20lounge?spawnToken=peMqGxRBRjq98J7xTCIK
8. Wanna do something else? Ask me, and there's a good chance I'll tell ya how!