I think the number of people who would come to a march in their own city on a weekend or a workday evening is significantly higher than the number who would travel cross-country for it.
I think a 100k march would be a signal to policymakers and would get into the news anyway, whether it is in NYC, SF, Washington, or the middle of the desert.
Also, IMO, it would make more sense to start with a lower threshold — 10k, for example.
Valentin2026
I wish there were some discussion about the location. Why Washington, DC? Aren’t most AI-safety-aligned people in the Bay Area and London (and a few in continental Europe)? If we want to get 100,000 people ASAP, I feel any of the following locations would be better than Washington, DC:
Bay Area—there is already a huge base of people nearby; I bet you would get a few thousand registrations immediately
NYC—just a lot of people who may have read IABIED and gotten the message
London/a European capital—in the current situation, it is easier for American citizens to travel to Europe than the other way around
I see that in the very first post in this series, you write that you think more children is good, and that all the later posts will be about how, not why. It would be nice if you gave a link to that very first post here, and maybe briefly summarized it as well.
I think more children as an abstract concept (a spherical cow) is a good thing. But in a universe where the choice is between “more children” and “more immigration”, it seems the latter might be the better way. Of course, I am biased, since I am an immigrant myself. But immediately getting an educated young adult, without spending money on raising a child and maybe spending only a little on their integration, can be the better choice. It’s better for society. It is better for the immigrant.
In a distant future (if AGI does not arrive), when the Earth’s population starts to decrease, we will need measures to support fertility. Right now, I think, we need measures to support immigration.
That one is interesting! Where do the probabilities come from?
Could you do it as a text post with a short explanation of each?
I think yes, it would help to avoid confusion.
Have a sufficient financial safety net
I think this condition is important only if I am going to leave my full-time job and switch to unpaid AI Safety projects. For some people (who have financial security), this may be the case. Many, including myself, do not have this security. It does not mean I can’t do any projects until I have enough funds to survive. Rather, it means I can only do part-time projects (for me, that was organising mentoring programs and leading an AI Safety Camp project). Meanwhile, I still think applying to roles that seem to be a good fit for me makes quite a lot of sense—I would rather spend 40 hours/week working on AI Safety than on a regular job. Maybe it should be something like 80% projects, 20% applying (the numbers are arbitrary).
I feel that the percentage of people who can afford to have no paid work and only do AI Safety projects until AGI arrives is not that high. It would be nice to also have a strategy and recommendations for what a person can do for AI Safety with 10 hours/week, or 5, or even 1. I think the threshold for doing something useful is quite low—even with 5 minutes/week, one can, e.g., repost things on social networks.
The idea of total extinction might seem wild to non-nerds. Maybe it is better to start with smaller things:
-the job you are doing will be done by AI
-whatever education you or your kids get in college, it won’t get you a job
-even if you are working on AI, with AI you can still be replaced completely by AI
This may at least make them think more about the impact on society and feel the importance of the problem at a gut level, and from there we could go on to more serious issues.

I would say nuclear war would be the least sci-fi scenario. An arms race leads to using AI everywhere to beat the opponent, including in the systems responsible for detecting and responding to the opponent’s missile strikes, and then it goes rogue.
I think bioweapons can be persuasive. We know there are viruses like smallpox with very high lethality and very high virulence. Actually, COVID and smallpox would be a good starting point for an explanation. I would say something like: “Remember how COVID spread everywhere despite all restrictions? Its incubation period was roughly 2 weeks. For smallpox, it can be more like 40 days. It is incredibly contagious. In the modern world, in 40 days it will be everywhere, and then it is too late. The lethality rate is 50–80%. And for smallpox we have a vaccine; that is why we are safe against smallpox. AI already designs viruses and could easily be used to design something like smallpox for which this vaccine does not work.”
Thank you very much for catching the mistake! I checked, you are completely right.
I don’t think they have passed it in the full sense. Before LLMs, there was a 5-minute Turing test, and some chatbots were passing it. I think 5 minutes is not enough. I bet that if you give me 10 hours, any currently existing LLM, and a human, communicating only via text, I will be able to figure out which is which (even if both try hard to convince me of their humanity). I don’t think an LLM can yet come up with a consistent, non-contradictory life story. It would be an interesting experiment :)
Do you mean similarity at the outer level (e.g., the Turing test) or at the inner level (e.g., the neural network structure should resemble the brain structure)?
If the first—would it mean that when an AI passes the Turing test, it is sentient?
If the second—what are the criteria for similarity? Full brain emulation, or something less demanding?
Are you working with a SOTA model? Here, mathematicians report quite a different story: https://www.scientificamerican.com/article/inside-the-secret-meeting-where-mathematicians-struggled-to-outsmart-ai/
I guess “good at” was improper wording. I did not mean that they do not produce nonsense. I meant that they can sometimes produce a correct solution. It is like a person who cannot run 100 meters in 10 seconds every day: even if they do it in 5% of attempts, that is already impressive and shows it is possible in principle. And I guess “Ph.D. level” sounded as if they could write a Ph.D. thesis from scratch. I just meant that there are short, well-formulated problems that would take a Ph.D. student a few hours, if not a few days, which current LLMs can solve in a non-negligible fraction of cases.
Can you expand your argument for why LLMs will not reach AGI? Like, what exactly is the fundamental obstacle they will never pass? So far they have been successfully completing longer and longer (for humans) tasks: https://benjamintodd.substack.com/p/the-most-important-graph-in-ai-right
I also can’t see why, in a few generations, LLMs won’t be able to run a company, as you suggested. Moreover, I don’t see why it is necessary to get all the way to AGI. LLMs are already fairly good at solving complicated, Ph.D.-level mathematical problems, and this keeps improving. Essentially, we just need an LLM version of an AI researcher. To create ASI, you don’t need a billion Sam Altmans; you need a billion Ilya Sutskevers. Is there any reason to assume an LLM will never be able to become an excellent AI researcher?
I agree, they have a really bad life, but Eliezer seems to be talking here about those who work 60 hours/week to ensure their kids will go to a good school. A slightly different problem.
And regarding homeless people, there are different cases. In some, UBI will indeed help. But unfortunately, in many cases the person has mental health problems or an addiction, and simply giving them money may not help.
I feel that one of the key elements of the problem is misplaced anxiety. If an ancient farmer stopped working hard, he would not get enough food, so his whole family would die. In modern Western society, the risk of dying from not working is nearly zero. (You are far more likely to die from exhausting yourself by working too hard.) When someone works too hard, it is usually not fear of dying too early, or of their kids dying. It is fear of failure, of being the underdog, of not doing what you are supposed to, and plenty of other constructs that ancient people simply never got to—first they needed to survive. In this sense, we are far better off than even a hundred years ago.
Can UBI eliminate this fear? Maybe it can partially help, but people will still likely work hard to secure their future and the future of their children. Maybe making psychotherapy (to address the fear itself) more available to those with low income is a better solution. I understand this would require training many more specialists than we have now. However, some people report benefiting from talking with GPT as a therapist https://x.com/Kat__Woods/status/1644021980948201473 , so maybe that can help.
What is the application deadline? I did not find it in the post. Thank you!
Yes, absolutely! We will open the application for mentees later.
So far nothing—I was distracted by other stuff in my life. Yes, let’s chat! frombranestobrains@gmail.com
After the rest of the USA is destroyed, a very unstable situation (especially considering how many people have guns) is quite likely. In my opinion, countries (and remote parts of countries) that will not be under attack at all are a much better option.
I completely disagree. It will mobilize supporters, get into the news, and attract attention. The next march may attract 15k, the march after that 20k, etc.
Example: during the protests in Moscow, Russia in 2011 after the electoral fraud, the first big rally gathered 50k–100k people. The second gathered 100k–200k, since people had seen that it was totally fine to come to such rallies.
To put it another way: if your goal is a 100k march in Washington, DC, I think an intermediate 10k march in SF would increase the chances of achieving that goal.