Q. Why am I doing this?
Superintelligent AI might kill every person on Earth by 2030 (unless people coordinate to pause AI research). I want the public at large to understand the gravity of the situation.
Curiosity. I have no idea how my body will react to not eating for many days.
Q. How long is the fast?
Minimum 7 days. I will not be fasting to death. I am consuming water and electrolytes. No food. I might consider fasting to death if ASI were less than a year away with above 50% probability. Thankfully, we are not there yet.
Q. How to learn more about this?
Watch my YouTube channel and Yudkowsky’s recent interviews. Send me an email or DM (email preferred) if you’d like to ask me questions personally, or to be included on the livestream yourself.
You have presented a stance, but not any actual arguments behind it.
Since you’re an AI researcher, I’m sure we can have a technical conversation. Why do you think model scaling will fail and why do you think RL scaling will fail?
If I completely ignore the hype, I don’t know of an argument that convinces me AGI is very unlikely before 2030. The fact that there’s hype, and strong incentives for the hype, isn’t evidence that the balance of the non-hype technical knowledge about LLMs points any particular way. Dismissing short timelines because of the hype is not a valid argument, even if the conclusion happens to be correct. The whole (a)-(g) sequence of events is even more tenuous.
Also, I’m not sure why timelines should matter (outside of long sequences of fortunate events), since AGI will arrive at some point in any case, and it’s more robust for a Pause to occur earlier (a global treaty that treats AI datacenters like uranium enrichment plants, only worse, because the blast radius is the whole world). The closer the available hardware and the accumulated results of computation-heavy experiments are to AGI when a Pause begins, the more difficult and inconvenient it becomes for everyone to make sure AGI isn’t created unilaterally.
What if you’re wrong?