We invent a way for AGIs to learn faster than humans: Why is this even in the table? This should be 1.0 because it’s already a known fact that AGI learns faster than humans. Again, from the Llama training run, the model went from knowing nothing to human level in its domain in one month. That’s faster. (Requiring far more data than humans isn’t an issue.)
100% feels overconfident. Some algorithms learning some things faster than humans is not proof that AGI will learn all things faster than humans. Just look at self-driving: it’s taking AI far longer to learn than it takes human teenagers.
AGI inference costs drop below $25/hr (per human equivalent): Well, A100s are $0.87 per hour. A transformative AGI might use 32 A100s: 32 × $0.87 = $27.84 an hour. Looks like we’re at 1.0 on this one also.
100% feels overconfident. We don’t know whether transformative AGI will need 32 A100s, or more. Our essay explains why we think it’s more. Even if you disagree with us, I struggle to see how you can be 100% sure.
Oh, to clarify, we’re not predicting AGI will be achieved by brain simulation. We’re using the human brain as a starting point for guessing how much compute AGI will need, and then applying a giant confidence interval (to account for cases where AGI is way more efficient, as well as way less efficient). It’s the most uncertain part of our analysis and we’re open to updating.
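The cost disagreement above comes down to two inputs, both of which are assumptions rather than known quantities: the per-GPU hourly price (the $0.87 A100 rate is one cloud quote) and how many GPUs one human-equivalent AGI would need (32 is one guess; the essay argues for more). A minimal sketch of the back-of-envelope math, showing how the conclusion swings with the GPU-count assumption:

```python
# Back-of-envelope AGI inference cost per human-equivalent hour.
# Both inputs are assumptions from the discussion, not established facts.

def inference_cost_per_hour(gpus: int, price_per_gpu_hour: float) -> float:
    """Hourly cost of one human-equivalent AGI running on `gpus` GPUs."""
    return gpus * price_per_gpu_hour

A100_PRICE = 0.87  # assumed $/hr per A100 (one cloud quote; prices vary)

# The 32-GPU scenario: already just over the $25/hr threshold.
print(f"${inference_cost_per_hour(32, A100_PRICE):.2f}/hr")   # prints $27.84/hr

# If transformative AGI needs 10x that compute, the cost scales linearly.
print(f"${inference_cost_per_hour(320, A100_PRICE):.2f}/hr")  # prints $278.40/hr
```

The point of the sketch is not the specific numbers but the sensitivity: the "below $25/hr" claim is linear in a GPU count nobody knows, so certainty in either direction is hard to justify.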
For posterity, by 2030, I predict we will not have:
AI drivers that work in any country
AI swim instructors
AI that can do all of my current job at OpenAI in 2023
AI that can get into a 2017 Toyota Prius and drive it
AI that cleans my home (e.g., laundry, dishwashing, vacuuming, and/or wiping)
AI retail workers
AI managers
AI CEOs running their own companies
Self-replicating AIs running around the internet acquiring resources
Here are some of my predictions from the past:
Predictions about the year 2050, written 7ish years ago: https://www.tedsanders.com/predictions-about-the-year-2050/
Predictions on self-driving from 5 years ago: https://www.tedsanders.com/on-self-driving-cars/