The AI Countdown Clock


I made this clock, counting down the time left until we build AGI.

It tracks the most famous Metaculus prediction on the topic; the project was inspired by several recent dives in the community’s expected date. Updates are automatic, so the clock reflects the constant fluctuations in collective opinion.
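
If you’re curious about the plumbing, the basic flow is just: poll the Metaculus API, read the community median, and diff it against the current time. Here’s a minimal Python sketch of that idea; note that the question ID and the JSON field path are my assumptions, so check them against the live API before reusing this.

```python
# Minimal sketch of the clock's data flow: fetch the Metaculus community
# prediction and turn it into a countdown. The question ID and the JSON
# field path below are assumptions, not verified against the live API.
from datetime import datetime, timezone

import requests

# Assumed ID for the "Date of Artificial General Intelligence" question.
QUESTION_URL = "https://www.metaculus.com/api2/questions/5121/"


def fetch_expected_agi_date() -> datetime:
    resp = requests.get(QUESTION_URL, timeout=10)
    resp.raise_for_status()
    data = resp.json()
    # Assumed location of the community median (an ISO 8601 date string);
    # the real payload may nest or encode this differently.
    median = data["community_prediction"]["full"]["q2"]
    return datetime.fromisoformat(median).replace(tzinfo=timezone.utc)


def countdown() -> str:
    remaining = fetch_expected_agi_date() - datetime.now(timezone.utc)
    years, days = divmod(remaining.days, 365)
    return f"{years} years and {days} days until AGI, per Metaculus"


if __name__ == "__main__":
    print(countdown())
```

The real clock just runs something like this on a schedule, so the display drifts whenever the community updates.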

Currently, it’s sitting at 2028, i.e. the end of the next presidential term. The year of the LA Olympics. Not so far away.

There were a few motivations behind this project:

  1. Civilizational preparedness. Many people are working on making sure this transition is a good one. Many more probably should be. I don’t want to be alarmist, but the less abstract we can make the question, the better. In this regard, it’s similar to the Doomsday Clock.

  2. Personal logistics. I frequently find myself making decisions about long-term projects that would be deeply affected by the advent of AGI. Having kids, for example.

    The prediction is obviously far from absolute, and I’m not about to cap my savings at 5 years and 11 months of living expenses. But it’s good to be reminded that the status quo is no longer the best model for the future.

  3. Savoring the remainder. Most likely, AGI will be the beginning of the end for humanity. That’s not to say we will necessarily go extinct, but we will almost certainly stop being “human” in the recognizable/traditional sense.

    For many years, I’ve used the Last Sunday as my new tab page. It shows you how many Sundays you have left in your life, if you live to an average age. I’ve gotten some strange looks when it accidentally pops up during a presentation. I know it seems morbid, like a fixation on the end. But I don’t see it that way; it’s not about the end, but the finitude of the middle. That precious scarcity.

    I’ve spent a lot of time thinking about the meaning of being human, but this framing mostly dissolves that angst. It’s like: if I live in San Francisco, my daily decisions about what to do here are impossibly broad. But if I’m a tourist, visiting for a few days, then my priority is to do all of the most “San Francisco” things I can: see the Golden Gate, eat a burrito in the Mission, sit among purple-red succulents on a foggy beach cliff.

    So, if I only have X years left of being human, I should focus on doing the most human things I can. Whatever that means to me. Conveniently, this applies in worlds both with and without AGI, since in the latter I’ll still die. But the shorter timeline makes it more real.

Let me know if you have any feedback or suggestions.