Since this got nominated, now’s a good time to jump in and note that I wish that I had chosen different terminology for this post.
I was intending for “final crunch time” to be a riff on Eliezer saying, here, that we are currently in crunch time.
This is crunch time for the whole human species, and not just for us but for the intergalactic civilization whose existence depends on us. This is the hour before the final exam and we’re trying to get as much studying done as possible.
I said explicitly, in this post, “I’m going to refer to this last stretch of a few months to a few years, ‘final crunch time’, as distinct from just ‘crunch time’, ie this century.”
But predictably, in retrospect, that one sentence didn’t stick with people when recalling the post later, and the “final” in “final crunch time” gets dropped.
I would have preferred to preserve the resonance of Eliezer’s original point, that right now is crunch time, when we’re trying to prepare as best we can for our pass-fail test as a species, and I think that point gets eaten by my choice of terminology.
I’m inclined to go back and rewrite this post, as “How do we prepare for the AI endgame?”
(That terminology is better at not clashing with Eliezer’s original point, though I also think that it is somewhat less evocative of the right thing. “Endgame” gives the impression of “the final, last steps of a grand strategy, cooperating and competing with other strategic actors on the gameboard”, while “crunch time” gives the impression of “frantically trying to make sense of what’s happening in the hope of hacking together a passable solution”, which I think is closer to the spirit of what we should be preparing for.)
If I do that rewrite, it seems ideal to do it before this essay gets packaged into a book.
But maybe it’s too late, and changing terminology after people have been exposed to it would be counterproductive?
I think that if we were in crunch time in 2010, your phrasing is fine, because we’re in final crunch time now. If you have an alarm, please ring it. Though, also make sure to mention that coprotective safety is looking tractable and likely to succeed if we try! Despite the drawbacks, the Diplomacy AI gave me a lot of hope that we can solve the hard cooperation problem.
Do people have thoughts?