I suppose it’s better to at least know you need a plan and think to build a bunker, even if you don’t realize that the bunker will do you absolutely no good against the AGI itself, than to not realize you need a plan at all. And the bunker does potentially help against some other threats, especially in a brief early window?
I think Ilya realizes very clearly that the bunker is not meant as protection against the AGI itself, but only against the turmoil of the “transition period”. He seems to be quite explicit about this in the quoted article, ‘We’re Definitely Going to Build a Bunker Before We Release AGI’: The true story behind the chaos at OpenAI (emphasis mine):
“We’re definitely going to build a bunker before we release AGI,” Sutskever replied. Such a powerful technology would surely become an object of intense desire for governments globally. The core scientists working on the technology would need to be protected. “Of course,” he added, “it’s going to be optional whether you want to get into the bunker.”
Yes, hopefully the authors will fix it in the post.
Meanwhile, the correct link seems to be https://www.lesswrong.com/posts/nuDJNyG5XLQjtvaeg/is-alignment-reducible-to-becoming-more-coherent