Regarding AI research programs that are closed and are pushing capabilities ahead as fast as possible, I see two reasons for hope.
First: if they are smart enough to genuinely have a chance of creating superhuman AI, hopefully they are also smart enough to understand that superhuman AI could have its own agenda, and could pose a threat from within rivaling any external threats that may be motivating the researchers.
Second: as AI advances, AI itself can contribute to alignment theory. The technology itself therefore has some possibility of improving the strategic wisdom of any group trying to develop it.