This seems like the most frank official communication on AGI extinction risk from any AGI lab to date. Some quotes:
But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction.
While superintelligence seems far off now, we believe it could arrive this decade.
[...]
Our current techniques for aligning AI, such as reinforcement learning from human feedback, rely on humans’ ability to supervise AI. But humans won’t be able to reliably supervise AI systems much smarter than us, and so our current alignment techniques will not scale to superintelligence. We need new scientific and technical breakthroughs.
[...]
Our goal is to solve the core technical challenges of superintelligence alignment in four years.