we can’t safely build a superintelligence, and if we do, we will not remain in control.
Thank you! Let me clarify my phrasing.
When I speak of losing control, I don’t just mean losing control over the AI. I also mean losing any real control over our future. The future of the human race may be decided at a meeting that we do not organize, that we do not control, and that we do not necessarily get to speak at.
I do, however, agree that futures where someone remains in control of the superintelligence also look worrisome to me, because we haven’t solved the alignment of powerful humans in any lasting way despite 10,000 years of trying.