The surveillance would have to be good enough to prevent all attempts made by the most powerful governments to develop in secret something that may (eventually) require nothing beyond a few programmers in a few rooms running code.
This is a real issue. Verifying compliance with AI-limitation agreements is much harder than with nuclear agreements, and even those have verification problems. Carl’s paper suggests lie detection and other advanced transparency measures as possibilities, but it’s unclear whether governments will tolerate this even when the future of the galaxy is at stake.