We call on governments worldwide to actively respond to the potentially catastrophic risks that advanced artificial intelligence (AI) systems pose to humanity, encompassing threats from misuse, systemic risks, and loss of control. We advocate for the development and ratification of an international AI treaty to reduce these risks and ensure the benefits of AI for all.
[...]
We believe the central aim of an international AI treaty should be to prevent the unchecked escalation of the capabilities of AI systems while preserving their benefits. For such a treaty, we suggest the following core components:
Global Compute Thresholds: Internationally upheld thresholds on the amount of compute used to train any given AI model, with a procedure to lower these over time to account for algorithmic improvements.
CERN for AI Safety: A collaborative AI safety laboratory akin to CERN for pooling resources, expertise, and knowledge in the service of AI safety, and acting as a cooperative platform for safe AI development and safety research.
Safe APIs: Access to the APIs of safe AI models, with their capabilities held within estimated safe limits, to reduce incentives towards a dangerous race in AI development.
Compliance Commission: An international commission responsible for monitoring treaty compliance.
Full letter at https://aitreaty.org/.
I assume that “threshold” here means a cap/maximum, right? So that nobody can train an AI model using more compute than that cap allows?
Or is there another possible meaning here?
That is my interpretation, yes.
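To make the cap reading concrete, here is a minimal sketch (mine, not the letter's) of how such a threshold might be checked, assuming the cap is expressed in training FLOPs. The ~6 × parameters × tokens estimate is a common back-of-the-envelope heuristic, and the 1e26 figure is purely illustrative; the letter does not specify either.

```python
# Illustrative sketch only: the letter does not define the cap or how compute is measured.

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Common rough estimate: ~6 FLOPs per parameter per training token."""
    return 6.0 * n_params * n_tokens

# Hypothetical cap, chosen only for illustration.
COMPUTE_CAP_FLOPS = 1e26

def within_cap(n_params: float, n_tokens: float, cap: float = COMPUTE_CAP_FLOPS) -> bool:
    """True if the planned training run stays at or under the cap."""
    return estimated_training_flops(n_params, n_tokens) <= cap

if __name__ == "__main__":
    # Example: a 70B-parameter model trained on 15T tokens.
    flops = estimated_training_flops(70e9, 15e12)
    print(f"Estimated training compute: {flops:.2e} FLOPs")
    print("Within hypothetical cap:", within_cap(70e9, 15e12))
```

Under the letter's proposal, the cap itself would presumably be lowered over time to account for algorithmic improvements, so the same parameter/token budget could fall out of compliance later.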