My sense of priorities for an Australian AISI (AI Safety Institute).
Priorities
- Enable safe and secure frontier AI training and deployment in Australia
  - Build pathways around the copyright restrictions that prevent frontier AI companies from building training datacenters in Australia; the OpenAI datacenter being built in Sydney is for inference only
  - Frontier AI will change the nature of work, shape international power, and potentially change what it means to be human, and Australia doesn't have a seat at the table; fix this
  - Create jobs and opportunities for Australians to build and contribute to safe and secure AI in Australia, rather than have that work all go overseas
- Set clear national guidelines for AI safety, security, and transparency
  - Set national AI standards similar to SB 53, the RAISE Act, and the EU AI Act
  - Create a "regulatory market" for AI and empower insurers, standard-setters, and auditors
  - Enable beneficial AI development by removing regulatory uncertainty
- Advance Australian interests in AI safety and security internationally
  - Contribute to frontier AI evaluations and safety research, as the UK AISI and US CAISI do
  - Leverage Australia's role as a "middle power" to create beneficial AI outcomes
  - Share resources with the international network of AI safety institutes
Not priorities
- Overly focusing on algorithmic bias, privacy violations, or misinformation/deepfakes to the exclusion of systemic or catastrophic risks from AI
- Overly focusing on non-frontier AI technologies, like recommender systems or AI for drug discovery, instead of frontier AI systems like ChatGPT, Claude, Gemini, Grok, Llama, DeepSeek, and Kimi
- Focusing on enforcing or extending copyright laws in ways that prevent AI investment in Australia
> Enable safe and secure frontier AI training and deployment in Australia
>
> Build pathways around the copyright restrictions that prevent frontier AI companies from building training datacenters in Australia; the OpenAI datacenter being built in Sydney is for inference only

What? What is "safe frontier AI training"? We clearly have no way of doing this.

Please do not go to Australia and get them to make building datacenters cheaper.

All the other things do seem useful, but this is the first item on the list and it seems really bad!