AI safety field-building in Australia should accelerate. My rationale:
OpenAI opened a Sydney office in Dec 2025 and Anthropic is planning to open a Sydney office in 2026. These offices may hire safety staff from local talent, or partner with local auditing, evaluation, and security companies, including Harmony Intelligence, Good Ancestors, and Gradient Institute.
An Australian AISI was announced for early 2026 and is currently hiring. The UK AISI has benefited from close partnerships with Apollo Research, METR, and the LISA office community. There is a community space in Sydney, the Sydney AI Safety Space, and two field-building organizations, AI Safety ANZ and TARA, but these could expand substantially.
Australia seems like a prime location for datacenter build-out. OpenAI published an “AI blueprint” for Australia, calling for datacenter build-out, and started building a $4.6B datacenter in Sydney in Dec 2025. Australia is a NATO partner, a Five Eyes member, and a member of the AUKUS security partnership with the US and UK; it’s much more secure and aligned with US/UK interests than Saudi Arabia. Australia is the world’s second-largest exporter of thermal coal, has vast solar and wind resources, and holds the largest uranium reserves on Earth. Australia is currently quite anti-nuclear, but it has no earthquakes or tsunamis to disrupt power plants. Janet Egan (CNAS) recently called for the development of US military AI projects in Australia, similar to the Pine Gap facility in the Northern Territory. AI safety & security research and political pressure for safety standards should focus on countries with frontier AI companies and datacenters.
Several prominent AI safety researchers have come from Australia, including Marcus Hutter, Buck Shlegeris, Jan Leike (PhD only), Ramana Kumar, Dan Murfet, and Daniel Filan. A decent number of MATS fellows come from Australia. Australian citizens can easily emigrate to the US (E-3 visa) or UK (YMS visa) for work. Seven of the top-100 computer science universities are in Australia.
Australia has a history as a middle power between the US and China, two of its largest trading partners. The country may play a significant role in international diplomacy efforts, such as a superintelligence ban or international AGI project.
Australia has a strong history of environmental activism, which might (or might not) be a useful asset for AI safety political mobilization.
Piggybacking: If people have strong opinions on, or interest in contributing to, said fieldbuilding, I’d be very happy to connect—you can reach me at michael.kerrison@aisafetyanz.com.au :)
Also piggybacking, if anybody is Sydney-based or visiting Sydney, you are welcome to work out of the SydneyAISafetySpace.org (SASS) for free.
We’re not free at the Melbourne AI Safety Hub, but we are all terribly charming.
I am a volunteer with PauseAI Australia, so if anyone wants to connect with our very, very small group, that would be great. We are pushing politicians on superintelligence.
My sense of priorities for an Australian AISI.
Priorities
Enable safe and secure frontier AI training and deployment in Australia
Build pathways around copyright restrictions that prevent frontier AI companies from building training datacenters in Australia; the OpenAI datacenter being built in Sydney is for inference only
Frontier AI will change the nature of work, shape international power, and potentially change what it means to be human, and Australia doesn’t have a seat at the table; fix this
Create jobs and opportunities for Australians to build and contribute to safe and secure AI in Australia, rather than all going overseas
Set clear national guidelines for AI safety, security, and transparency
Set national AI standards similar to SB 53, RAISE Act, EU AI Act
Create a “regulatory market” for AI and empower insurers, standard-setters, and auditors
Allow beneficial AI development by removing regulatory uncertainty
Advance Australian interests in AI safety and security internationally
Contribute to frontier AI evaluations and safety research like UK AISI, US CAISI
Leverage Australia’s role as a “middle power” to create beneficial AI outcomes
Share resources with the international network of AI safety institutes
Not priorities
Overly focus on algorithmic bias, privacy violations, or misinformation/deepfakes to the exclusion of systemic or catastrophic risk from AI
Overly focus on non-frontier AI technologies like recommender systems or AI for drug discovery instead of frontier AI systems like ChatGPT, Claude, Gemini, Grok, Llama, DeepSeek, Kimi
Focus on enforcing or extending copyright laws that prevent AI investment in Australia
What. What is “safe frontier AI training”? We clearly have no way of doing this.
Please do not go to Australia and get them to make building datacenters cheaper.
All the other things do seem useful, but this is the first item on the list and seems really bad!
I’m not sure that I follow why AI safety work should be colocated with data centre build-outs. I don’t think many AI safety researchers would have much of anything to do with data centre infrastructure, and as far as I can tell they may as well be located on the moon.
Very positive on a diversification of AI safety opportunities though.
One reason is that hosting data centers can give countries political influence over AI development, increasing the importance of their governments having reasonable views on AI risks.
Exactly! Also:
AI safety researchers tend to have reasonable views on AI safety and can serve as local advisors to governments, which probably trust foreign experts less.
Securing datacenters for SL5 to prevent proliferation of AGI is really hard and likely requires significant AI security expertise in government and local defence contractors.
A regulatory market approach to AI safety (possibly only useful pre-superintelligence) requires competent local auditors, standard-setters, and insurers.
That seems a stronger argument for AI safety policy experts (such as the ones that the APS is beginning to hire) as opposed to safety researchers.
Maybe there’s an argument that policy experts might chat with researchers at local cafes or meetups etc., but it’s quite second-order, and it seems like a relatively small benefit compared to the wealth of human capital you’d get opening a safety lab somewhere like India.
Yoshua Bengio, Paul Christiano, Geoffrey Irving seem more like technical AI safety experts than AI policy experts, but they arguably have strong influence on governments.
I suspect that some LWers would interpret this as a (bad) argument for countries to build datacenters so they can exercise political control over AGI. I don’t think this works.
What should be done? I think:
Office hubs: Expand SASS, which is in close proximity to the new OpenAI and Anthropic offices in Sydney. Start an AI safety hub in Canberra to support the new AISI. Successful AI safety hubs have benefited from prominent founding member organizations like ARC, CG, Redwood, and MIRI (for Constellation) and Apollo, BlueDot, and MATS (for LISA). Similarly, SASS should bring together orgs like the Gradient Institute and Harmony Intelligence in a shared space, and the new Canberra hub should be built around Good Ancestors. Office hubs can benefit member orgs by providing economies of scale, hosting shared networking events, facilitating collaboration, and providing a pipeline of strong new recruits in the form of office guests and members.
Training programs: Expand TARA and the Sydney AI Safety Fellowship program, focusing on accelerating top talent and building local mentorship capacity for future programs. Don’t focus on maximizing impact on participants; this is less important than reducing the mentorship bottleneck, which is best served by boosting the most advanced participants.
Academic labs: Build relationships with AI/CS academics at UniMelb, Monash, USyd, ANU, UTS, UNSW, UA, UQ, etc. Help launch AI safety courses like Roy Rinberg and Boaz Barak did at Harvard. Other course inspiration is provided by Stanford and CAIS. Start AI safety academic labs like UC Berkeley CHAI, MIT AAG, NYU ARG, Bau Lab, Stanford HAI, CMU FOCAL, etc.
Conferences: Run an annual AI safety conference like the Australian AI Safety Forum 2024, bringing together academia, industry, government, and nonprofit field-builders. EAGx is probably not enough, as many people from academia, industry, and government likely won’t attend.
Could you clarify? Do you mean that if you have the chance to support someone new who would gain a lot since they haven’t participated in many AI safety programs, or the chance to support someone more advanced, you’d suggest picking the latter? With the reasoning being that the former might look like a better bet because of more room to make a difference, but boosting the latter increases the supply of mentors and therefore actually ends up benefiting beginners at least as much.
Yes, I would generally support picking the latter as they have a “faster time to mentorship/research leadership/impact” and the field seems currently bottlenecked on mentorship and research leads, not marginal engineers (though individual research leads might feel bottlenecked on marginal engineers).
We should prioritize people who already have research or engineering experience or a very high iteration speed as we are operating under time constraints; AGI is coming soon. Additionally, I think “research taste” will be more important than engineering ability given AI automation and this takes a long time to build; better to select people with existing research experience they can adapt from another field (also promotes interdisciplinary knowledge transfer).
I talk more about it here.
Tom Everitt did his PhD in Australia too. (As did I, FWIW.)
Ramana Kumar!
Perth also exists!
The Perth Machine Learning Group sometimes hosts AI safety talks or debates. The most recent one had 30 people attend at the Microsoft office, with a wide range of opinions. If anyone is passing through and is interested in meeting up or giving a talk, you can contact me.
There are a decent number of technical machine learning people in Perth, mainly coming from mining and related industries (Perth is somewhat like the Houston of Australia).
@yanni kyriacos
I am interested. I’ve already been talking to several of the people involved, but I’m Melbourne based so I have been limited in my ability to interact.
@megasilverfist there are quite a few of us based in Melbourne. HMU.