I’m not sure that I follow why AI safety work should be colocated with data centre build-outs. I don’t think many AI safety researchers would have much of anything to do with data centre infrastructure, and as far as I can tell they may as well be located on the moon.
Very positive on a diversification of AI safety opportunities though.
One reason is that hosting data centers can give countries political influence over AI development, increasing the importance of their governments having reasonable views on AI risks.
Exactly! Also:
AI safety researchers tend to have reasonable views on AI safety and can serve as local advisors to governments, which probably trust foreign experts less.
Securing datacenters to SL5 to prevent proliferation of AGI is really hard and likely requires significant AI security expertise in government and among local defence contractors.
A regulatory market approach to AI safety (possibly only useful pre-superintelligence) requires competent local auditors, standard-setters, and insurers.
That seems like a stronger argument for AI safety policy experts (such as the ones the APS is beginning to hire) as opposed to safety researchers.
Maybe there’s an argument that policy experts might chat with researchers at local cafes or meetups etc., but it’s quite second-order, and it seems like a relatively small benefit compared to the wealth of human capital you’d get by opening a safety lab somewhere like India.
Yoshua Bengio, Paul Christiano, and Geoffrey Irving seem more like technical AI safety experts than AI policy experts, but they arguably have strong influence on governments.
I suspect that some LWers would interpret this as a (bad) argument for countries to build datacenters so they can exercise political control over AGI. I don’t think this works.