Security during takeoff is crucial (probably, depending on how exactly the nonproliferation works)
I think you’re already tracking this, but to spell out a dynamic here a bit more: if the US maintains control over what runs on its datacenters and has substantially more compute on one project than any other actor, then it might still be OK for adversaries to have total visibility into your model weights and everything else you do. You work on a mix of AI R&D and defensive security research with your compute (at a faster rate than they can work on RSI+offense with theirs) until you become protected against spying; after that, your greater compute budget means you can do takeoff faster, and they only reap the benefits of your models up to a relatively early point. Obviously this is super risky, contingent on offense/defense balance and takeoff speeds, and a terrible position to be in, but I think there’s a good chance it’s kinda viable.
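To make the race concrete, here’s a minimal toy simulation of the dynamic (every parameter is invented for illustration, not drawn from anything above): the leader splits its larger compute budget between AI R&D and defensive security work, the adversary free-rides on the leader’s results for as long as spying works, and once cumulative defense work crosses a hypothetical “spying blocked” threshold, the compute gap starts to compound.

```python
# Toy model of the dynamic above. Every number is made up for
# illustration; DEFENSE_THRESHOLD stands in for "enough cumulative
# security research that adversaries can no longer spy on you."

LEADER_COMPUTE = 10.0     # leader's compute budget (arbitrary units per step)
ADVERSARY_COMPUTE = 3.0   # adversary's compute budget (arbitrary units per step)
DEFENSE_FRACTION = 0.5    # fraction of leader compute spent on defensive security
DEFENSE_THRESHOLD = 20.0  # cumulative defense work needed to block spying

def simulate(steps: int = 10) -> None:
    defense_done = 0.0
    leader_capability = 0.0
    adversary_capability = 0.0
    for t in range(steps):
        spying = defense_done < DEFENSE_THRESHOLD
        defense_done += DEFENSE_FRACTION * LEADER_COMPUTE
        leader_capability += (1 - DEFENSE_FRACTION) * LEADER_COMPUTE
        if spying:
            # Total visibility: the adversary steals the leader's results,
            # so their capability simply tracks the leader's. (We ignore
            # their own RSI+offense work during this phase for simplicity.)
            adversary_capability = leader_capability
        else:
            # Spying blocked: the adversary now advances only at its own
            # rate, and the leader's larger budget compounds into a lead.
            adversary_capability += ADVERSARY_COMPUTE
        print(f"t={t:2d} spying={str(spying):5} "
              f"leader={leader_capability:6.1f} "
              f"adversary={adversary_capability:6.1f}")

simulate()
```

With these made-up numbers, spying stops around step 4 and the leader’s lead then grows by two units per step; whether anything like this holds in reality is exactly the offense/defense-balance and takeoff-speed contingency flagged above.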
(Also, there are some things you can do to differentially advantage yourself even during the regime in which adversaries can see everything you do and steal all your results. E.g., your AI researches a bunch of optimization tricks that are specific to a chip model the US holds almost all of, or studies techniques for making a model that can’t be finetuned to pursue different goals without wrecking its capabilities, and implements them in the next generation.)
You still care enormously about security for things like “the datacenters are not destroyed” and “the datacenters are running what you think they’re running” and “the human AI researchers are not secretly saboteurs” and so on, of course.