“Protecting model weights” is aiming too low; I’d like labs to protect their intellectual property too, against state actors. This probably means doing engineering work inside an air-gapped network, yes.
I feel it’s outside the Overton Window to even suggest this, and I’m not sure what to do about that except write a LessWrong shortform, I guess.
Anyway, common pushbacks:
“Employees move between companies and we can’t prevent them sharing what they know”: In the IDF we had secrets in our air-gapped network which people didn’t share, because they understood it was important. I think lab employees could also understand it’s important. I’m not saying this works perfectly, but it works well enough that nation states rely on it when they take security seriously.
“Working in an air-gapped network is annoying”: Yeah, 100%, but it’s doable, and there are many things you can do to make it more comfortable. I worked for about 6 years as a developer in an air-gapped network.
Also, a note of hope: I think it’s not crazy for labs to aim for a development environment that is world-leading in the tradeoff between convenience and security. I don’t know what the U.S. has to offer in terms of a ready-made air-gapped development environment, but I can imagine, for example, Anthropic being able to build something better if they take this project seriously, or at least build some parts really well before the U.S. government comes to fill in the missing parts. Anyway, that’s what I’d aim for.
The case against focusing on securing algorithmic secrets is that it would emphasize and excuse a lower level of transparency, which is potentially pretty bad.
Edit: to be clear, I’m uncertain about the overall bottom line.
Ah, interesting
Still, even if some parts of the architecture are public, it seems good to keep many of the details private, the ones that took the lab months or years to figure out? Seems like a nice moat.
Yes, ideally, but it might be hard to have a more nuanced message. In the ideal world, an AI company would have security for its algorithmic secrets, disclose things that passed a cost-benefit test for disclosure, and generally do good stuff on transparency.
I don’t think this is too nuanced for a lab that understands the importance of security here and wants a good plan (?)
Some hands-on experience with software development without an internet connection, from @niplav, which seems somewhat relevant:
https://www.lesswrong.com/posts/jJ9Hx8ETz5gWGtypf/how-do-you-deal-w-super-stimuli?commentId=3KnBTp6wGYRfgzyF2
I think that as AI tools become more useful, working in an air-gapped network is going to require a larger compromise in productivity. Maybe AI labs are the exception here, as they can deploy their own products in the air-gapped network, but that depends on how much of the productivity gains they can replicate using their own products. For example, an Anthropic employee might not be able to use Cursor unless Anthropic signs a deal with Cursor to deploy it inside the network. Now do this with 10 more products: the infrastructure and compute required might be just too much of a hassle for the company.
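To make the “deploy it inside the network” part concrete, here’s a rough sketch (my own illustration, not anything a lab has published): an in-network coding tool basically has to point its client at an internal inference endpoint instead of the public API. The hostname, token, and model name below are made-up placeholders.

```python
# Hedged sketch: a coding assistant running inside an air-gapped network
# would talk to an in-network inference server instead of the public API.
# "llm.internal.example", the token, and the model name are all placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://llm.internal.example/v1",  # in-network inference endpoint
    api_key="internal-token",                    # issued inside the network, not a public key
)

response = client.chat.completions.create(
    model="in-house-code-model",  # whatever model the lab serves internally
    messages=[{"role": "user", "content": "Refactor this function to avoid the global lock."}],
)
print(response.choices[0].message.content)
```

The hard part isn’t this client-side change; it’s serving the model, the plugins, and every other dependency inside the network, which is exactly the infrastructure-and-compute hassle above.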
Yeah, it will compromise productivity.
I hope we can make the compromise not too painful, especially if we start early and address the problems that will come up before we’re in the critical period where we can’t afford to mess up anymore.
I also think it’s worth it.
More on starting early:
Imagine a lab starts working in an air-gapped network, and one of the 1000 problems that comes up is working from home.
If that problem comes up now (early), then we can say “okay, working from home is allowed”, and we’ll add that problem to the queue of things we’ll prioritize and solve. We can also experiment with it: maybe we can open another secure office closer to the employee’s house; would they like that? If so, we could discuss fancy ways to secure the communication between the offices. If not, we can try something else.
If that problem comes up when security is critical (if we wait), then the solution will be “no more working from home, period”. The security staff will be too overloaded with other problems to experiment with opening another office or to sign a deal with Cursor.