The case against a focus on algorithmic secret security is that it would emphasize and excuse a lower level of transparency, which is potentially pretty bad.
Edit: to be clear, I’m uncertain about the overall bottom line.
Ah, interesting
Still, even if some parts of the architecture are public, it seems good to keep many details private, details that took the lab months or years to figure out? Seems like a nice moat.
Yes, ideally, but it might be hard to convey a more nuanced message. Like, in the ideal world, an AI company would have algorithmic secret security, disclose the things that passed a cost-benefit analysis for disclosure, and generally do good stuff on transparency.
I don’t think this is too nuanced for a lab that understands the importance of security here and wants a good plan(?)