Latacora might be of interest to some AI Safety organizations


Am I affiliated with Latacora?

No, not in any way.

What is Latacora?

From their own webpage:

Latacora does just one kind of engagement: we join your engineering team virtually, create a security practice, and run it. As your company matures, we’ll make sure that practice also matures. When it makes sense for you to bring some (or all!) of our security capabilities in-house, we’ll help you make those hires.

Why might some AI Safety organizations be interested in Latacora?

Security is hard. Only amateurs roll their own security. But identifying good security hires and practices is hard if you don’t know what they are or what they look like. This is similar to how it’s difficult to hire for taste if you don’t have taste. In particular, I’d expect top AI Safety hires to be great at AI Safety, but not necessarily at security practices. And even if they are, security is probably not their comparative advantage.

Recently, Yudkowsky has mentioned third countries stealing dangerous AI models and running them as one consideration feeding into his pessimism. This doesn’t seem like a hypothetical concern. Latacora seems like it would allow an organization to mitigate, or perhaps solve, this problem by throwing money at it.

How did you hear about Latacora? Why do you think they’re good?

I don’t remember how I heard about them. They were probably mentioned by someone grungier than me on some Linux forum, and I’ve been following their blog for a while since. I think they’re good for the same reasons I think Zvi is good: I read their stuff, and they seem to say sensible things according to my judgment.

Are you suggesting that AI Safety organizations literally hire these people?

Yes.