I’m skeptical about the extent to which the latter can be done. That’s like saying an AI lab should suddenly care about AI safety. One can’t really bolt a security mandate onto an existing institution and expect a competent result.