Yes, Buck, thank you for responding! A robust whitelist (especially at the hardware level, where each GPU effectively becomes a computer that secures itself) potentially solves it. Of course, a state-level actor could potentially break it, but at least millions of consumer GPUs would be protected. Each GPU is a battleground: we want to raise the current 0% security above zero on as many GPUs as possible, first in firmware (and at the OS level), because updating online is easy, and then in hardware, which can bring much better security.
In the safest possible implementation, I imagine it like the Apple App Store (or Nintendo's online game shop): AI models become a bit like apps that run on the GPU internally, and NVIDIA looks after them (the GPUs ping NVIDIA's servers constantly, or at least every few days, to recheck the lists and update the security).
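For concreteness, here's a minimal sketch (in Python, standing in for firmware logic) of what such a check-in-plus-whitelist gate might look like. Everything here is an illustrative assumption: the names (`ALLOWLIST_MAX_AGE`, `may_load_model`, etc.) are hypothetical, not a real NVIDIA API, and real firmware would also verify the vendor's cryptographic signature on the list itself:

```python
import hashlib
import time

# Hypothetical sketch of a firmware-level "model allowlist" gate.
# Nothing here is a real NVIDIA API; all names are illustrative.

ALLOWLIST_MAX_AGE = 3 * 24 * 3600  # seconds; fail closed if list is >3 days old


def sha256_of_weights(weights: bytes) -> str:
    """Hash the model weights so the allowlist can pin exact binaries."""
    return hashlib.sha256(weights).hexdigest()


def allowlist_is_fresh(allowlist: dict) -> bool:
    """Treat a stale list as a failed vendor check-in and refuse to load."""
    return time.time() - allowlist["issued_at"] < ALLOWLIST_MAX_AGE


def may_load_model(weights: bytes, allowlist: dict) -> bool:
    """Only registered models on a fresh, vendor-supplied list may run."""
    if not allowlist_is_fresh(allowlist):
        return False  # couldn't re-verify with the vendor recently: fail closed
    return sha256_of_weights(weights) in allowlist["approved_hashes"]


# Demo: a freshly "downloaded" allowlist that registers one model.
fake_weights = b"\x00" * 1024  # stand-in for a real weights file
allowlist = {
    "issued_at": time.time(),  # in reality, signed and timestamped by the vendor
    "approved_hashes": {sha256_of_weights(fake_weights)},
}
print(may_load_model(fake_weights, allowlist))           # True: registered, fresh
print(may_load_model(b"unregistered model", allowlist))  # False: not on the list
```

The key property is that a stale list blocks loading by default, so a GPU that's been cut off from the vendor fails closed rather than open.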
NVIDIA would be highly motivated to make safety robust: they could buy back old hardware cheaply and sell new non-agentic GPUs (doubling their business), and take commissions the way Apple does (so every GPU becomes a service business for NVIDIA with constant cash flow). There would be free models, like free apps in the App Store, but every developer would at least be registered, not some anonymous North Korean hacker. They'd find a way to make things very secure.
The ultimate test is this: could NVIDIA sell their non-agentic, super-secure GPUs to North Korea without any risk? I think it's even possible to include simple self-destruct mechanisms that trigger on attempted tampering.
But let's not make the perfect the enemy of the good. Right now we have nukes in every computer (GPUs) that are 100% unprotected. Even blacklists would be better than nothing, and with new secure hardware we could really slow AI agents from spreading. Maybe we can be 50% sure we'll get 99% security in most cases, and it can keep getting better (the same way the first computers were buggy and completely insecure, but we gradually made them more and more secure).
Let's not give up because we aren't 100% sure we'll get 100% security. We'll probably never have that; we can only have a path toward it that seems reasonable enough. We need rich allies and incentives that are aligned with us and with safety.