Training frontier models needs a lot of chips, so situations where “a chip notices something” (and any self-destruct-type responses) are unimportant to an attacker: they can test on a few chips and try a different approach next time. Conversely, complicated ways of circumventing verification or resetting clocks are not useful if they are too artisanal: they need to be applied to chips in bulk, and those chips then need to be able to work for weeks in a datacenter without further interventions (beyond whatever can be built into the datacenter itself).
AI accelerator chips have 80B+ transistors, far more than an instance of certificate-verification circuitry would need, so you can place multiple copies of it (and have them regularly recheck the certificates). There are EUV-pitch metal interconnects several layers deep within a chip; you’d need to modify many of them all over the chip without damaging the layers above, so I expect this to be completely infeasible to do for 10K+ chips, on general principle (rather than from specific knowledge of how any of this works).
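A toy sketch of the redundancy idea: several independent copies of the checker must all pass before the chip allows compute, so tampering with one instance isn’t enough. The names and the HMAC stand-in are hypothetical; real on-die verification would use asymmetric signatures in dedicated circuitry.

```python
import hashlib
import hmac

# Hypothetical stand-in for the license issuer's key (a real scheme would
# use a public-key signature, with only the public key baked into the chip).
SECRET = b"issuer-key"

def make_cert(payload: bytes) -> bytes:
    """Issuer side: append an authentication tag to the payload."""
    return payload + hmac.new(SECRET, payload, hashlib.sha256).digest()

def verify_instance(cert: bytes) -> bool:
    """One on-die checker instance validating the certificate."""
    payload, tag = cert[:-32], cert[-32:]
    return hmac.compare_digest(tag, hmac.new(SECRET, payload, hashlib.sha256).digest())

def chip_allows_compute(cert: bytes, n_instances: int = 3) -> bool:
    # All independent copies of the checker must agree; a single tampered
    # instance then can't unlock the chip on its own.
    return all(verify_instance(cert) for _ in range(n_instances))

good = make_cert(b"license:chip42:epoch7")
bad = good[:-1] + bytes([good[-1] ^ 1])  # flip one bit of the tag
```

In this toy model, disabling the scheme means defeating every instance, which is what makes scattering copies across 80B+ transistors attractive.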
For clocks or counters: I’d guess AI accelerators normally don’t have any rewritable persistent memory at all, and I don’t know how hard it would be to add some in a way that makes repeatedly resetting it too complicated to automate.
My guess is that AI accelerators will have some difficult-to-modify persistent memory, based on similar chips having it, but I’m not sure whether it would be on the same die. I wrote more about how a firmware-based implementation of Offline Licensing might use H100 secure memory, clocks, and secure boot here: https://arxiv.org/abs/2404.18308
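As a rough illustration of how a license could be bound to a monotonic counter in persistent memory: the issuer signs an expiry count along with a device ID, and firmware refuses to run once the counter passes it. This is a hypothetical sketch, not the scheme from the linked paper; the field names and HMAC signing are placeholders.

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"hypothetical-issuer-key"  # stand-in for the licensor's signing key

def sign_license(device_id: str, expiry_count: int) -> bytes:
    """Issuer side: bind a license to a device and a counter-based expiry."""
    body = json.dumps({"device": device_id, "expiry": expiry_count}).encode()
    return body + b"." + hmac.new(ISSUER_KEY, body, hashlib.sha256).hexdigest().encode()

def firmware_check(license_blob: bytes, device_id: str, monotonic_counter: int) -> bool:
    """Chip side: allow compute only with a valid, unexpired license."""
    body, _, tag = license_blob.rpartition(b".")
    expected = hmac.new(ISSUER_KEY, body, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(tag, expected):
        return False  # not signed by the issuer
    fields = json.loads(body)
    # The chip runs only while its monotonic counter is below the licensed expiry.
    return fields["device"] == device_id and monotonic_counter < fields["expiry"]
```

The counter only ever increases, so a license expires on its own and the issuer must keep re-licensing the chip, which is the lever Offline Licensing relies on.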
Thanks! Could you say more about your confidence in this?
Yes, specifically I don’t want an attacker to reliably be able to reset it to whatever value it had when it sent the last challenge.
If the attacker can only reset this memory to 0 (for example, by unplugging it), then the chip can notice that’s suspicious.
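A minimal model of that reset detection: the chip treats any backwards movement of the counter as tampering, so a reading of 0 after higher values have been seen gets flagged. Names here are illustrative, not from any real firmware.

```python
class CounterMonitor:
    """Toy model of a chip watching its persistent monotonic counter."""

    def __init__(self):
        self.last_seen = None

    def check(self, counter_value: int) -> bool:
        """Return True if the reading looks legitimate."""
        if self.last_seen is not None and counter_value < self.last_seen:
            # Counter went backwards: suspicious, e.g. the memory was
            # power-cycled back to 0 by an attacker.
            return False
        self.last_seen = counter_value
        return True
```

The asymmetry is the point: an attacker who can only zero the memory can’t forge the specific value the chip expects, so the reset itself becomes the tell.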
Another option is a reliable wall clock (though this seems less promising).
I think @jamesian told me about a reliable clock (in the sense of the clock signal used by chips, not a wall clock); I’ll ask.