There are two other verification routines: looking over shoulders at internal documents, and banning new releases.
There is also just checking for the presence of the registered weights in the GPU's local RAM, which imposes a memory tax large enough that there is no space left to fit the intermediate values of the gradient. This requires that the monitors have code on the running machines, but periodic memory dumps can serve as a surveillance mechanism: by verifying that every model resident on the GPUs matches a model known to be already trained, the check blocks new initializations and thus new runs.
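A minimal sketch of the memory-dump check, under some loud assumptions: weight blobs have already been extracted from the dump, approved models are identified by an exact hash of their serialized weights (a real check would need fuzzy matching, since weights drift during training), and the registry of fingerprints is trusted. The names `fingerprint` and `audit_dump` are hypothetical, not from any real monitoring tool.

```python
import hashlib

def fingerprint(weights: bytes) -> str:
    """Hash a serialized weight blob so dumps can be compared cheaply."""
    return hashlib.sha256(weights).hexdigest()

def audit_dump(weight_blobs: list[bytes], registry: dict[str, str]) -> list[int]:
    """Return indices of blobs in a memory dump that match no registered
    model -- i.e. candidates for a fresh initialization or unauthorized run.

    `registry` maps known fingerprints to model names.
    """
    return [
        i for i, blob in enumerate(weight_blobs)
        if fingerprint(blob) not in registry
    ]

# Usage: a dump containing one approved model and one unknown blob.
approved = b"weights-of-an-already-trained-model"
fresh = b"freshly-initialized-weights"
registry = {fingerprint(approved): "approved-model-v1"}
print(audit_dump([approved, fresh], registry))  # flags index 1 only
```

The exact-hash version only catches bit-identical copies; the point of the sketch is the control flow (dump, fingerprint, compare against a registry), not the matching criterion.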
If there is weirder physics, such that FTL travel or relaxations of the laws of thermodynamics are possible, I assume the estimate increases. Then again, under those conditions there may not be a finite upper bound.