Given this as a foundation, I wonder if it’d be possible to make systems that report potentially dangerously high concentrations of compute, places where an abnormally large amount of hardware is running abnormally hot, in an abnormally densely connected network (where members are communicating with very low latency, suggesting that they’re all in the same datacenter).
Could it be argued that potentially dangerous ML projects will usually have that characteristic, and that ordinary distributed computations (e.g., multiplayer gaming) will not? If so, a system like this could expose unregistered ML projects without imposing any loss of privacy on ordinary users.
I think this depends a lot on the use case. I envision that for the most part this would be used on large, known compute clusters, as an independent check on compute usage and a failsafe. In that case it will be pretty easy to distinguish from other uses like gaming or cryptocurrency mining. If we’re in the regime where we’re worried about sneaky efforts to assemble lots of GPUs under the radar and do ML with them, then I’d expect there would be pattern-analysis methods that could be used as you suggest, or the system could be set up to feed back more information than just computation usage.
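As a rough illustration of the kind of pattern analysis described above, here is a minimal sketch of a flagging rule combining the three signals from the question: many devices, sustained high utilization ("running hot"), and dense low-latency interconnect suggesting co-location. All field names and thresholds are hypothetical assumptions for illustration, not part of any real monitoring system.

```python
from dataclasses import dataclass

@dataclass
class DeviceReport:
    """Hypothetical telemetry from one reporting device (illustrative only)."""
    utilization: float          # fraction of time compute units are busy, 0-1
    peer_latencies_ms: list     # round-trip latency to other reporting devices

def flag_suspicious_cluster(reports, min_devices=1000,
                            hot_utilization=0.9, datacenter_latency_ms=1.0):
    """Flag a group of devices as a possible unregistered training cluster:
    abnormally many devices, mostly running hot, with a dense network of
    datacenter-fast links between them. Thresholds are made-up placeholders."""
    if len(reports) < min_devices:
        return False
    hot = [r for r in reports if r.utilization >= hot_utilization]
    if len(hot) / len(reports) < 0.8:   # most of the group must be running hot
        return False
    # "Densely connected": a large share of peer links are datacenter-fast,
    # suggesting the devices sit in the same facility rather than spread
    # across home networks (as in multiplayer gaming).
    fast_links = sum(1 for r in hot
                       for lat in r.peer_latencies_ms
                       if lat <= datacenter_latency_ms)
    total_links = sum(len(r.peer_latencies_ms) for r in hot)
    return total_links > 0 and fast_links / total_links >= 0.9
```

On this sketch, a gaming swarm with the same device count and utilization would not trigger the flag, because its peer latencies (tens of milliseconds over the public internet) fail the co-location test. A real system would of course need far more careful statistics than fixed thresholds.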