Ever since NIST put a backdoor in the Dual Elliptic Curve Deterministic Random Bit Generator (Dual_EC_DRBG), they have had the problem that many people no longer trust them.
I guess it might be possible to backdoor AI safety evaluations (e.g. suppose there is some known, very dangerous thing that the National Security Agency is doing with AI, and NIST deliberately rigs their criteria not to stop that very dangerous thing).
But setting aside that loss of public trust in them as an institution, NIST has done ground-breaking work in the computer security field in the past, so it wouldn't be so unusual for them to develop AI criteria.
The whole dual elliptic curve fiasco is possibly a lesson that criteria should be developed by international groups, because a single country's standards body, like NIST, can be subverted by its spy agencies.
Do you have quick links for the elliptic curve backdoor and/or any ground-breaking work in computer security that NIST has performed?
https://cacm.acm.org/research/technical-perspective-backdoor-engineering/
for example. Although that paper is more about, "Given that NIST deliberately subverted the standard, how did actual products also get subverted to exploit the weakness that NIST introduced?"
And the really funny bit is that NIST deliberately subverted the standard so that an organization that knew the master key (probably the NSA) could break the security of the system. And then, in the actual implementation, the master key was changed so that someone else could break into everyone's systems. And, officially at least, we have no idea who that someone is. Probably the Chinese government. Could be organized crime, though that's probably unlikely.
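To see why a master-key backdoor in a random number generator is so devastating (and so easily re-keyed by whoever swaps in their own constant), here is a toy analogue in Python. It uses ordinary modular exponentiation instead of elliptic-curve point multiplication, and every constant is made up; the real Dual_EC_DRBG operates on curve points and truncates x-coordinates, none of which is modeled here. The point is only the structural trick: the two public constants are secretly related by an exponent d, and anyone holding d can recover the generator's internal state from a single output.

```python
# Toy sketch of a Dual_EC_DRBG-style backdoor, using the multiplicative
# group mod a prime instead of an elliptic curve. All parameters are
# illustrative inventions, NOT the real standard's constants.
MOD = 2**61 - 1       # group modulus (a Mersenne prime)
d = 123456789         # the secret "master key" (would be too small in reality)
Q = 3                 # first public constant
P = pow(Q, d, MOD)    # second public constant; P = Q^d is the hidden relation

def drbg_step(state):
    """One round: emit an output and advance the internal state."""
    output = pow(Q, state, MOD)      # r = Q^s  (what the user sees)
    next_state = pow(P, state, MOD)  # s' = P^s (kept secret)
    return output, next_state

# An honest user generates "random" outputs from a secret seed.
state = 42
out1, state = drbg_step(state)
out2, state = drbg_step(state)

# An attacker who knows d recovers the internal state from out1 alone,
# because P^s = (Q^d)^s = (Q^s)^d = out1^d:
recovered_state = pow(out1, d, MOD)
predicted_out2, _ = drbg_step(recovered_state)
assert predicted_out2 == out2  # every future output is now predictable
```

Whoever knows d needs just one observed output to predict the rest of the stream forever, and whoever ships the product can silently substitute their own P (with their own d), which is exactly the re-keying described above.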
The movie Sneakers had this as its plot years ago: the US government puts a secret backdoor in everyone's computer systems, and then someone steals the key to that backdoor.
But anyway, yes, it is absolutely NIST's fault that they unintentionally gave the Chinese government backdoor access to US government computers.