Is true precommitment possible at all?
Human-wise this is an easy question, human will isn’t perfect, but what about an AI? It seems to me that “true precommitment” would require the AI to come up with a probability 100% when it arrives at the decision to precommit, which means at least one prior was 100% and that in turn means no update is possible for this prior.
I for one can’t agree with the point that transparency does any good in security assessment if we consider the implementation of a complex system (design has its own rules though). I believe you underestimate how broken a human mind really is.
Transparency == Priming
The team which does security review of the system will utterly fail the moment they get their hands on the source code, due to suggestion/priming effects.
- comments in source—I won’t even argue
- variable and function names will suggest what this part of code “is for”. E.g. code could say “it was (initially) written to compute X for which it is correct” except that later on it was also made to compute Y and Z where it catches fire 0.01% of the time.
- whitespace. E.g. missing braces because indentation suggested otherwise (yes, really, seen this one)
If you truly consider removing all metadata from the code—the code loses half of its transparency already, so “transparency” doesn’t quite apply. Any of that metadata will cause the security team to drop some testing effort. Those tests won’t even be designed/written, not to mention “caring enough to make an actual effort”. Otoh, if a program passes serious (more below) black-box testing, it doesn’t need to pass anything else. Transparency is simply unnecessary.
Hardcore solution to security-critical systems:
1. Have a design of the system (this has issues on its own, not covered here)
2. Get two teams of programmers
3. Have both teams implement and debug the system, separately, without any communication
4. Get both teams to review the other team’s binary (black-box). If bugs found goto 3
5. Obfuscate both binaries (tools easily available) and have both teams review their own (obfuscated) binary while believing it’s the other team’s binary. If bugs found, goto 3
6. At this point you could open up sources and have a final review (“transparency”) but honestly… what’s the point?
Yes, it’s paranoid. Systems can be split into smaller parts though—built and tested separately—so it’s not as monumental an effort as it seems.
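A constructed illustration of the whitespace pitfall from the list above (added here, not code from the comment or any real project): a hypothetical C tag validator whose comment, name and indentation all say the accept is guarded, beside the braced version the indentation implied, and a black-box check that only knows the contract and so is not primed by any of it.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical validator, constructed for this example. TAG_MAX and the
 * tag format are made up; only the contract matters for the black-box check. */
#define TAG_MAX 8

/* Intended contract: copy tag into out and return 1 if it fits, else return 0. */
int accept_tag_primed(const char *tag, char *out)
{
    size_t n = strlen(tag);
    /* reject overlong tags */
    if (n <= TAG_MAX)
        memcpy(out, tag, n + 1);
        return 1;   /* indented as if guarded, but unconditional: every tag is
                       accepted, an overlong one without ever being copied */
    return 0;
}

/* The same function with the braces the indentation implied. */
int accept_tag_fixed(const char *tag, char *out)
{
    size_t n = strlen(tag);
    if (n <= TAG_MAX) {
        memcpy(out, tag, n + 1);
        return 1;
    }
    return 0;
}

/* Black-box checks: they see neither names, comments nor indentation, only
 * the contract "overlong rejected, short accepted and copied". */
int overlong_is_rejected(int (*accept)(const char *, char *))
{
    char out[TAG_MAX + 1];
    return accept("0123456789", out) == 0;   /* 10 chars, over TAG_MAX */
}

int short_is_accepted(int (*accept)(const char *, char *))
{
    char out[TAG_MAX + 1];
    return accept("abc", out) == 1 && strcmp(out, "abc") == 0;
}
```

Both versions pass the short-tag check; only the braced one passes the overlong check. GCC’s -Wmisleading-indentation (part of -Wall since GCC 6) also flags the first version, which is the kind of mechanical, non-primed check that catches what a reader who trusts the layout will not.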