I’ve recently started to read a textbook by Hilbert. Consider this a rookie’s attempt at formality, where a short paragraph of normal sentences would suffice to express the same idea. Feel free to mutate or mutilate it.
Assumptions
1. If someone has a Giant Anteater, it can be used to find flaws in OpenSSL. | Empirically demonstrated
2. If you can build a Giant Anteater, so can others. | Assumption of a shared capability front
3. If a Giant Anteater can find flaws in OpenSSL, it can find flaws in most other OSS of equal or lower quality. | Assumption of a generalized capability
4. OpenSSL is of high quality. | Assumption of a relevant instance
5. If enough people possess the ability to attack various high-quality OSS, many relevant systems will be targeted and compromised. | Assumption of the presence of malevolent intent in a subset of any large enough set of people
Derivation
Given: You can build a Giant Anteater. | premise
Others have a Giant Anteater. | apply 2
Others can find flaws in OpenSSL. | apply 1
Others can find flaws in OSS of equal or lower quality than OpenSSL. | apply 3
Others can find flaws in high-quality OSS. | apply 4
Either many relevant systems are already targeted and compromised or they will be as soon as enough people catch up. | apply 5
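Since this is already a rookie's attempt at formality, here is the derivation as a minimal Lean 4 sketch. Every name in it (Agent, HasGA, FindsFlawsHQ, and so on) is an illustrative stand-in I invented; "others" is flattened to a single arbitrary agent, and the temporal disjunction in the conclusion ("already compromised or will be soon") is collapsed into one plain proposition:

```lean
/-- The derivation above as one chained implication. All predicate and
    variable names are illustrative stand-ins, not anything official. -/
theorem giant_anteater_derivation
    {Agent : Type} (you others : Agent)
    (HasGA CanBuildGA : Agent → Prop)
    (FindsFlawsOpenSSL FindsFlawsLeqOSS FindsFlawsHQ : Agent → Prop)
    (ManyCompromised : Prop)
    -- assumption 1: empirically demonstrated
    (h1 : ∀ a, HasGA a → FindsFlawsOpenSSL a)
    -- assumption 2: shared capability front (flattened to one arbitrary other)
    (h2 : CanBuildGA you → HasGA others)
    -- assumption 3: generalized capability
    (h3 : ∀ a, FindsFlawsOpenSSL a → FindsFlawsLeqOSS a)
    -- assumption 4: OpenSSL is high quality, so "equal or lower quality
    -- than OpenSSL" covers high-quality OSS in general
    (h4 : ∀ a, FindsFlawsLeqOSS a → FindsFlawsHQ a)
    -- assumption 5: a malevolent subset exists, so capability implies compromise
    (h5 : FindsFlawsHQ others → ManyCompromised)
    -- given: you can build a Giant Anteater
    (given : CanBuildGA you) :
    ManyCompromised :=
  h5 (h4 others (h3 others (h1 others (h2 given))))
```

Mechanically, the proof term is just the five assumptions chained in order, which is the point: once the assumptions are granted, the conclusion is immediate.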
We do this to secure the software infrastructure of human civilization before strong AI systems become ubiquitous. Prosaically, we want to make sure we don’t get hacked into oblivion the moment they come online.
Given your existence proof (the Giant Anteater) and its implication (malevolent actors will acquire similar capabilities and apply them soon), your system and its successors seem to require rapid and widespread application. That in turn may require scaling up the underlying processes, possibly by distributing them to other trustworthy actors with benevolent intent and sufficient resources.