Here is a list of models of mine that overall paint the picture of “government has a lot of advantages over industry, enough that I expect government to be a lot better at keeping secrets”. I feel overall relatively confident in this conclusion, though in the absence of hard empirical data I probably wouldn’t go over 80% confidence.
Importance of obfuscation
The more I think about the difficulty of keeping secrets, the more the importance of active obfuscation becomes clear to me. From what I understand, in most military scenarios it is rarely the case that the real planned strategy did not reach the enemy; instead, the correct strategy was only one of many reports the enemy received, and the enemy was often unable to distinguish the correct reports from the false ones.
Another example is the registration of patents. U.S. patents are public, but as I understand it, many major companies register dozens of fake patents to prevent others from predicting their next product. The difficulty is often in distinguishing the real patent from all the fake and useless patents a company registers.
The key difficulty of obfuscation is that lying is difficult. It is hard to manufacture some false statement about reality that isn’t contradicted by some more readily available fact about reality, though this depends on the domain of the secret. I think there are two major categories of secrets, “strategic secrets” and “external secrets”.
Strategic secrets are secrets about which strategy you are going to pursue. In this case, obfuscation is often easy, and bluffing is a very common occurrence. It’s rare that someone has easily verifiable facts about your psychology or decision making process that help them rule out all but one of your potential strategies, and this kind of secret is the basis of most adversarial games and their resulting strategies.
External secrets are facts about external reality that you want to prevent from becoming known by someone else. These are much harder to keep secret, and coming up with good obfuscations is often difficult since to confuse your enemy you need to create a hypothesis that is plausible, but not easily falsifiable. Depending on the domain, this might be quite difficult.
An example of the difficulty of obfuscation or deception that I am familiar with is the manufacturing of fake speedrun videos. The speedrunning community has repeated experience with people trying to splice together video segments of individual levels into a run of the full game, or using emulators to make perfectly timed inputs at the correct moments. However, new ways of verifying the veracity of a speedrun are constantly being developed, including audio analysis to notice sharp transitions that would not occur in natural video footage, or analyses of the input limitations of standard controllers. Since new ways of identifying fake speedruns keep appearing, someone trying to fake a video will find it almost impossible to account for all of them and produce a video that holds up to scrutiny for a long period of time.
The difficulty of preventing information leaks
My current model of cybersecurity suggests that it is almost impossible to keep any information stored on any computer with public internet access private, and that even for air-gapped computers keeping information on them secure is still extremely difficult.
My current model is that in cybersecurity, offense is vastly easier than defense, and that you can likely acquire the relevant vulnerabilities to breach any specific machine for less than $10 million (based on the cost of a zero-day exploit for most operating systems being around $1 million). The primary obstacle for attackers is likely knowing precisely what information they are looking for, and identifying the precise targets that should be breached.
Even air-gapped computers are not safe from attacks, as the Stuxnet attack demonstrated: it compromised Iran's uranium-enrichment facility at Natanz even though the facility's control systems were completely cut off from the internet.
My model is that for any group larger than about 100 people, going through the effort necessary to keep digital information secret is likely impossible.
Cost of verification:
Another major determinant of the difficulty of keeping a secret is the cost for others to verify that it is indeed true. This is particularly important in the context of active obfuscation. Here are some concrete examples:
You have a spy-plane with a maximum altitude of 10 km that you want to keep secret. An enemy nation state receives that information, but also receives false reports, which you intentionally disseminated, claiming that your maximum altitude is actually 15 km, 8 km, or 5 km. The ability to make use of your secret is now determined by the enemy's ability to distinguish the correct hypothesis from the fictional ones. They basically have two ways of achieving this:
1. They have some knowledge of your strategy or psychology that allows them to infer which report is the correct one. E.g. if one of the reports was extracted from a highly guarded facility, and the others were suspiciously sent in via email from anonymous sources.
2. They can perform some experiment or inference that allows them to distinguish between the competing hypotheses. The cost of this can differ a lot between secrets. If they have access to a file of yours that is encrypted with a cryptographic key, the cost of verifying that a single key is the correct one is probably negligible, so finding the correct one among 5 candidate keys is likely trivial.
In the spy-plane example, your enemy might be able to leverage their models of your manufacturing process to rule out some hypotheses (maybe because they think it's currently impossible to build a spy-plane with a maximum altitude of 15 km). Or they might be able to combine other knowledge they have about your plane, like the shape of its wings, to rule out certain numbers. But they will almost certainly struggle a lot more with verifying or falsifying one of your maximum-altitude numbers than with verifying the correct cryptographic key for a stolen file of yours.
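To make the cost-of-verification contrast concrete, here is a minimal sketch. It uses a repeating-key XOR as a stand-in for a real cipher and a hypothetical known-plaintext marker (`MAGIC:`); both are assumptions for illustration, not a real cryptosystem. The point is only that when each check is cheap, picking the true key out of five candidates is one loop:

```python
import itertools

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Repeating-key XOR: the same function encrypts and decrypts.
    # Not secure; a stand-in for a real cipher, for illustration only.
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

# The attacker knows files of this kind start with a fixed header.
secret = b"MAGIC:max altitude 10km"
true_key = b"k3"
candidates = [b"k0", b"k1", b"k2", b"k3", b"k4"]  # one real key among decoys

ciphertext = xor_cipher(secret, true_key)

# Verification is one decryption attempt per candidate: n cheap checks.
recovered = [k for k in candidates
             if xor_cipher(ciphertext, k).startswith(b"MAGIC:")]
print(recovered)  # [b'k3']
```

Contrast this with the spy-plane altitude: there is no cheap decrypt-and-check operation, so falsifying each planted hypothesis requires expensive real-world inference or experiment.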
Power theory:
The ability of an actor to prevent the release of a secret is primarily determined by the negative consequences they can inflict on someone they catch trying to release the secret, weighed against the incentive that other actor has to get access to the secret.
In this model, governments have a variety of significant advantages over corporations. In particular, the government has access to guns and the ability to threaten violence, as well as the threat of serious prison time if you release governmental secrets.
Corporations usually only have access to civil litigation. While this provides some protection, and likely prevents whole companies from springing into existence in your country with no purpose but to steal your technology, it still leaves a lot of room for international competitors to steal your secrets with little fear of consequences, and it allows companies to take strategic risks with industrial espionage by simply budgeting resources for the consequences of potential litigation.
In some situations corporations have more power over each other, which presumably allows them to keep better secrets. If you have a highly integrated industry in which many companies rely on each other's products, then the threat of severing those ties might limit adversarial action like releasing secrets. An example here might be the graphics-card industry, which has only two big players (Nvidia and AMD), and many industry actors rely on good relations with one of these companies to function. As such, they are less likely to steal and use secrets from AMD or Nvidia, for fear of retribution from those organizations.
However, it might be the case that because of the delicacy of international relations, the ability for one country to punish another country for releasing secrets might actually be more limited than the ability for one international company to punish another international company. An example might be two international companies that compete in the same market, that have coordinated on pricing or splitting markets, and could renege on those agreements if they notice a defection by the other.
Concrete example: Mutually assured destruction via patents.
I have the cached model that many of the world's biggest software companies are well aware that they are constantly infringing on each other's patents, and that they coordinate around this by agreeing not to pursue those patent violations.
This also enables a pretty natural form of punishment in case one party clearly violates the terms of the agreement: either party can threaten to enforce its patents if the other decides to release its secrets.