Context: As part of our work on Open Questions, the LessWrong team has been reaching out to various researchers we know and asking them about questions they would be interested in getting answered.
This question was given to us by Ryan Carey, and some of the answers below are the result of us spending a day trying to answer it on a private LessWrong instance, copied over to allow other people to contribute and read what we wrote.
Here is a list of models of mine that overall paint the picture of “government has a lot of advantages over industry, enough that I expect government to be a lot better at keeping secrets”. I feel overall relatively confident in this conclusion, though in the absence of hard empirical data I probably wouldn’t go over 80% confidence.
Importance of obfuscation
The more I think about the difficulty of keeping secrets, the more important active obfuscation seems to me. From what I understand, in most military scenarios it is rare that the real planned strategy failed to reach the enemy at all; instead, the correct strategy was only one of many reports the enemy received, and the enemy was often unable to distinguish the correct reports from the false ones.
Another example is the registration of patents. U.S. patents are public, but as I understand it, many major companies register dozens of fake patents to prevent others from predicting their next product. The difficulty is often in distinguishing the real patent from all the fake and useless patents a company registers.
The key difficulty of obfuscation is that lying is difficult. It is hard to manufacture a false statement about reality that isn’t contradicted by some more readily available fact, though how hard depends on the domain of the secret. I think there are two major categories of secrets, “strategic secrets” and “external secrets”.
Strategic secrets are secrets about which strategy you are going to pursue. In this case, obfuscation is often easy, and bluffing is a very common occurrence. It’s rare that someone has easily verifiable facts about your psychology or decision-making process that let them rule out all but one of your potential strategies, and this kind of secret is the basis of most adversarial games and their resulting strategies.
External secrets are facts about external reality that you want to prevent from becoming known to someone else. These are much harder to keep secret, and coming up with good obfuscations is often difficult, since to confuse your enemy you need to create a hypothesis that is plausible but not easily falsifiable. Depending on the domain, this might be quite difficult.
An example of the difficulty of obfuscation or deception that I am familiar with is the manufacturing of fake videos for speedrunning purposes. The speedrunning community has repeated experience with people splicing together video segments of separate attempts into a single run of the full game, or using emulators to make perfectly timed inputs at the correct moments. However, new ways to verify the authenticity of a speedrun are constantly being developed, including audio analysis to notice sharp transitions that would not occur in natural video footage, or analyses of the input limitations of standard controllers. Since new ways of identifying fake speedruns keep appearing, someone trying to fake a video will find it almost impossible to account for all of them and produce a video that will hold up as genuine for a long period of time.
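One of the controller-limitation checks gestured at above can be sketched in a few lines. This is a hypothetical illustration, not a real verification tool: the input-log format and button names are assumptions, but the underlying fact (many d-pads physically cannot register Left+Right or Up+Down on the same frame) is a real check used to flag emulator-assisted runs.

```python
# Sketch of a controller-limitation check: flag frames whose inputs are
# physically impossible on a standard d-pad. Input format is assumed:
# a list of sets, one set of held buttons per frame.

def suspicious_frames(input_log):
    # Button pairs that a standard d-pad cannot register simultaneously.
    impossible = [{"LEFT", "RIGHT"}, {"UP", "DOWN"}]
    return [i for i, held in enumerate(input_log)
            if any(pair <= held for pair in impossible)]  # subset test

# A four-frame log; frame 2 holds Left and Right at once.
log = [{"RIGHT"}, {"RIGHT", "A"}, {"LEFT", "RIGHT"}, {"UP"}]
print(suspicious_frames(log))  # [2]
```

A single flagged frame is strong evidence of emulator input; a faker would have to know and respect every such hardware constraint in advance.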
The difficulty of preventing information leaks
My current model of cybersecurity suggests that it is almost impossible to keep private any information stored on a computer with public internet access, and that even for air-gapped computers, keeping information secure is still extremely difficult.
My current model is that in cybersecurity, offense is vastly easier than defense, and that you can likely acquire the relevant vulnerabilities to breach any specific machine for less than $10 million (based on the cost of a zero-day exploit for most operating systems being around $1 million). The primary obstacle for an attacker is likely knowing precisely what information they are looking for, and identifying the precise machines that should be breached.
Even air-gapped computers are not safe from attacks, as the Stuxnet attack demonstrated: it compromised Iran’s uranium enrichment facilities even though they were completely cut off from the internet.
My model is that for any group larger than 100 people, keeping digital information reliably secret is likely impossible.
Cost of verification
Another major determinant of the difficulty of keeping a secret is the cost for others to verify that it is indeed true. This is particularly important in the context of active obfuscation. Here are some concrete examples:
You have a spy-plane with a maximum altitude of 10km that you want to keep secret. An enemy nation-state receives that information, but also receives false reports you intentionally disseminated claiming that your maximum altitude is actually 15km, 8km, or 5km. The value of your secret is now determined by the enemy’s ability to distinguish the correct hypothesis from the fictional ones. They basically have two ways of achieving this:
1. They have some knowledge of your strategy or psychology that allows them to infer which report is the correct one, e.g. if one of the reports was extracted from a highly guarded facility, and the others were suspiciously sent in via email from anonymous sources.
2. They can perform some experiment or inference that allows them to distinguish between the competing hypotheses. The cost of this can differ a lot between secrets. If they have access to a file of yours that is encrypted with a cryptographic key, the cost of verifying that a single key is the correct one is probably negligible, so finding the correct one among 5 candidate keys is likely trivial.
In the spy-plane example, your enemy might be able to leverage their models of your manufacturing process to rule out some hypotheses (maybe because they think it’s currently impossible to build a spy-plane with a maximum altitude of 15km). Or they might be able to combine other knowledge they have about your plane, like the shape of its wings, to rule out certain numbers. But they will almost certainly struggle a lot more with verifying or falsifying one of your maximum-altitude numbers than with verifying the correct cryptographic key for a stolen file.
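To make the asymmetry concrete, here is a minimal sketch of the cheap end of verification. The XOR “cipher” and the known file header are assumptions for illustration only (real ciphers differ, and XOR is not secure), but the point carries over: when a candidate key can be checked with one decryption attempt, five candidates cost five attempts, whereas falsifying an altitude claim might require building a rival aircraft.

```python
# Toy sketch: verifying which of several candidate keys is real is cheap,
# because one decryption attempt per key settles it.
# (Hypothetical XOR "cipher" for illustration only -- not secure.)

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # XOR each byte of the data with the repeating key (symmetric op).
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

HEADER = b"ALTREPORT"  # assumed known file-format header
secret = HEADER + b" max altitude: 10km"
true_key = b"k3"
ciphertext = xor_crypt(secret, true_key)

# The attacker holds the stolen file and five leaked candidate keys;
# only the real key reproduces the known header on decryption.
candidates = [b"a1", b"b7", b"k3", b"x9", b"q4"]
verified = [k for k in candidates
            if xor_crypt(ciphertext, k).startswith(HEADER)]
print(verified)  # [b'k3']
```

There is no analogous one-line check for “this plane really tops out at 10km”, which is why external secrets with expensive verification are easier to hide among decoys.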
Power theory
The ability of an actor to prevent the release of a secret is primarily determined by the negative consequences they can inflict on others caught trying to release it, and by the incentive those others have to get access to the secret.
In this model, governments have a variety of significant advantages over corporations. In particular, the government has access to guns and the ability to threaten violence, and can threaten serious prison time for anyone who releases governmental secrets.
Corporations usually only have access to civil litigation. While this provides some protection, and likely prevents whole companies from springing into existence in your country with no purpose but to steal your technology, it still leaves a lot of room for international competitors to steal your secrets with little fear of consequences, and allows companies to take strategic risks with industrial espionage by simply budgeting for the consequences of potential litigation.
In some situations corporations have more power over each other, which presumably allows them to keep better secrets. If you have a highly integrated industry in which many companies rely on each other’s products, then the threat of severing those ties might limit adversarial action like releasing secrets. An example here might be the graphics-card industry, which has only two big players (Nvidia and AMD), and many industry actors rely on good relations with one of these companies to function. As such, they are less likely to steal and use secrets from Nvidia or AMD, for fear of retribution from those organizations.
However, it might be the case that because of the delicacy of international relations, the ability for one country to punish another country for releasing secrets might actually be more limited than the ability for one international company to punish another international company. An example might be two international companies that compete in the same market, that have coordinated on pricing or splitting markets, and could renege on those agreements if they notice a defection by the other.
Concrete example: Mutually assured destruction via patents
I have the cached model that many of the world’s biggest software companies are well aware that they are constantly infringing on each other’s patents, and that they coordinate around this by agreeing not to sue over those infringements.
This also enables a pretty natural form of punishment if one party clearly violates the terms of the agreement: if one party releases the other’s secrets, the other can threaten to enforce its patents.
(Comment by Ryan)
A strong claim...
My naive expectation is that government has been more successful. This expectation rests on three things:
1. Industry is only interested in commercially relevant secrets. Government is interested in commercially relevant secrets, and also a variety of non-commercial secrets like those with military applications. Therefore a government is more likely to try to keep any given technological secret than a company is, because many such secrets are not commercially viable.
2. Historically, powerful technological secrets have been developed explicitly under government authority. In the United States, these have been government laboratories or heavily regulated companies that yield the secrets to the government and don’t share them with industry. Comparatively few such secrets are developed under the auspices of the private sector alone (unless the private sector has been much more successful in keeping them secret than I expect).
3. Governments usually have capabilities that industries lack, like powers of investigation and violence. They can and do routinely use these capabilities in the protection of secrets. It is rare for a commercial entity to have anything like that capacity, and even if they do there is no presumption of legitimacy the way there is for governments.
So the government is interested in more kinds of powerful technological secrets, and originates most of them, while having and using additional tools for keeping them secret.
Following on assumption #1, it feels worth addressing the question of incentives. For example, a corporation only has an incentive to invest in security in proportion to the profits it expects from the secret or secrets in question. Further, corporations always have an incentive to cut costs, and security is a notorious target for cuts because its relationship to profits is poorly understood, and cost-cutting judgments are made on exactly that basis.
By contrast, the government tends to have security protocols first and then decide what to protect with them later. The United States is notorious for classifying huge amounts of even mundane information; classification periods last longer than the average company exists (~25 years is common, frequently longer). The trend is to overprotect secrets, regardless of how valuable they actually are.
Because these incentives are different, it might be worthwhile to break the question up along a few different criteria. For example, suppose we compared government protection of important military secrets with something of similar import to a corporation, like trade secrets of their core product. Alternatively, we could break the question down by method and ask how each group has secured their technological secrets, and then compare between methods. This wouldn’t address the question of “is the risk greater if it is Google or DARPA who cracks AGI first” but it would help us more accurately assess such risks and perhaps help with safety-related recommendations.
The grandfather of a college friend had a technique for producing exceptionally smooth ball bearings that he preferred to keep secret rather than patent. He had two employees, one of whom was his son-in-law.
I suspect there are a lot of small cases like this, because it would be weird for me to know the only one.