Not to argue on any specific points yet, but I think the main difference in approach between you and me is that mine is based a lot more on past empirical data than on theoretical speculation about the best way to do things.
I agree that a theoretically ideal opsec guide would cover multiple books. It is difficult to educate someone on that quickly. A shorter guide is actually better, so as not to overwhelm someone with info. That being said, I’d highly encourage you if you want to write it.
This was a footnote, but I think I should actually move it up top.
Both you and I are distorting the landscape with this whole conversation.
Most leaks are humdrum, day-to-day, below-the-fold or inside-pages “an anonymous source in department X informs us that...” cases. Often the information involved isn’t classified. If it is, it’s not a big enough deal to put unlimited resources into it.
Those cases don’t get the level of investigation we’re talking about, with all the stops pulled out. Journalists are able to protect those sources.
But those cases add up and can have impact over time.
Honestly a guide for those cases might be more useful than a guide that assumes you’re going to be so hot you have to flee the country. But even those cases are complicated.
And you may guess wrong, in either direction, about how hot your disclosure will be.
Not to argue on any specific points yet, but I think the main difference in approach between you and me is that mine is based a lot more on past empirical data than on theoretical speculation about the best way to do things.
I’m going to have to dispute this.
First, a handful of cases may be “empirical”, but it’s misleading to call them “data”. One reason I reacted to what you posted was that it was so full of “theory” derived from relatively narrow and shallow information.
Second, I watched all of those cases in real time, and have also watched a lot of relevant stuff that wasn’t in the news, or at least wasn’t on the front page, because it wasn’t high-stakes “whistleblowing”. There are impactful leaks that aren’t at the level of the Snowden drop. Beyond that, tons of relevant things play out every day in non-leak-related contexts.
No, I haven’t gone through and systematically analyzed everything I could find in a single process. But I don’t think you have either. How did you identify the cases you thought about? Just off the top of my head, where’s Mark Klein[1]? Where’s Deep Throat[2]? Did you use a systematic and relatively unbiased method of finding the “data” you’re relying on?
What I take from the [anec]data is that:
Nobody so far has taken what I’d think of as decent OPSEC measures in the kind of very high-stakes, headline-grabbing, usually-clearly-illegal whistleblowing that makes you a truly major target[3].
We therefore don’t know anything about what would happen to anybody who did. You may indeed get caught even with the best feasible level of OPSEC, but we have no experience with that case. We know the risk is real, but have no defensible way to quantify it.
The reason we don’t see cases with good OPSEC may be that really good OPSEC is so constraining that it keeps leaks from happening at all[4].
If you make a truly high-profile leak under your own name, or if you get de-anonymized, your life will definitely be turned upside down, probably including prison time. If you flee the country or whatever, you will still not have anything resembling a normal life. It’s not obvious to me that there’s a lot of difference in consequences between deliberately disclosing your identity and having it found by investigation.
At the same time, doing it under your own name makes you more credible.
You seem to have arrived at (4) and maybe (5), too, but I’m not convinced that the measures you suggest for reducing the disruption are going to have predictable positive effects. What happened to Julian Assange may or may not have been better than prison[5]. Snowden did better than prison, I think, but not great[6].
I don’t actually think it’s completely futile to try to do this stuff with lasting anonymity, whereas I think you may believe that it usually is. But to do that, you have to be a certain kind of person, with a certain kind of knowledge, and a certain mindset. If you’re not that person, you may have to become that person first. Which, by the way, can make you suspicious in itself.
On the other hand, doing it openly also requires being a certain kind of person. Instead of the ability to do OPSEC, you need the ability to handle blowing up your life, and possibly the lives of people who depend on you. Also not easy to become.
I agree that a theoretically ideal opsec guide would cover multiple books. It is difficult to educate someone on that quickly. A shorter guide is actually better, so as not to overwhelm someone with info.
If they’re overwhelmed, they will fail. If they rely on a short guide, they will also fail.
I believe that step-by-step recipes are actively dangerous[7]. They create false expectations, giving people the idea that they know what they need to do, without equipping them to understand the limitations of the approach, or to notice when it needs to change to match individual, unforeseeable circumstances.
If you give people counsel about risks, you need to make them understand those risks, and exactly how and how much your guidance changes those risks. Short guides can’t even do that much, let alone do much beyond the most obvious things to mitigate those risks.
It might actually be safer to force people to think for themselves. Even the sort of OPSEC that keeps you from getting caught before you leak takes a certain mindset.
For every action you take, you have to think “Can this be observed? Recorded? How can its effects be seen? What can be inferred from it? What does it look like? Why would people (or nowadays computers) think I might be doing it? How will they react? Might it make me interesting? What other information can be connected with it? Who has that information? Will it be noticed immediately? Will an after-the-fact investigation find it? Is there a better way to get the same result?”.
Once you’ve internalized those questions, answering them takes a lot of factual knowledge, and the particular knowledge you need is specific to the environment you’re working in. But at least if you’re asking the questions, you have a fighting chance of identifying all of the knowledge.
I sometimes give individual advice on things like Tor. I might as well have a keyboard macro for “If the stakes are high (which I don’t know because there is no reason for me to know why you want to do this), then don’t try this until you understand <long-list> at a much deeper level, or you will fail”. But rarely do I need to explain everything about <long-list> itself. There’s usually a manual. If they can’t understand the manual once they’re told they need to, they won’t understand what I say, either.
That being said, I’d highly encourage you if you want to write it.
I could maybe write a study outline: a checklist of things you might need to understand. I’m not volunteering, just saying I’d be capable of it and it might be useful. If I wrote a comprehensive book (or series of books), you’ve correctly pointed out that nobody would read it. It’d also have to be maintained, or it’d have dangerous inaccuracies scarily soon. Things change all the time.
Klein wasn’t an insider. They(TM) probably didn’t have a sense of having been betrayed (which matters a lot to the type of people who tend to get into military and intelligence jobs, but I’m not so sure about AI companies). They(TM) also apparently didn’t feel they had a good legal way to go after Klein. But, still, he did make a high-impact disclosure, under his own name, and skate. He might not have skated under a more vindictive administration, though. Some people, including more than one person we’re probably both thinking of right now, feel betrayed as a default state, and are willing to upend people’s lives without any real legal basis for it. And that can seep into organizational culture.
Deep Throat stayed anonymous, even though Woodward and Bernstein knew who he was. The technological, cultural, and even legal landscape may have changed enough that Deep Throat isn’t on point, but you don’t say why you didn’t include him.
Which may be why Snowden, who seemed to have a realistic inside view, didn’t really try to remain anonymous post-leak, and did in fact just flee the US (at which he very nearly failed). Or maybe he did it because he knew he’d be more credible with his name on the leak. But that should still be understood as him deliberately sacrificing himself.
It turned into a big pain in the butt for the government of Ecuador, too. I wonder whether other governments watched and took it as a lesson not to take in foundlings like that.
… at least unless they’re so tightly focused on a single set of circumstances that they nearly have to be written for a specific individual. I might be able to write you a somewhat digestible recipe for setting up X type of Web site as a Tor hidden service. At least if it were a simple kind of site. I’d feel better about it if the stakes weren’t too high. I could not write a reasonably digestible recipe for anonymously exfiltrating and disclosing just any data from just any highly paranoid organization.
I actually think we’re on the same page about a lot of what you’re saying.
But those cases add up and can have impact over time.
I agree with this. It is valuable for me to write a guide on those cases too. If I have to write only one guide I’ll write the one I’m writing now. But yes both are valuable.
No, I haven’t gone through and systematically analyzed everything I could find in a single process. But I don’t think you have either. How did you identify the cases you thought about? Just off the top of my head, where’s Mark Klein[1]? Where’s Deep Throat[2]? Did you use a systematic and relatively unbiased method of finding the “data” you’re relying on?
I have some notes on both Mark Klein and Deep Throat, but haven’t published them. I do plan to publish a proper database of at least the top 20 or so leaks (and ideally a lot more) with fact-checked information.
Short version is Mark Klein did not release classified information, and Mark Felt (Deep Throat) operated over 50 years ago in a different technological and legal environment. The NSA did not have the same level of technical capabilities, nor had it bureaucratically hacked the FBI back then.
I don’t actually think it’s completely futile to try to do this stuff with lasting anonymity, whereas I think you may believe that it usually is. But to do that, you have to be a certain kind of person, with a certain kind of knowledge, and a certain mindset. If you’re not that person, you may have to become that person first. Which, by the way, can make you suspicious in itself.
On the other hand, doing it openly also requires being a certain kind of person. Instead of the ability to do OPSEC, you need the ability to handle blowing up your life, and possibly the lives of people who depend on you. Also not easy to become.
The main reason I’m pessimistic about anonymity in this case is the extent to which the NSA tracks who downloads every document. The list of suspects is small enough that everyone can be investigated, which means your best bet might be to attempt to falsely incriminate someone else in that pool. Apart from that having questionable morals, it’s also really hard to do for someone who lacks experience in this type of work.
If you are giving up on anonymity, I agree you have to be able to handle blowing up your life in many respects. I’ve recently been reading up on the possibility of taking a spouse or children with you when you whistleblow. I do think this guide needs more resources on mental health.
If you give people counsel about risks, you need to make them understand those risks, and exactly how and how much your guidance changes those risks. Short guides can’t even do that much, let alone do much beyond the most obvious things to mitigate those risks.
It might actually be safer to force people to think for themselves. Even the sort of OPSEC that keeps you from getting caught before you leak takes a certain mindset.
This is valuable input, thanks for sharing this. I do understand opsec is a mindset, not just a set of rules.
Someone who has the mindset still needs a clear set of rules though. And I think it’s better for a team of experts from the outside to write that set of rules, instead of the person themselves. The person can then attempt to adapt the rules to their circumstances.
I think it would be extremely valuable if there were fast ways to transmit this mindset to someone who does not already have the mindset. I’d love more input on how to do this. I will also think more about it myself. Most people at OpenAI/Deepmind/Anthropic are software developers, and hence teaching them the mindset can be done more quickly, assuming they don’t have it already.
They create false expectations, giving people the idea that they know what they need to do, without equipping them to understand the limitations of the approach, or to notice when it needs to change to match individual, unforeseeable circumstances.
I think the ethics of this ultimately come down to what the probability of success is. I think if you’re going the Snowden route of fleeing the country, you can afford to make some opsec mistakes and still escape with your life. The bar is nowhere near as high as it would be if I were writing a guide for someone trying to stay anonymous within the US. My guide explicitly says you will be doxxed eventually; it’s just a question of how soon.
There’s usually a manual. If they can’t understand the manual once they’re told they need to, they won’t understand what I say, either.
I have seen some of these manuals on dark web. You can send me anonymous email at samuel.da.shadrach@gmail.com if you want to send me any. (I don’t operate an anonymous email myself for this circumstance, I think it could give me false sense of security if I create one and then post it here under my real name.)
The main reason I don’t want to tell people to just refer to a manual is that none of the manuals are tailor-made for this circumstance; they’re usually written by someone with a different circumstance in mind, such as a drug vendor or cyberhacker or whatever.
And a whistleblower has limited time and energy; they can’t go through 10 different manuals and carefully pick and choose what applies to their situation. They need a clear set of rules ready to go, IMO.
It’d also have to be maintained, or it’d have dangerous inaccuracies scarily soon. Things change all the time.
Agree. My platonic ideal is that at least once a month, all the experts involved in writing the guide go through and approve any changes to it.
Short version is Mark Klein did not release classified information,
Right. And I think at least part of the reason he got away with it was that particular people (and courts) at the time felt constrained by the rule of law. This may not always apply.
The NSA did not have the same level of technical capabilities, nor had it bureaucratically hacked the FBI back then.
I’m going to respond to this at more length, but I’ve moved most of it down because I think it’s a lower priority. The main point I want to make is that there’s a misleading attitude out there about the NSA, and letting your thoughts be all “NSA” may not be healthy.
I agree that everybody who’s watching will cooperate on something like what we’re talking about here.
The list of suspects is small enough that everyone can be investigated,
That depends on circumstances, though. Sometimes yes, sometimes no. Some things you might want to leak may have fairly wide internal distribution. And sometimes there’s a way to get something outside of the normal channels. And sometimes they really don’t want to risk going after the wrong person, so those investigations need to be more airtight.
But surely not always.
which means your best bet might be to attempt to falsely incriminate someone else in that pool. Apart from that having questionable morals, it’s also really hard to do for someone who lacks experience in this type of work.
Lots of people get caught trying to do that.
Someone who has the mindset still needs a clear set of rules though. And I think it’s better for a team of experts from the outside to write that set of rules, instead of the person themselves. The person can then attempt to adapt the rules to their circumstances.
Well, what kind of rules do you want?
That litany of questions I posted could be considered as a checklist. But I’m not sure that asking them equips you to recognize when you have or haven’t actually answered them.
I might add something about thinking about the other side’s constraints and available moves, and about not being comfortable that you understand something until you’ve confirmed how it works at the “gears level”. That’s where all the factual knowledge comes in.
I think it would be extremely valuable if there were fast ways to transmit this mindset to someone who does not already have the mindset.
Other than stuff like those questions, I don’t know a way. I think it’s partly about seeing these sorts of games played a lot of ways, and partly about the innate ability to think through all the possibilities with the right kind of paranoia. Paranoia that’s skeptical of itself.
In some sense it’s like the skill of finding all the cases in a mathematical proof, except that in practice you also have to be able to decide you’ll live with a “counterexample” if it’s improbable enough. Which, of course, means judging how improbable it actually is. It’s easy to fall into error there, in either direction.
… and of course this is all “theoretical” on my part in that I’ve never actually done it when anybody was really going to go after me.
I think the ethics of this come down to ultimately what the probability of success is.
… and to whether the person at risk knows that probability...
Most of that stuff is drivel. It tends to be a total mishmash of truths, half-truths, unsupported folklore, misunderstandings, outright lies, and regurgitated disinformation, collected by people with no coherent internal model of How Things Work to measure any of it against. Then it gets filtered through whatever conspiracy theories the writers use to feed their addiction to feeling like they know the Hidden Truth(TM), with incompatible evidence discarded.
Trying to dig real information out of that material is at best horribly frustrating, and it’s terrible for helping you get a coherent model, especially a coherent accurate model. At most it might give you some search terms, and even then you need to be super skeptical of what you find.
And not any “howtos”.
I mean the actual manuals that actually explain how the systems involved work.
For technical information, that means not something entitled “How to HaxOr on the Dark Web lol”, but the manual for the OS. The manuals for servers and monitoring systems. The protocol specifications. Security product literature. Technical security standards. Architecture textbooks. There’s no royal road; you have to actually understand how things work.
Figuring out how organizations and institutions work is harder than technology, because they vary, they have fewer relevant physical or mathematical constraints, they don’t tend to have public documentation, and they don’t always follow their own internal “rules”. You can piece some of it together, and for the rest at least you can often figure out what you don’t know. You can look at court documents, official procedures, security management standards, even amusing personal anecdotes about bureaucracy. You can infer some of it from the features people have seen fit to put into technology that supports the organizations. But you won’t get it from something called “The Truth about the NSA(TM)”. If anything there are more silly conspiracy theories about institutions than about technology.
And a whistleblower has limited time and energy; they can’t go through 10 different manuals and carefully pick and choose what applies to their situation. They need a clear set of rules ready to go, IMO.
This may be the crux of the issue. I would tend to say that if you’re not already pretty sophisticated, maybe you shouldn’t do it. Not just because you don’t know your risks, not just because you may be unprepared either to stay anonymous or to deal with the consequences of not being… but also because you may not understand the actual impact, or lack thereof, that your disclosure will have when it hits the real world.
Agree. My platonic ideal is that at least once a month, all the experts involved in writing the guide go through and approve any changes to it.
Sure, but it’s super hard to actually do even a far less perfect version of that.
Surveillance and “the NSA”
A couple of reordered quotes--
The main reason I’m pessimistic about anonymity in this case is the extent to which the NSA tracks who downloads every document.
I think we’ll probably agree that, at least in your main scenario, it doesn’t actually matter much exactly who watches what. You can expect them all to be cooperating, and it may not matter much in the end. But I think it may matter enough that it pays to keep a clear model of them. And for the sake of that...
It’s not just the NSA spying on everything, nor the NSA and the FBI, nor any fixed set of people. Also, the NSA qua NSA isn’t as omniscient as some people think. And information sharing isn’t about “hacking”, even bureaucratic hacking. If you think of all the people who are watching in terms of a monolithic, all-knowing “NSA”, you may make mistakes.
Basically every modern institution, even non-secretive ones, logs internal downloads (and external network connections, and more). You can end up doing that without even trying, just because of the software defaults. I believe the NSA does not usually have access to such internal logs. But, again, for this case, they’re on the same side as the people who do.
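To make the “software defaults” point concrete, here’s a small sketch. The log line below is invented for illustration, but the fields are the standard “combined” access-log format that stock web servers write out of the box, with no deliberate monitoring configured at all:

```python
import re

# An invented example of what a stock web server logs by default
# (the standard "combined" log format: client, user, time, request,
# status, size, referrer, user agent).
line = ('10.2.3.4 - jdoe [12/Mar/2024:09:15:02 +0000] '
        '"GET /internal/docs/report.pdf HTTP/1.1" 200 48213 '
        '"-" "Mozilla/5.0"')

pattern = re.compile(
    r'(?P<ip>\S+) \S+ (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d+) (?P<size>\d+)'
)
m = pattern.match(line)

# Who fetched which file, from where, and when -- already on disk
# before anyone decided to watch anything.
print(m.group('user'), m.group('time'), m.group('request'))
```

The point isn’t this particular format; it’s that an after-the-fact investigation can usually reconstruct download histories from records nobody consciously chose to keep.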
The people who matter most in terms of noticing your initial acquisition of whatever information will be your organization’s internal security and counterintelligence functions.
Where you get into trouble with the NSA is out on the public Internet. After you exfiltrate your data, the NSA is the agency that may be able to figure out where they went, or link your activities outside of the organization you’re whistleblowing on with your activities inside of it.
The [Watergate—jbash] NSA did not have the same level of technical capabilities, nor had it bureaucratically hacked the FBI back then.
I think the technological change is really the key point. Back then, you couldn’t keep a log of every download; there were no downloads. If somebody photocopied something, that would at most show up on a total copies counter on one of any number of machines. And there weren’t likely to be permanently recorded cameras in that parking garage (which would most likely belong to the garage, not the NSA).
As for organizations, the FBI at the time was the J. Edgar Hoover FBI. I’m not sure anybody had to hack it to weaponize it. And the NSA was still the totally secret No Such Agency, and was almost entirely focused on foreign COMINT (it’s still basically about COMINT, and theoretically foreign).
I’m not sure what “bureaucratically hacked” means. I don’t believe the NSA has substantially infiltrated the FBI in some undercover way. A lot of interagency barriers went away with the (stupid) USA-PATRIOT act and the surrounding restructurings. But those changes weren’t “hacking”. They were openly imposed by official policy from above and outside both agencies (even though they were probably welcomed on both sides).
And I think at least part of the reason he got away with it was that particular people (and courts) at the time felt constrained by the rule of law. This may not always apply.
There are more recent examples of people who also got away with publicly leaking summaries while not leaking classified information. Your point is also true, but I think whether classified information was leaked is the bigger factor. It’ll be easier for me to argue all this once I publish an actual list of case studies.
Sorry, I need more time for that.
For technical information, that means not something entitled “How to HaxOr on the Dark Web lol”, but the manual for the OS. The manuals for servers and monitoring systems. The protocol specifications. Security product literature. Technical security standards. Architecture textbooks. There’s no royal road; you have to actually understand how things work.
Yes, I agree this is required for the multiple-books opsec guide, not the 10-page quick guide. Again, the question comes back to probabilities. These are very much not the actual numbers, but if P(no guide, no prison) = 50%, P(10-page guide, no prison) = 65%, and P(1000-page guide, no prison) = 80%, then it is worth publishing a 10-page guide.
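The arithmetic behind this can be sketched as a toy expected-value comparison. The probabilities are the placeholder numbers from the paragraph above, and the payoff values (+1 for no prison, −10 for prison) are made up purely for illustration:

```python
# Toy expected-value comparison. The probabilities are the placeholder
# numbers from the discussion above; the payoffs are arbitrary
# illustrative values, not real estimates of anything.

def expected_value(p_no_prison: float,
                   value_no_prison: float = 1.0,
                   value_prison: float = -10.0) -> float:
    """Expected outcome given P(no prison) and two payoff values."""
    return (p_no_prison * value_no_prison
            + (1.0 - p_no_prison) * value_prison)

scenarios = {
    "no guide": 0.50,
    "10-page guide": 0.65,
    "1000-page guide": 0.80,
}

for name, p in scenarios.items():
    print(f"{name}: P(no prison)={p:.0%}, EV={expected_value(p):+.2f}")
```

Under any fixed payoffs, the 10-page guide beats no guide as long as it raises P(no prison) at all; how much it raises it (and whether it instead lowers it by creating false confidence) is the real empirical question.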
I would tend to say that if you’re not already pretty sophisticated, maybe you shouldn’t do it. Not just because you don’t know your risks, not just because you may be unprepared either to stay anonymous or to deal with the consequences of not being… but also because you may not understand the actual impact, or lack thereof, that your disclosure will have when it hits the real world.
You can have wide uncertainty on every single one of these questions and still conclude that whistleblowing is the correct decision. See the example probabilities above.
I agree that my guide should include a section on predicting potential outcomes after you whistleblow. I will definitely do this. Thank you for the suggestion.
Basically every modern institution, even non-secretive ones, logs internal downloads (and external network connections, and more). You can end up doing that without even trying, just because of the software defaults. I believe the NSA does not usually have access to such internal logs. But, again, for this case, they’re on the same side as the people who do.
The people who matter most in terms of noticing your initial acquisition of whatever information will be your organization’s internal security and counterintelligence functions.
Where you get into trouble with the NSA is out on the public Internet. After you exfiltrate your data, the NSA is the agency that may be able to figure out where they went, or link your activities outside of the organization you’re whistleblowing on with your activities inside of it.
I agree this model is valuable to have, and I appreciate you writing this up.
I honestly do think, though, that by 2027 there is a good chance internal security for AI companies will directly report to the head of the NSA and ultimately the president, not to the heads of the labs.
And not just in a legal abstract sense like how Lockheed Martin’s various security teams may be accountable to the govt, but in the sense of how Manhattan project security directly reported to US military generals who frequently visited the sites for inspections.
I would not be surprised if NSA leaders built war rooms and private residences inside the datacenter compound, although this might be me speculating too far. If it happens though it may take longer than 2027, maybe 2029? It depends on timelines honestly.
Other than stuff like those questions, I don’t know a way. I think it’s partly about seeing these sorts of games played a lot of ways, and partly about the innate ability to think through all the possibilities with the right kind of paranoia. Paranoia that’s skeptical of itself.
I’ll try to think of a way though. It seems valuable to do.
I think the technological change is really the key point
Agree!
They were openly imposed by official policy from above and outside both agencies (even though they were probably welcomed on both sides).
Snowden in Permanent Record considers Dick Cheney (vice president) and Michael Hayden (NSA director) co-conspirators. I don’t have a lot of data to argue this, but I do think the FBI director was intentionally kept out of the matter.
Your description is not wrong, there’s nuances here I haven’t tried to fully understand yet either. Thanks for the pointer that this may be worth doing.
Not to argue on any specific points yet, but I think the main difference in approach between you and me is that mine is a lot more based on past empirical data, than theoretical speculation of the best way to do things.
I agree that a theoretical ideal opsec will cover multiple books. It is difficult to educate someone on that quickly. A shorter guide is actually better to not overwhelm someone with info. That being said, I’d highly encourage you if you want to write it.
This was a footnote, but I think I should actually move it up top.
Both you and I are distorting the landscape with this whole conversation.
Most leaks are humdrum, day-to-day, below-the-fold or inside-pages “an anonymous source in department X informs us that...” cases. Often the information involved isn’t classified. If it is, it’s not a big enough deal to put unlimited resources into it.
Those cases don’t get the level of investigation we’re talking about, with all the stops pulled out. Journalists are able to protect those sources.
But those cases add up and can have impact over time.
Honestly a guide for those cases might be more useful than a guide that assumes you’re going to be so hot you have to flee the country. But even those cases are complicated.
And you may guess wrong, in either direction, about how hot your disclosure will be.
I’m going to have to dispute this.
First, a handful of cases may be “empirical”, but it’s misleading to call them “data”. One reason I reacted to what you posted was that it was so full of “theory” derived from relatively narrow and shallow information.
Second, I watched all of those cases in real time, and have also watched a lot of relevant stuff that wasn’t in the news, or at least wasn’t on the front page, because it wasn’t high-stakes “whistleblowing”. There are impactful leaks that aren’t at the level of the Snowden drop. Beyond that, tons of relevant things play out every day in non-leak-related contexts.
No, I haven’t gone through and systematically analyzed everything I could find in a single process. But I don’t think you have either. How did you identify the cases you thought about? Just off the top of my head, where’s Mark Klein[1]? Where’s Deep Throat[2]? Did you use a systematic and relatively unbiased method of finding the “data” you’re relying on?
What I take from the [anec]data is that:
Nobody so far has taken what I’d think of as decent OPSEC measures in the kind of very high-stakes, headline-grabbing, usually-clearly-illegal whistleblowing that makes you a truly major target[3].
We therefore don’t know anything about what would happen to anybody who did. You may indeed get caught even with the best feasible level of OPSEC, but we have no experience with that case. We know the risk is real, but have no defensible way to quantify it.
The reason we don’t see cases with good OPSEC may be that really good OPSEC is so constraining that it keeps leaks from happening at all[4].
If you make a truly high-profile leak under your own name, or if you get de-anonymized, your life will definitely be turned upside down, probably including prison time. If you flee the country or whatever, you will still not have anything resembling a normal life. It’s not obvious to me that there’s a lot of difference in consequences between deliberately disclosing your identity and having it found by investigation.
At the same time, doing it under your own name makes you more credible.
You seem to have arrived at (4) and maybe (5), too, but I’m not convinced that the measures you suggest for reducing the disruption are going to have predictable positive effects. What happened to Julian Assange may or may not have been better than prison[5]. Snowden did better than prison, I think, but not great[6].
I don’t actually think it’s completely futile to try to do this stuff with lasting anonymity, whereas I think you may believe that it usually is. But to do that, you have to be a certain kind of person, with a certain kind of knowledge, and a certain mindset. If you’re not that person, you may have to become that person first. Which, by the way, can make you suspicious in itself.
On the other hand, doing it openly also requires being a certain kind of person. Instead of the ability to do OPSEC, you need the ability to handle blowing up your life, and possibly the lives of people who depend on you. Also not easy to become.
If they’re overwhelmed, they will fail. If they rely on a short guide, they will also fail.
I believe that step-by-step recipes are actively dangerous[7]. They create false expectations, giving people the idea that they know what they need to do, without equipping them to understand the limitations of the approach, or to notice when it needs to change to match individual, unforeseeable circumstances.
If you give people counsel about risks, you need to make them understand those risks, and exactly how and how much your guidance changes those risks. Short guides can’t even do that much, let alone do much beyond the most obvious things to mitigate those risks.
It might actually be safer to force people to think for themselves. Even the sort of OPSEC that keeps you from getting caught before you leak takes a certain mindset.
For every action you take, you have to think “Can this be observed? Recorded? How can its effects be seen? What can be inferred from it? What does it look like? Why would people (or nowadays computers) think I might be doing it? How will they react? Might it make me interesting? What other information can be connected with it? Who has that information? Will it be noticed immediately? Will an after-the-fact investigation find it? Is there a better way to get the same result?”.
Once you’ve internalized those questions, answering them takes a lot of factual knowledge, and the particular knowledge you need is specific to the environment you’re working in. But at least if you’re asking the questions, you have a fighting chance of identifying all of the knowledge.
I sometimes give individual advice on things like Tor. I might as well have a keyboard macro for “If the stakes are high (which I don’t know because there is no reason for me to know why you want to do this), then don’t try this until you understand <long-list> at a much deeper level, or you will fail”. But rarely do I need to explain everything about <long-list> itself. There’s usually a manual. If they can’t understand the manual once they’re told they need to, they won’t understand what I say, either.
I could maybe write a study outline: a checklist of things you might need to understand. I’m not volunteering, just saying I’d be capable of it and it might be useful. If I wrote a comprehensive book (or series of books), you’ve correctly pointed out that nobody would read it. It’d also have to be maintained, or it’d have dangerous inaccuracies scarily soon. Things change all the time.
Klein wasn’t an insider. They(TM) probably didn’t have a sense of having been betrayed (which matters a lot to the type of people who tend to get into military and intelligence jobs, but I’m not so sure about AI companies). They(TM) also apparently didn’t feel they had a good legal way to go after Klein. But, still, he did make a high-impact disclosure, under his own name, and skate. He might not have skated under a more vindictive administration, though. Some people, including more than one person we’re probably both thinking of right now, feel betrayed as a default state, and are willing to upend people’s lives without any real legal basis for it. And that can seep into organizational culture.
Deep Throat stayed anonymous, even though Woodward and Bernstein knew who he was. The technological, cultural, and even legal landscape may have changed enough that Deep Throat isn’t on point, but you don’t say why you didn’t include him.
Ed Snowden probably came closest. Manning and Winner, on the other hand…
Which may be why Snowden, who seemed to have a realistic inside view, didn’t really try to remain anonymous post-leak, and did in fact just flee the US (at which he very nearly failed). Or maybe he did it because he knew he’d be more credible with his name on the leak. But that should still be understood as him deliberately sacrificing himself.
It turned into a big pain in the butt for the government of Ecuador, too. I wonder if other governments may not have watched and seen it as a lesson not to take in foundlings like that.
And, seriously, everything about Snowden’s flight and asylum was weird. It doesn’t look like a good case to be generalizing from.
… at least unless they’re so tightly focused on a single set of circumstances that they nearly have to be written for a specific individual. I might be able to write you a somewhat digestible recipe for setting up X type of Web site as a Tor hidden service. At least if it were a simple kind of site. I’d feel better about it if the stakes weren’t too high. I could not write a reasonably digestible recipe for anonymously exfiltrating and disclosing just any data from just any highly paranoid organization.
I love your reply.
I actually think we’re on the same page about a lot of what you’re saying.
I agree with this. It is valuable for me to write a guide on those cases too. If I have to write only one guide I’ll write the one I’m writing now. But yes both are valuable.
I have some notes on both Mark Klein and Deep Throat that I haven’t published yet. I do plan to publish a proper database of at least the top 20 or so leaks (and ideally a lot more) with fact-checked information.
Short version is that Mark Klein did not release classified information, and Mark Felt (Deep Throat) operated over 50 years ago in a different technological and legal environment. The NSA did not have the same level of technical capabilities back then, nor had it bureaucratically hacked the FBI.
The main reason I’m pessimistic about anonymity in this case is the extent to which the NSA tracks who downloads every document. The list of suspects is small enough that everyone can be investigated, which means your best bet might be to attempt to falsely incriminate someone else in that pool. Apart from that having questionable morals, it’s also really hard to do for someone who lacks experience in this type of work.
If you are giving up on anonymity, I agree you have to be able to handle blowing up your life in many respects. I’ve recently been reading up on the possibility of taking a spouse or children with you when you whistleblow. I do think this guide needs more resources on mental health.
This is valuable input, thanks for sharing this. I do understand opsec is a mindset not just a set of rules.
Someone who has the mindset still needs a clear set of rules though. And I think it’s better for a team of experts from the outside to write that set of rules, instead of the person themselves. They can make an attempt to adapt the rules to their circumstances.
I think it would be extremely valuable if there were fast ways to transmit this mindset to someone who does not already have it. I’d love more input on how to do this. I will also think more about it myself. Most people at OpenAI/Deepmind/Anthropic are software developers, and hence teaching them the mindset can be done more quickly, assuming they don’t have it already.
I think the ethics of this come down ultimately to what the probability of success is. I think if you’re going the Snowden route of fleeing the country, you can afford to make some opsec mistakes and still escape with your life. The bar is nowhere near as high as it would be if I were writing a guide for someone trying to stay anonymous within the US. My guide explicitly says you will be doxxed eventually; it’s just a question of how soon.
I have seen some of these manuals on the dark web. You can send me anonymous email at samuel.da.shadrach@gmail.com if you want to send me any. (I don’t operate an anonymous email myself for this circumstance; I think it could give me a false sense of security if I create one and then post it here under my real name.)
The main reason I don’t want to tell people to just refer to a manual is that none of the manuals are tailor-made to this circumstance; they’re usually written by someone with a different circumstance in mind, such as a drug vendor or cyberhacker or whatever.
And a whistleblower has limited time and energy; they can’t go through 10 different manuals and carefully pick and choose what applies to their situation. They need a clear set of rules ready-to-go IMO.
Agree. My platonic ideal is that at least once a month, all the experts involved in writing the guide go through and approve any changes to it.
Right. And I think at least part of the reason he got away with it was that particular people (and courts) at the time felt constrained by the rule of law. This may not always apply.
I’m going to respond to this at more length, but I’ve moved most of it down because I think it’s a lower priority. The main point I want to make is that there’s a misleading attitude out there about the NSA, and letting your thoughts be all “NSA” may not be healthy.
I agree that everybody who’s watching will cooperate on something like what we’re talking about here.
That depends on circumstances, though. Sometimes yes, sometimes no. Some things you might want to leak may have fairly wide internal distribution. And sometimes there’s a way to get something outside of the normal channels. And sometimes they really don’t want to risk going after the wrong person, so those investigations need to be more airtight.
But surely not always.
Lots of people get caught trying to do that.
Well, what kind of rules do you want?
That litany of questions I posted could be considered as a checklist. But I’m not sure that asking them equips you to recognize when you have or haven’t actually answered them.
I might add something about thinking about the other side’s constraints and available moves, and about not being comfortable that you understand something until you’ve confirmed how it works at the “gears level”. That’s where all the factual knowledge comes in.
Other than stuff like those questions, I don’t know a way. I think it’s partly about seeing these sorts of games played a lot of ways, and partly about the innate ability to think through all the possibilities with the right kind of paranoia. Paranoia that’s skeptical of itself.
In some sense it’s like the skill of finding all the cases in a mathematical proof, except that in practice you also have to be able to decide you’ll live with a “counterexample” if it’s improbable enough. Which, of course, means judging how improbable it actually is. It’s easy to fall into error there, in either direction.
… and of course this is all “theoretical” on my part in that I’ve never actually done it when anybody was really going to go after me.
… and to whether the person at risk knows that probability...
Egad, no, not those “manuals”[1]!
Most of that stuff is drivel. It tends to be a total mishmash of truths, half-truths, unsupported folklore, misunderstandings, outright lies, and regurgitated disinformation, collected by people with no coherent internal model of How Things Work to measure any of it against. Then it gets filtered through whatever conspiracy theories the writers use to feed their addiction to feeling like they know the Hidden Truth(TM), with incompatible evidence discarded.
Trying to dig real information out of that material is at best horribly frustrating, and it’s terrible for helping you get a coherent model, especially a coherent accurate model. At most it might give you some search terms, and even then you need to be super skeptical of what you find.
And not any “howtos”.
I mean the actual manuals that actually explain how the systems involved work.
For technical information, that means not something entitled “How to HaxOr on the Dark Web lol”, but the manual for the OS. The manuals for servers and monitoring systems. The protocol specifications. Security product literature. Technical security standards. Architecture textbooks. There’s no royal road; you have to actually understand how things work.
Figuring out how organizations and institutions work is harder than technology, because they vary, they have fewer relevant physical or mathematical constraints, they don’t tend to have public documentation, and they don’t always follow their own internal “rules”. You can piece some of it together, and for the rest at least you can often figure out what you don’t know. You can look at court documents, official procedures, security management standards, even amusing personal anecdotes about bureaucracy. You can infer some of it from the features people have seen fit to put into technology that supports the organizations. But you won’t get it from something called “The Truth about the NSA(TM)”. If anything there are more silly conspiracy theories about institutions than about technology.
This may be the crux of the issue. I would tend to say that if you’re not already pretty sophisticated, maybe you shouldn’t do it. Not just because you don’t know your risks, not just because you may be unprepared either to stay anonymous or to deal with the consequences of not being… but also because you may not understand the actual impact, or lack thereof, that your disclosure will have when it hits the real world.
Sure, but it’s super hard to actually do even a far less perfect version of that.
Surveillance and “the NSA”
A couple of reordered quotes--
I think we’ll probably agree that, at least in your main scenario, it doesn’t actually matter much exactly who watches what. You can expect them all to be cooperating, and it may not matter much in the end. But I think it may matter enough that it pays to keep a clear model of them. And for the sake of that...
It’s not just the NSA spying on everything, nor the NSA and the FBI, nor any fixed set of people. Also, the NSA qua NSA isn’t as omniscient as some people think. And information sharing isn’t about “hacking”, even bureaucratic hacking. If you think of all the people who are watching in terms of a monolithic, all-knowing “NSA”, you may make mistakes.
Basically every modern institution, even non-secretive ones, logs internal downloads (and external network connections, and more). You can end up doing that without even trying, just because of the software defaults. I believe the NSA does not usually have access to such internal logs. But, again, for this case, they’re on the same side as the people who do.
The people who matter most in terms of noticing your initial acquisition of whatever information will be your organization’s internal security and counterintelligence functions.
Where you get into trouble with the NSA is out on the public Internet. After you exfiltrate your data, the NSA is the agency that may be able to figure out where they went, or link your activities outside of the organization you’re whistleblowing on with your activities inside of it.
I think the technological change is really the key point. Back then, you couldn’t keep a log of every download; there were no downloads. If somebody photocopied something, that would at most show up on a total copies counter on one of any number of machines. And there weren’t likely to be permanently recorded cameras in that parking garage (which would most likely belong to the garage, not the NSA).
As for organizations, the FBI at the time was the J. Edgar Hoover FBI. I’m not sure anybody had to hack it to weaponize it. And the NSA was still the totally secret No Such Agency, and was almost entirely focused on foreign COMINT (it’s still basically about COMINT, and theoretically foreign).
I’m not sure what “bureaucratically hacked” means. I don’t believe the NSA has substantially infiltrated the FBI in some undercover way. A lot of interagency barriers went away with the (stupid) USA-PATRIOT act and the surrounding restructurings. But those changes weren’t “hacking”. They were openly imposed by official policy from above and outside both agencies (even though they were probably welcomed on both sides).
And please don’t say “Dark Web”. It romanticizes a bunch of unconnected Web sites, many of them run by fools and cranks, into something they’re not.
There are more recent examples of people who also got away with publicly leaking summaries without leaking classified information. Your point is also true, but I think classified information being leaked is a bigger factor. It’ll be easier for me to argue all this once I publish an actual list of case studies.
Sorry, I need more time for that.
Yes I agree this is required for the multiple books opsec guide, not the 10-page quick guide. Again, the question comes back to probabilities. These are very much not the actual numbers but if P(no guide, no prison) = 50%, P(10-page guide, no prison) = 65%, P(1000-page guide, no prison) = 80%, then it is worth publishing a 10-page guide.
You can have wide uncertainty on every single one of these questions and still conclude that whistleblowing is the correct decision. See the example probabilities above.
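The decision logic here can be sketched numerically. This is a toy illustration only, using the same made-up probabilities from the example above (they are explicitly not real estimates); it shows that even with substantial noise on each estimate, the ranking of options can remain mostly stable.

```python
import random

# Made-up probabilities from the comment above; not real estimates.
p_no_prison = {
    "no guide": 0.50,
    "10-page guide": 0.65,
    "1000-page guide": 0.80,
}

# The core argument: a 10-page guide is worth publishing if it improves
# the leaker's odds at all, even though a longer guide would improve them more.
improvement = p_no_prison["10-page guide"] - p_no_prison["no guide"]
print(f"10-page guide improves P(no prison) by {improvement:.0%}")

# Check robustness to uncertainty: perturb each probability by up to
# +/- 10 percentage points and see how often the guide still comes out ahead.
random.seed(0)
trials = 10_000
guide_better = sum(
    (p_no_prison["10-page guide"] + random.uniform(-0.10, 0.10))
    > (p_no_prison["no guide"] + random.uniform(-0.10, 0.10))
    for _ in range(trials)
)
print(f"guide beats no guide in {guide_better / trials:.0%} of noisy trials")
```

The point isn’t the specific numbers; it’s that the conclusion "publish the short guide" can survive wide uncertainty on every input, as long as the estimated improvement is larger than the noise most of the time.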
I agree that my guide should include a section on predicting potential outcomes after you whistleblow. I will definitely do this. Thank you for the suggestion.
I agree this model is valuable to have, and I appreciate you writing this up.
I honestly do think though that by 2027, there is a good chance internal security for AI companies will directly report to the head of the NSA and ultimately the president, not to the head of labs.
And not just in a legal abstract sense like how Lockheed Martin’s various security teams may be accountable to the govt, but in the sense of how Manhattan project security directly reported to US military generals who frequently visited the sites for inspections.
I would not be surprised if NSA leaders built war rooms and private residences inside the datacenter compound, although this might be me speculating too far. If it happens though it may take longer than 2027, maybe 2029? It depends on timelines honestly.
I’ll try to think of a way though. It seems valuable to do.
Agree!
Snowden in Permanent Record considers Dick Cheney (vice president) and Michael Hayden (NSA director) co-conspirators. I don’t have a lot of data to argue this, but I do think the FBI director was intentionally kept out of the matter.
Your description is not wrong, there’s nuances here I haven’t tried to fully understand yet either. Thanks for the pointer that this may be worth doing.