Security Mindset and the Logistic Success Curve

Follow-up to: Security Mindset and Ordinary Paranoia


(Two days later, Amber returns with another question.)

amber: Uh, say, Coral. How important is security mindset when you're building a whole new kind of system—say, one subject to potentially adverse optimization pressures, where you want it to have some sort of robustness property?

coral: How novel is the system?

amber: Very novel.

coral: Novel enough that you'd have to invent your own new best practices instead of looking them up?

amber: Right.

coral: That's serious business. If you're building a very simple Internet-connected system, maybe a smart ordinary paranoid could look up how we usually guard against adversaries, use as much off-the-shelf software as possible that was checked over by real security professionals, and not do too horribly. But if you're doing something qualitatively new and complicated that has to be robust against adverse optimization, well… mostly I'd think you were operating in almost impossibly dangerous territory, and I'd advise you to figure out what to do after your first try failed. But if you wanted to actually succeed, ordinary paranoia absolutely would not do it.

amber: In other words, projects to build novel mission-critical systems ought to have advisors with the full security mindset, so that the advisor can say what the system builders really need to do to ensure security.

coral: (laughs sadly) No.

amber: No?

coral: Let's say for the sake of concreteness that you want to build a new kind of secure operating system. That is not the sort of thing you can do by attaching one advisor with security mindset, who has limited political capital to use to try to argue people into doing things. "Building a house when you're only allowed to touch the bricks using tweezers" comes to mind as a metaphor. You're going to need experienced security professionals working full-time with high authority. Three of them, one of whom is a cofounder. Although even then, we might still be operating in the territory of Paul Graham's Design Paradox.

amber: Design Paradox? What's that?

coral: Paul Graham's Design Paradox is that people who have good taste in UIs can tell when other people are designing good UIs, but most CEOs of big companies lack the good taste to tell who else has good taste. And that's why big companies can't just hire other people as talented as Steve Jobs to build nice things for them, even though Steve Jobs certainly wasn't the best possible designer on the planet. Apple existed because of a lucky history where Steve Jobs ended up in charge. There's no way for Samsung to hire somebody else with equal talents, because Samsung would just end up with some guy in a suit who was good at pretending to be Steve Jobs in front of a CEO who couldn't tell the difference.

Similarly, people with security mindset can notice when other people lack it, but I'd worry that an ordinary paranoid would have a hard time telling the difference, which would make it hard for them to hire a truly competent advisor. And of course lots of the people in the larger social system behind technology projects lack even the ordinary paranoia that many good programmers possess, and they just end up with empty suits talking a lot about "risk" and "safety". In other words, if we're talking about something as hard as building a secure operating system, and your project hasn't already started out headed by someone with the full security mindset, you are in trouble. Where by "in trouble" I mean "totally, irretrievably doomed".

amber: Look, uh, there's a certain project I'm invested in which has raised a hundred million dollars to create merchant drones.

coral: Merchant drones?

amber: So there are a lot of countries that have poor market infrastructure, and the idea is, we're going to make drones that fly around buying and selling things, and they'll use machine learning to figure out what prices to pay and so on. We're not just in it for the money; we think it could be a huge economic boost to those countries, really help them move forwards.

coral: Dear God. Okay. There are exactly two things your company is about: system security, and regulatory compliance. Well, and also marketing, but that doesn't count because every company is about marketing. It would be a severe error to imagine that your company is about anything else, such as drone hardware or machine learning.

amber: Well, the sentiment inside the company is that the time to begin thinking about legalities and security will be after we've proven we can build a prototype and have at least a small pilot market in progress. I mean, until we know how people are using the system and how the software ends up working, it's hard to see how we could do any productive thinking about security or compliance that wouldn't just be pure speculation.

coral: Ha! Ha, hahaha… oh my god you're not joking.

amber: What?

coral: Please tell me that what you actually mean is that you have a security and regulatory roadmap which calls for you to do some of your work later, but clearly lays out what work needs to be done, when you are to start doing it, and when each milestone needs to be complete. Surely you don't literally mean that you intend to start thinking about it later?

amber: A lot of times at lunch we talk about how annoying it is that we'll have to deal with regulations and how much better it would be if governments were more libertarian. That counts as thinking about it, right?

coral: Oh my god.

amber: I don't see how we could have a security plan when we don't know exactly what we'll be securing. Wouldn't the plan just turn out to be wrong?

coral: All business plans for startups turn out to be wrong, but you still need them—and not just as works of fiction. They represent the written form of your current beliefs about your key assumptions. Writing down your business plan checks whether your current beliefs can possibly be coherent, and suggests which critical beliefs to test first, and which results should set off alarms, and when you are falling behind key survival thresholds. The idea isn't that you stick to the business plan; it's that having a business plan (a) checks that it seems possible to succeed in any way whatsoever, and (b) tells you when one of your beliefs is being falsified so you can explicitly change the plan and adapt. Having a written plan that you intend to rapidly revise in the face of new information is one thing. NOT HAVING A PLAN is another.

amber: The thing is, I am a little worried that the head of the project, Mr. Topaz, isn't concerned enough about the possibility of somebody fooling the drones into giving out money when they shouldn't. I mean, I've tried to raise that concern, but he says that of course we're not going to program the drones to give out money to just anyone. Can you maybe give him a few tips? For when it comes time to start thinking about security, I mean.

coral: Oh. Oh, my dear, sweet summer child, I'm sorry. There's nothing I can do for you.

amber: Huh? But you haven't even looked at our beautiful business model!

coral: I thought maybe your company merely had a hopeless case of underestimated difficulties and misplaced priorities. But now it sounds like your leader is not even using ordinary paranoia, and reacts with skepticism to it. Calling a case like that "hopeless" would be an understatement.

amber: But a security failure would be very bad for the countries we're trying to help! They need secure merchant drones!

coral: Then they will need drones built by some project that is not led by Mr. Topaz.

amber: But that seems very hard to arrange!

coral: …I don't understand what you are saying that is supposed to contradict anything I am saying.

amber: Look, aren't you judging Mr. Topaz a little too quickly? Seriously.

coral: I haven't met him, so it's possible you misrepresented him to me. But if you've accurately represented his attitude? Then, yes, I did judge quickly, but it's a hell of a good guess. Security mindset is already rare on priors. "I don't plan to make my drones give away money to random people" means he's imagining how his system could work as he intends, instead of imagining how it might not work as he intends. If somebody doesn't even exhibit ordinary paranoia, spontaneously on their own cognizance without external prompting, then they cannot do security, period. Reacting indignantly to the suggestion that something might go wrong is even beyond that level of hopelessness, but the base level was hopeless enough already.

amber: Look… can you just go to Mr. Topaz and try to tell him what he needs to do to add some security onto his drones? Just try? Because it's super important.

coral: I could try, yes. I can't succeed, but I could try.

amber: Oh, but please be careful to not be harsh with him. Don't put the focus on what he's doing wrong—and try to make it clear that these problems aren't too serious. He's been put off by the media alarmism surrounding apocalyptic scenarios with armies of evil drones filling the sky, and it took me some trouble to convince him that I wasn't just another alarmist full of fanciful catastrophe scenarios of drones defying their own programming.

coral:

amber: And maybe try to keep your opening conversation away from what might sound like crazy edge cases, like somebody forgetting to check the end of a buffer and an adversary throwing in a huge string of characters that overwrite the end of the stack with a return address that jumps to a section of code somewhere else in the system that does something the adversary wants. I mean, you've convinced me that these far-fetched scenarios are worth worrying about, if only because they might be canaries in the coal mine for more realistic failure modes. But Mr. Topaz thinks that's all a bit silly, and I don't think you should open by trying to explain to him on a meta level why it isn't. He'd probably think you were being condescending, telling him how to think. Especially when you're just an operating-systems guy and you have no experience building drones and seeing what actually makes them crash. I mean, that's what I think he'd say to you.
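(For readers who haven't encountered the attack Amber is describing: it is the classic stack buffer overflow. Below is a minimal, deliberately broken C sketch of the pattern; the names and the scenario are hypothetical illustrations, not anything from the drone project.)

```c
#include <stdio.h>
#include <string.h>

/* Deliberately vulnerable request handler, for illustration only.
 * The 64-byte buffer lives on the stack, and strcpy() performs no bounds
 * check, so input longer than the buffer runs past its end and can
 * overwrite the saved return address with attacker-chosen bytes,
 * redirecting execution to code the attacker wants to run. */
static void handle_request(const char *input) {
    char buffer[64];
    strcpy(buffer, input);           /* nobody checked the end of the buffer */
    printf("handled: %s\n", buffer);
}

int main(void) {
    handle_request("a normal-sized request");  /* looks fine in ordinary testing */
    /* An adversary instead supplies a much longer, carefully crafted string
     * whose tail bytes land exactly where the return address used to be. */
    return 0;
}
```

The point of the sketch is the one Coral keeps making: the code behaves perfectly under every input the developers thought to try, and only fails under input an adversary optimized to find.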

coral:

amber: Also, start with the cheaper interventions when you're giving advice. I don't think Mr. Topaz is going to react well if you tell him that he needs to start all over in another programming language, or establish a review board for all code changes, or whatever. He's worried about competitors reaching the market first, so he doesn't want to do anything that will slow him down.

coral:

amber: Uh, Coral?

coral: … on his novel project, entering new territory, doing things not exactly like what has been done before, carrying out novel mission-critical subtasks for which there are no standardized best security practices, nor any known understanding of what makes the system robust or not-robust.

amber: Right!

coral: And Mr. Topaz himself does not seem much terrified of this terrifying task before him.

amber: Well, he's worried about somebody else making merchant drones first and misusing this key economic infrastructure for bad purposes. That's the same basic thing, right? Like, it demonstrates that he can worry about things?

coral: It is utterly different. Monkeys who can be afraid of other monkeys getting to the bananas first are far, far more common than monkeys who worry about whether the bananas will exhibit weird system behaviors in the face of adverse optimization.

amber: Oh.

coral: I'm afraid it is only slightly more probable that Mr. Topaz will oversee the creation of robust software than that the Moon will spontaneously transform into organically farmed goat cheese.

amber: I think you're being too harsh on him. I've met Mr. Topaz, and he seemed pretty bright to me.

coral: Again, assuming you're representing him accurately, Mr. Topaz seems to lack what I called ordinary paranoia. If he does have that ability as a cognitive capacity, which many bright programmers do, then he obviously doesn't feel passionate about applying that paranoia to his drone project along key dimensions. It also sounds like Mr. Topaz doesn't realize there's a skill that he is missing, and would be insulted by the suggestion. I am put in mind of the story of the farmer who was asked by a passing driver for directions to get to Point B, to which the farmer replied, "If I was trying to get to Point B, I sure wouldn't start from here."

amber: Mr. Topaz has made some significant advances in drone technology, so he can't be stupid, right?

coral: "Security mindset" seems to be a distinct cognitive talent from g factor or even programming ability. In fact, there doesn't seem to be a level of human genius that even guarantees you'll be skilled at ordinary paranoia. Which does make some security professionals feel a bit weird, myself included—the same way a lot of programmers have trouble understanding why not everyone can learn to program. But it seems to be an observational fact that both ordinary paranoia and security mindset are things that can decouple from g factor and programming ability—and if this were not the case, the Internet would be far more secure than it is.

amber: Do you think it would help if we talked to the other VCs funding this project and got them to ask Mr. Topaz to appoint a Special Advisor on Robustness reporting directly to the CTO? That sounds politically difficult to me, but it's possible we could swing it. Once the press started speculating about drones going rogue and maybe aggregating into larger Voltron-like robots that could acquire laser eyes, Mr. Topaz did tell the VCs that he was very concerned about the ethics of drone safety and that he'd had many long conversations about it over lunch hours.

coral: I'm venturing slightly outside my own expertise here, which isn't corporate politics per se. But on a project like this one that's trying to enter novel territory, I'd guess the person with security mindset needs at least cofounder status, and must be personally trusted by any cofounders who don't have the skill. It can't be an outsider who was brought in by VCs, who is operating on limited political capital and needs to win an argument every time she wants to not have all the services conveniently turned on by default. I suspect you just have the wrong person in charge of this startup, and that this problem is not repairable.

amber: Please don't just give up! Even if things are as bad as you say, just increasing our project's probability of being secure from 0% to 10% would be very valuable in expectation to all those people in other countries who need merchant drones.

coral: …look, at some point in life we have to try to triage our efforts and give up on what can't be salvaged. There's often a logistic curve for success probabilities, you know? The distances are measured in multiplicative odds, not additive percentage points. You can't take a project like this and assume that by putting in some more hard work, you can increase the absolute chance of success by 10%. More like, the odds of this project's failure versus success start out as 1,000,000:1, and if we're very polite and navigate around Mr. Topaz's sense that he is higher-status than us and manage to explain a few tips to him without ever sounding like we think we know something he doesn't, we can quintuple his chances of success and send the odds to 200,000:1. Which is to say that in the world of percentage points, the odds go from 0.0% to 0.0%. That's one way to look at the "law of continued failure".

If you had the kind of project where the fundamentals implied, say, a 15% chance of success, you'd then be on the right part of the logistic curve, and in that case it could make a lot of sense to hunt for ways to bump that up to a 30% or 80% chance.
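(Coral's arithmetic, written out using the dialogue's own illustrative numbers; the only step added here is the standard conversion between odds and probability, $p = 1/(1+R)$ where $R$ is the odds against success.)

$$R = 1{,}000{,}000{:}1 \;\Rightarrow\; p \approx 0.0001\%, \qquad \text{quintupling the odds of success: } R = 200{,}000{:}1 \;\Rightarrow\; p \approx 0.0005\%.$$

$$p = 15\% \;\Rightarrow\; R \approx 5.7{:}1, \qquad \text{quintupling the odds of success: } R \approx 1.13{:}1 \;\Rightarrow\; p \approx 47\%.$$

The same factor-of-five improvement that is invisible at one end of the logistic curve is decisive in the middle, which is the sense in which the distances are multiplicative rather than additive.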

amber: Look, I'm worried that it will really be very bad if Mr. Topaz reaches the market first with insecure drones. Like, I think that merchant drones could be very beneficial to countries without much existing market backbone, and if there's a grand failure—especially if some of the would-be customers have their money or items stolen—then it could poison the potential market for years. It will be terrible! Really, genuinely terrible!

coral: Wow. That sure does sound like an unpleasant scenario to have wedged yourself into.

amber: But what do we do now?

coral: Damned if I know. I do suspect you're screwed so long as you can only win if somebody like Mr. Topaz creates a robust system. I guess you could try to have some other drone project come into existence, headed up by somebody that, say, Bruce Schneier assures everyone is unusually good at security-mindset thinking and hence can hire people like me and listen to all the harsh things we have to say. Though I have to admit, the part where you think it's drastically important that you beat an insecure system to market with a secure system—well, that sounds positively nightmarish. You're going to need a lot more resources than Mr. Topaz has, or some other kind of very major advantage. Security takes time.

amber: Is it really that hard to add security to the drone system?

coral: You keep talking about "adding" security. System robustness isn't the kind of property you can bolt onto software as an afterthought.

amber: I guess I'm having trouble seeing why it's so much more expensive. Like, if somebody foolishly builds an OS that gives access to just anyone, you could instead put a password lock on it, using your clever system where the OS keeps the hashes of the passwords instead of the passwords. You just spend a couple of days rewriting all the services exposed to the Internet to ask for passwords before granting access. And then the OS has security on it! Right?

coral: NO. Everything inside your system that is potentially subject to adverse selection in its probability of weird behavior is a liability! Everything exposed to an attacker, and everything those subsystems interact with, and everything those parts interact with! You have to build all of it robustly! If you want to build a secure OS you need a whole special project that is "building a secure operating system instead of an insecure operating system". And you also need to restrict the scope of your ambitions, and not do everything you want to do, and obey other commandments that will feel like big unpleasant sacrifices to somebody who doesn't have the full security mindset. OpenBSD can't do a tenth of what Ubuntu does. They can't afford to! It would be too large of an attack surface! They can't review that much code using the special process that they use to develop secure software! They can't hold that many assumptions in their minds!

amber: Does that effort have to take a significant amount of extra time? Are you sure it can't just be done in a couple more weeks if we hurry?

coral: YES. Given that this is a novel project entering new territory, expect it to take at least two years more time, or 50% more development time—whichever is less—compared to a security-incautious project that otherwise has identical tools, insights, people, and resources. And that is a very, very optimistic lower bound.

amber: This story seems to be heading in a worrying direction.

coral: Well, I'm sorry, but creating robust systems takes longer than creating non-robust systems even in cases where it would be really, extraordinarily bad if creating robust systems took longer than creating non-robust systems.

amber: Couldn't it be the case that, like, projects which are implementing good security practices do everything so much cleaner and better that they can come to market faster than any insecure competitors could?

coral: … I honestly have trouble seeing why you're privileging that hypothesis for consideration. Robustness involves assurance processes that take additional time. OpenBSD does not go through lines of code faster than Ubuntu.

But more importantly, if everyone has access to the same tools and insights and resources, then an unusually fast method of doing something cautiously can always be degenerated into an even faster method of doing the thing incautiously. There is not now, nor will there ever be, a programming language in which it is the least bit difficult to write bad programs. There is not now, nor will there ever be, a methodology that makes writing insecure software inherently slower than writing secure software. Any security professional who heard about your bright hopes would just laugh. Ask them too if you don't believe me.

amber: But shouldn't engineers who aren't cautious just be unable to make software at all, because of ordinary bugs?

coral: I am afraid that it is both possible, and extremely common in practice, for people to fix all the bugs that are crashing their systems in ordinary testing today, using methodologies that are indeed adequate to fixing ordinary bugs that show up often enough to afflict a significant fraction of users, and then ship the product. They get everything working today, and they don't feel like they have the slack to delay any longer than that before shipping because the product is already behind schedule. They don't hire exceptional people to do ten times as much work in order to prevent the product from having holes that only show up under adverse optimization pressure, that somebody else finds first and that they learn about after it's too late.

It's not even the wrong decision, for products that aren't connected to the Internet, don't have enough users for one to go rogue, don't handle money, don't contain any valuable data, and don't do anything that could injure people if something goes wrong. If your software doesn't destroy anything important when it explodes, it's probably a better use of limited resources to plan on fixing bugs as they show up.

… Of course, you need some amount of security mindset to realize which software can in fact destroy the company if it silently corrupts data and nobody notices this until a month later. I don't suppose it's the case that your drones only carry a limited amount of the full corporate budget in cash over the course of a day, and you always have more than enough money to reimburse all the customers if all items in transit over a day were lost, taking into account that the drones might make many more purchases or sales than usual? And that the systems are generating internal paper receipts that are clearly shown to the customer and non-electronically reconciled once per day, thereby enabling you to notice a problem before it's too late?

amber: Nope!

coral: Then as you say, it would be better for the world if your company didn't exist and wasn't about to charge into this new territory and poison it with a spectacular screwup.

amber: If I believed that… well, Mr. Topaz certainly isn't going to stop his project or let somebody else take over. It seems the logical implication of what you say you believe is that I should try to persuade the venture capitalists I know to launch a safer drone project with even more funding.

coral: Uh, I'm sorry to be blunt about this, but I'm not sure you have a high enough level of security mindset to identify an executive who's sufficiently better than you at it. Trying to get enough of a resource advantage to beat the insecure product to market is only half of your problem in launching a competing project. The other half of your problem is surpassing the prior rarity of people with truly deep security mindset, and getting somebody like that in charge and fully committed. Or at least get them in as a highly trusted, fully committed cofounder who isn't on a short budget of political capital. I'll say it again: an advisor appointed by VCs isn't nearly enough for a project like yours. Even if the advisor is a genuinely good security professional—

amber: This all seems like an unreasonably difficult requirement! Can't you back down on it a little?

coral: —the person in charge will probably try to bargain down reality, as represented by the unwelcome voice of the security professional, who won't have enough social capital to badger them into "unreasonable" measures. Which means you fail on full automatic.

amber: … Then what am I to do?

coral: I don't know, actually. But there's no point in launching another drone project with even more funding, if it just ends up with another Mr. Topaz put in charge. Which, by default, is exactly what your venture capitalist friends are going to do. Then you've just set an even higher competitive bar for anyone actually trying to be first to market with a secure solution, may God have mercy on their souls.

Besides, if Mr. Topaz thinks he has a competitor breathing down his neck and rushes his product to market, his chance of creating a secure system could drop by a factor of ten and go all the way from 0.0% to 0.0%.

amber: Surely my VC friends have faced this kind of problem before and know how to identify and hire executives who can do security well?

coral: … If one of your VC friends is Paul Graham, then maybe yes. But in the average case, NO.

If average VCs always made sure that projects which needed security had a founder or cofounder with strong security mindset—if they had the ability to do that even in cases where they decided they wanted to—the Internet would again look like a very different place. By default, your VC friends will be fooled by somebody who looks very sober and talks a lot about how terribly concerned he is with cybersecurity and how the system is going to be ultra-secure and reject over nine thousand common passwords, including the thirty-six passwords listed on this slide here, and the VCs will ooh and ah over it, especially as one of them realizes that their own password is on the slide. That project leader is absolutely not going to want to hear from me—even less so than Mr. Topaz. To him, I'm a political threat who might damage his line of patter to the VCs.

amber: I have trouble believing all these smart people are really that stupid.

coral: You're compressing your innate sense of social status and your estimated level of how good particular groups are at this particular ability into a single dimension. That is not a good idea.

amber: I'm not saying that I think everyone with high status already knows the deep security skill. I'm just having trouble believing that they can't learn it quickly once told, or could be stuck not being able to identify good advisors who have it. That would mean they couldn't know something you know, something that seems important, and that just… feels off to me, somehow. Like, there are all these successful and important people out there, and you're saying you're better than them, even with all their influence, their skills, their resources—

coral: Look, you don't have to take my word for it. Think of all the websites you've been on, with snazzy-looking design, maybe with millions of dollars in sales passing through them, that want your password to be a mixture of uppercase and lowercase letters and numbers. In other words, they want you to enter "Password1!" instead of "correct horse battery staple". Every one of those websites is doing a thing that looks humorously silly to someone with a full security mindset or even just somebody who regularly reads XKCD. It says that the security system was set up by somebody who didn't know what they were doing and was blindly imitating impressive-looking mistakes they saw elsewhere.

Do you think that makes a good impression on their customers? That's right, it does! Because the customers don't know any better. Do you think that login system makes a good impression on the company's investors, including professional VCs and probably some angels with their own startup experience? That's right, it does! Because the VCs don't know any better, and even the angel doesn't know any better, and they don't realize they're missing a vital skill, and they aren't consulting anyone who knows more. An innocent is impressed if a website requires a mix of uppercase and lowercase letters and numbers and punctuation. They think the people running the website must really care to impose a security measure that unusual and inconvenient. The people running the website think that's what they're doing too.

People with deep security mindset are both rare and rarely appreciated. You can see just from the login system that none of the VCs and none of the C-level executives at that startup thought they needed to consult a real professional, or managed to find a real professional rather than an empty suit if they went consulting. There was, visibly, nobody in the neighboring system with the combined knowledge and status to walk over to the CEO and say, "Your login system is embarrassing and you need to hire a real security professional." Or if anybody did say that to the CEO, the CEO was offended and shot the messenger for not phrasing it ever-so-politely enough, or the CTO saw the outsider as a political threat and bad-mouthed them out of the game.

Your wishful should-universe hypothesis that people who can touch the full security mindset are more common than that within the venture capital and angel investing ecosystem is just flat wrong. Ordinary paranoia directed at widely-known adversarial cases is dense enough within the larger ecosystem to exert widespread social influence, albeit still comically absent in many individuals and regions. People with the full security mindset are too rare to have the same level of presence. That's the easily visible truth. You can see the login systems that want a punctuation mark in your password. You are not hallucinating them.
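(On the XKCD strip Coral is alluding to, "Password Strength": it estimates roughly 28 bits of entropy for a single dictionary word dressed up with the mandated capitalization, digit, and punctuation substitutions, versus about 44 bits for four words drawn at random from a 2048-word list. The 44-bit figure is just $\log_2(2048^4) = 4 \times 11 = 44$; the gap of $2^{44-28} = 2^{16} \approx 65{,}000$ is how much easier the "compliant" password is to search, under the comic's assumptions about how people actually satisfy such composition rules.)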

amber: If that's all true, then I just don't see how I can win. Maybe I should just condition on everything you say being false, since, if it's true, my winning seems unlikely—in which case all victories on my part would come in worlds with other background assumptions.

coral: … is that something you say often?

amber: Well, I say it whenever my victory starts to seem sufficiently unlikely.

coral: Goodness. I could maybe, maybe see somebody saying that once over the course of their entire lifetime, for a single unlikely conditional, but doing it more than once is sheer madness. I'd expect the unlikely conditionals to build up very fast and drop the probability of your mental world to effectively zero. It's tempting, but it's usually a bad idea to slip sideways into your own private hallucinatory universe when you feel you're under emotional pressure. I tend to believe that no matter what the difficulties, we are most likely to come up with good plans when we are mentally living in reality as opposed to somewhere else. If things seem difficult, we must face the difficulty squarely to succeed, to come up with some solution that faces down how bad the situation really is, rather than deciding to condition on things not being difficult because facing them would be too hard.

amber: Can you at least try talking to Mr. Topaz and advising him on how to make things secure?

coral: Sure. Trying things is easy, and I'm a character in a dialogue, so my opportunity costs are low. I'm sure Mr. Topaz is trying to build secure merchant drones, too. It's succeeding at things that is the hard part.

amber: Great, I'll see if I can get Mr. Topaz to talk to you. But do please be polite! If you think he's doing something wrong, try to point it out more gently than the way you've talked to me. I think I have enough political capital to get you in the door, but that won't last if you're rude.

coral: You know, back in mainstream computer security, when you propose a new way of securing a system, it's considered traditional and wise for everyone to gather around and try to come up with reasons why your idea might not work. It's understood that no matter how smart you are, most seemingly bright ideas turn out to be flawed, and that you shouldn't be touchy about people trying to shoot them down. Does Mr. Topaz have no acquaintance at all with the practices in computer security? A lot of programmers do.

amber: I think he'd say he respects computer security as its own field, but he doesn't believe that building secure operating systems is the same problem as building merchant drones.

coral: And if I suggested that this case might be similar to the problem of building a secure operating system, and that this case creates a similar need for more effortful and cautious development, requiring both (a) additional development time and (b) a special need for caution supplied by people with unusual mindsets above and beyond ordinary paranoia, who have an unusual skill that identifies shaky assumptions in a safety story before an ordinary paranoid would judge a fire as being urgent enough to need putting out, who can remedy the problem using deeper solutions than an ordinary paranoid would generate as parries against imagined attacks?

If I suggested, indeed, that this scenario might hold generally wherever we demand robustness of a complex system that is being subjected to strong external or internal optimization pressures? Pressures that strongly promote the probabilities of particular states of affairs via optimization that searches across a large and complex state space? Pressures which therefore in turn subject other subparts of the system to selection for weird states and previously unenvisioned execution paths? Especially if some of these pressures may be in some sense creative and find states of the system or environment that surprise us or violate our surface generalizations?

amber: I think he'd probably think you were trying to look smart by using overly abstract language at him. Or he'd reply that he didn't see why this took any more caution than he was already using just by testing the drones to make sure they didn't crash or give out too much money.

coral: I see.

amber: So, shall we be off?

coral: Of course! No problem! I'll just go meet with Mr. Topaz and use verbal persuasion to turn him into Bruce Schneier.

amber: That's the spirit!

coral: God, how I wish I lived in the territory that corresponds to your map.

amber: Hey, come on. Is it seriously that hard to bestow exceptionally rare mental skills on people by talking at them? I agree it's a bad sign that Mr. Topaz shows no sign of wanting to acquire those skills, and doesn't think we have enough relative status for him to keep listening if we say something he doesn't want to hear. But that just means we have to phrase our advice cleverly so that he will want to hear it!

coral: I suppose you could modify your message into something Mr. Topaz doesn't find so unpleasant to hear. Something that sounds related to the topic of drone security, but which doesn't cost him much, and of course does not actually cause his drones to end up secure, because that would be all unpleasant and expensive. You could slip a little sideways in reality, and convince yourself that you've gotten Mr. Topaz to ally with you, because he sounds agreeable now. Your instinctive desire for the high-status monkey to be on your political side will feel like its problem has been solved. You can substitute the feeling of having solved that problem for the unpleasant sense of not having secured the actual drones; you can tell yourself that the bigger monkey will take care of everything now that he seems to be on your pleasantly-modified political side. And so you will be happy. Until the merchant drones hit the market, of course, but that unpleasant experience should be brief.

amber: Come on, we can do this! You've just got to think positively!

coral: … Well, if nothing else, this should be an interesting experience. I've never tried to do anything quite this doomed before.