I recently took a class in computer security and did pretty well; here are some thoughts.

I was hoping computer security would be a coherent, interconnected body of knowledge with important, deep ideas (like classes I’ve taken on databases, AI, linear algebra, etc.). This is kinda true for cryptography, but computer security as a whole is a hodgepodge of unrelated topics. The class was a mixture of technical information about specific flaws that can occur in specific systems (buffer overflows in C programs, XSS attacks in web applications, etc.) and “platitudes”. (Here is a list of security principles taught in the class. Principles Eliezer touched on: “least privilege”, “default deny”, “complete mediation”, “design in security from the start”, “conservative design”, and “proactively study attacks”. There might be a few more principles here that aren’t on the list too—something about doing a post mortem and trying to generate preventative measures that cover the largest possible classes that contain the failure, and something about unexpected behavior being evidence that your model is incorrect and there might be a hole you don’t see. Someone mentioned wanting a summary of Eliezer’s post—I would guess the insight density of these notes is higher, although the overlap with Eliezer’s post is not perfect.)
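(To make "specific flaws in specific systems" concrete, here is a minimal sketch of the classic C buffer overflow. It's purely illustrative and not taken from the course materials.)

```c
#include <string.h>

/* Classic stack buffer overflow: the buffer holds 16 bytes, but strcpy
 * copies however many bytes the caller supplies, overwriting adjacent
 * stack memory (possibly the saved return address). */
void greet(const char *name) {
    char buf[16];
    strcpy(buf, name);  /* no bounds check -- vulnerable */
    /* safer: strncpy(buf, name, sizeof(buf) - 1); buf[sizeof(buf) - 1] = '\0'; */
}
```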
I think it might be harmful for a person interested in improving their security skills to read this essay, because it presents security mindset as an innate quality you either have or you don’t have. I’m willing to believe qualities like this exist, but the evidence Eliezer presents for security mindset being one of them seems pretty thin—and yes, he did seem overly preoccupied with whether one “has it” or not. (The exam scores in my security class followed a unimodal curve, not the bimodal curve you might expect to see if computer security ability is something one either has or doesn’t have.) The most compelling evidence Eliezer offered was this quote from Bruce Schneier:
In general, I think it’s a particular way of looking at the world, and that it’s far easier to teach someone domain expertise—cryptography or software security or safecracking or document forgery—than it is to teach someone a security mindset.
However, it looks to me as though Schneier mostly teaches classes in government and law, so I’m not sure he has much experience to base that statement on. And in fact, he immediately follows his statement with
Which is why CSE 484, an undergraduate computer-security course taught this quarter at the University of Washington, is so interesting to watch. Professor Tadayoshi Kohno is trying to teach a security mindset.
(Why am I reading this post instead of the curricular materials for that class?)
I didn’t find the other arguments Eliezer offered very compelling. For example, the bit about requiring passwords to have numbers and symbols seems to me like a combination of (a) checking for numbers & symbols is much easier than trying to measure password entropy; (b) the target audience for the feature is old people who want to use “password” as their bank password, not XKCD readers; and (c) as a general rule, a programmer’s objective is to get the feature implemented, and maybe cover their ass, rather than devise a theoretically optimal solution. I’m not at all convinced that security mindset is something qualitatively separate from ordinary paranoia—I would guess that ordinary paranoia and extraordinary paranoia exist on a continuum.
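To illustrate (a): the character-class check is a couple of lines, while even a crude entropy estimate takes noticeably more thought (and still misses dictionary words, reuse, and so on). This is just my own sketch, not anything from the post or the class:

```c
#include <ctype.h>
#include <math.h>
#include <stdbool.h>
#include <string.h>

/* The easy check: does the password contain at least one digit and one symbol? */
bool has_digit_and_symbol(const char *pw) {
    bool digit = false, symbol = false;
    for (; *pw; pw++) {
        if (isdigit((unsigned char)*pw)) digit = true;
        else if (ispunct((unsigned char)*pw)) symbol = true;
    }
    return digit && symbol;
}

/* A crude entropy estimate: length * log2(size of the character pool used).
 * Anything not a letter or digit is lumped in as a "symbol" here, and this
 * still says nothing about real-world guessability. */
double rough_entropy_bits(const char *pw) {
    bool lower = false, upper = false, digit = false, symbol = false;
    for (const char *p = pw; *p; p++) {
        if (islower((unsigned char)*p)) lower = true;
        else if (isupper((unsigned char)*p)) upper = true;
        else if (isdigit((unsigned char)*p)) digit = true;
        else symbol = true;
    }
    int pool = lower * 26 + upper * 26 + digit * 10 + symbol * 32;
    return pool ? strlen(pw) * log2((double)pool) : 0.0;
}
```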
Anyway, as a response to Eliezer’s fixed-mindset view, here’s a quick attempt to granularize security mindset. I don’t feel very qualified to do this, but it wouldn’t surprise me if I’m more qualified than Eliezer.
First, I think just spending time thinking about security goes a long way. Eliezer characterizes this as insufficient “ordinary paranoia”, but in practice I suspect ordinary paranoia captures a lot of low-hanging fruit. The reason software has so many vulnerabilities is not that ordinary paranoia is insufficient; it’s that most organizations don’t even have ordinary paranoia. Why is that? Bruce Schneier writes:
For years I have argued in favor of software liabilities. Software vendors are in the best position to improve software security; they have the capability. But, unfortunately, they don’t have much interest. Features, schedule, and profitability are far more important. Software liabilities will change that. They’ll align interest with capability, and they’ll improve software security.
To give some concrete examples: one company I worked at experienced a major user data breach just before I joined. This caused a bunch of negative press, and they hired expensive consultants to look over all of their code to try and prevent future vulnerabilities. So I expect that after this event, they became more concerned with security than the average software company—and their level of concern was definitely nonzero. However, they didn’t actually do much to change their culture or internal processes after the breach. No security team got hired, there weren’t regular seminars teaching engineers about computer security, and in fact one of the most basic things which could have been done to prevent the exact same vulnerability from occurring in the future did not get done. (I noticed this after one of the company’s most senior engineers complained to me about it.) Unless you or the person who reviewed your code happened to see the hole your code created, that hole would get deployed.

At another company I worked for, the company did have a security team. One time, our site fell prey to a cross-site scripting attack because… get this… one of the engineers used the standard print function, instead of the special sanitized print function that you are supposed to use in all user-facing webpages. This is something that could have easily been prevented by simply searching the entire codebase for every instance of the standard print function to make sure its usage was kosher—again, a basic thing that didn’t get done.

Since security breaches take the form of rare bad events, the institutional incentives for preventing them are not very good. You won’t receive a promotion for silently preventing an attack. You probably won’t get fired for enabling an attack. And I think Eliezer himself has written about the cognitive biases that cause people to pay less attention than they should to rare bad events. Overall, software development culture just undervalues security. I personally think the computer security class I took should be mandatory to graduate, but currently it isn’t.
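To make the print-function incident concrete: the bug amounts to writing user input into a page verbatim instead of escaping it first. Here is a minimal C sketch of the idea, using a hypothetical html_escape_print as the "sanitized print"; the actual codebase obviously wasn't structured like this.

```c
#include <stdio.h>

/* Hypothetical "sanitized print": escape the characters that let user
 * input break out of an HTML text context. */
void html_escape_print(const char *s) {
    for (; *s; s++) {
        switch (*s) {
            case '&':  fputs("&amp;", stdout);  break;
            case '<':  fputs("&lt;", stdout);   break;
            case '>':  fputs("&gt;", stdout);   break;
            case '"':  fputs("&quot;", stdout); break;
            case '\'': fputs("&#39;", stdout);  break;
            default:   fputc(*s, stdout);       break;
        }
    }
}

int main(void) {
    const char *user_input = "<script>alert('xss')</script>";
    printf("%s\n", user_input);    /* the bug: raw output straight into the page */
    html_escape_print(user_input); /* what the sanitized print should do instead */
    putchar('\n');
    return 0;
}
```

And the basic audit I had in mind is just grepping for every call to the raw print in code that renders user data.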
I think Eliezer does a disservice by dismissing security principles as “platitudes”. Security principles lie in a gray area between a checklist and heuristics for breaking creative blocks. If I were serious about security, I would absolutely curate a list of “platitudes” along with concrete examples of the platitudes being put into action, so I could read over the list & increase their mental availability any time I needed to get my creative juices flowing. (You could start the curation process by looking through loads of security lecture notes/textbooks and trying to consolidate a master list of principles. Then as you come across new examples of exploits, generate heuristics that cover the largest possible classes that contain the failure, and add them to your list if they aren’t already there.) In fact, I’ll bet a lot of what security professionals do is subconsciously internalize a bunch of flaw-finding heuristics by seeing a bunch of examples of security flaws in practice. (Eurisko is great, therefore heuristics are great?)
I also think there’s a motivational aspect here—if you take delight in finding flaws in a system, that reinforcer is going to generate motivated cognition to find a flaw in any system you are given. If you enjoy playfully thinking outside the box and impudently breaking a system’s assumptions, that’s helpful. (Maybe this part is hard to teach because impudence is hard to teach due to the teacher/student relationship—similar to how critical thinking is supposed to be hard to teach?)
But that’s not everything you need. Although computer security sounds glamorous, ultimately software vulnerabilities are bugs, and they are often very simple bugs. But unlike a user interface bug, which you can discover just by clicking around, the easiest way to identify a computer security bug is generally to see it while you are reading code. Debugging code by reading it is actually very easy to practice. Next time you are programming something, write the entire thing slowly and carefully, without doing any testing. Read over everything, try to find & fix all the bugs, and then start testing and see how many bugs you missed. In my experience, it’s possible to improve at this dramatically through practice. And it’s useful in a variety of circumstances: data analysis code which could silently do the wrong thing, backend code that’s hard to test, and impressing interviewers when you are coding on a whiteboard. Some insights that made me better at this:

1. It’s easiest to go slow and get things right on the first try, vs rushing through the initial coding process and then trying to identify flaws afterwards. Once I think I know how the code works, my eyes inevitably glaze over and I have trouble forcing myself to think things through as carefully as is necessary. To identify flaws, you need to have a literalist mindset and look for deltas between your model of how things work and how they actually work, which can be tedious.
2. Get into a state of deep, relaxed concentration, and manage your working memory carefully.
3. Keep things simple, and structure the code so it’s easy to prove its correctness to yourself in your mind.
4. Try to identify and double-check assumptions you’re making about how library methods and language features work.
5. Be on the lookout for specific common issues such as off-by-one errors.
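To illustrate point 5, here is the sort of off-by-one I have in mind (a toy example of my own):

```c
#include <stdio.h>

int main(void) {
    int scores[5] = {90, 85, 70, 95, 60};
    int total = 0;

    /* Off-by-one: "<=" walks one element past the end of the array,
     * reading scores[5], which doesn't exist. */
    for (int i = 0; i <= 5; i++)
        total += scores[i];

    /* Correct bound: i < 5, or better, i < sizeof(scores)/sizeof(scores[0]). */
    printf("%d\n", total);
    return 0;
}
```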
Taking a “meta” perspective for a moment, I noticed over the course of writing this comment that security mindset requires juggling 3 different moods which aren’t normally considered compatible. There’s paranoia. There’s the playful, creative mood of breaking someone else’s ontology. And there’s mindful diligence. (I suspect that if you showed a competent programmer the code containing Heartbleed, and told them “there is a vulnerability here”, they’d be able to find it given some time. The bug probably only lasted as long as it did because no one had taken a diligent look at that section of the code.) So finding a way to balance these 3 could be key.
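For readers who haven't looked at it: Heartbleed boiled down to trusting a length field supplied by the peer and copying that many bytes back out without checking it against how much data had actually arrived. Here is a deliberately simplified sketch of the pattern; it is not the actual OpenSSL code:

```c
#include <stdlib.h>
#include <string.h>

/* Simplified sketch of the Heartbleed pattern (not the real OpenSSL code).
 * 'payload_len' comes from the peer's heartbeat message; 'received_len' is
 * how many bytes of payload actually arrived. */
unsigned char *build_heartbeat_response(const unsigned char *payload,
                                        size_t payload_len,
                                        size_t received_len) {
    unsigned char *resp = malloc(payload_len);
    if (!resp) return NULL;

    /* The bug: copy payload_len bytes without checking it against
     * received_len, so the copy reads past the real payload into whatever
     * adjacent memory happens to contain (keys, passwords, etc.). */
    memcpy(resp, payload, payload_len);

    /* The fix amounts to one check before copying:
     * if (payload_len > received_len) { discard the request; } */
    return resp;
}
```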
Anyway, despite the fixed mindset this post promotes, it does do a good job of communicating security mindset IMO, so it’s not all bad.
“manage your working memory carefully” <--- This sounds like a potentially important skill that I wasn’t aware of. Please could you elaborate?
I wrote some more about that here.
Science historian James Gleick thinks that part of what separates geniuses from ordinary people is their ability to concentrate deeply. If that’s true, it seems plausible that this is a factor which can be modified without changing your genes. Remember, a lot of heuristics and biases exist so our brain can save on calories. But although being lazy might have saved precious calories in the ancestral environment, in the modern world we have plenty of calories and this is no longer an issue. (I do think I eat more frequently when I’m thinking really hard about something.) So in the same way it’s possible to develop the discipline needed to exercise, I think it’s possible to develop the discipline needed to concentrate deeply, even though it seems boring at first. See also.
Shinzen Young writes about how meditation made him better at math in The Science of Enlightenment:

I flunked all my math courses in high school—which caused a lot of static with my parents. Later, after practicing meditation for many years, I tried to learn math again. I discovered that as a result of my meditation practice, I had concentration skills that I didn’t have before. Not only was I able to learn math, I actually got quite good at it—good enough to teach it at the college level.