There’s a weird cold war in software design, where everyone knows that they can use ‘security’ to win any argument, but we must all refrain from doing so, because that ratchet only goes one way.
The deal is that no one can ever argue against ‘security’, so you always win if you bring it up, but if you use that against me I’ll retaliate, and the project will fail (very very securely).
Also, unrelated: if you ever hear someone bragging about their amazing release process, just nod and ask them about the emergency release process. That’s what they ACTUALLY use.
When we get into discussions about security, the best tools I’ve found are:
Attack Trees: If someone wants to add a new security feature, they have to justify it by pointing at an attack that is not covered by other mitigations.
Cost/Risk analysis: Decide whether it is worth worrying about state-level actors, professional criminals, or script kiddies.
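To make the attack-tree idea concrete, here’s a minimal sketch (my own illustration, not anything from the thread): OR nodes let the attacker pick the cheapest branch, AND nodes force them to pay for every branch. A proposed mitigation only earns its keep if it raises the cost of the cheapest remaining attack; the node structure and costs below are made up.

```python
def attack_cost(node):
    """Return the minimum attacker cost to achieve this node's goal."""
    kind = node.get("kind", "leaf")
    if kind == "leaf":
        return node["cost"]
    child_costs = [attack_cost(c) for c in node["children"]]
    # OR: attacker takes the cheapest branch; AND: attacker must do all of them.
    return min(child_costs) if kind == "or" else sum(child_costs)

tree = {
    "kind": "or",  # attacker only needs one branch to succeed
    "children": [
        {"kind": "leaf", "cost": 50},          # e.g. phish one credential
        {"kind": "and", "children": [          # e.g. two-step technical attack
            {"kind": "leaf", "cost": 30},
            {"kind": "leaf", "cost": 40},
        ]},
    ],
}

print(attack_cost(tree))  # cheapest path is min(50, 30 + 40) = 50
```

The same number feeds the cost/risk question: if the cheapest attack costs 50 units of effort, you can ask whether a script kiddie, a professional criminal, or a state actor would bother, and skip mitigations against the branches that are already more expensive than your attackers can afford.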
Doesn’t work for me. I am the guy saying “we should not be doing X, because when you google for X, the first three results are all telling you that you definitely shouldn’t be doing X”, and everyone else is “dude, you already spent the whole day trying to solve this issue, just do it the easy way and move on to the other urgent high-priority tasks”.
Probably depends on the type of company, i.e. what the trade-off is between “doing the project faster” and “covering your ass” for your superiors. If they have little to lose by being late, but can potentially get sued for ignoring a security issue, then yes, this is really scary.
A possible solution is to tell the developer to just do it as fast as possible, but still in a perfectly secure way. Have daily meetings where you ask him, ironically, whether he is still working on that one simple task. But also make him sign a document saying you can deduct his yearly salary if he knowingly ignores a security issue. -- Now he has an incentive to shut up about security issues (to avoid providing proof that he knew about them).
“A possible solution is to tell the developer to just do it as fast as possible, but still in a perfectly secure way. ”
Thanks, Satan!
Ain’t no such thing.
Which particular corner of software do you have in mind?
All of it?
I mean, not seriously, but I’ve done 2 decades in the industry, at a total of 5 companies, and I see it everywhere.
Dev A: We should do this with a cloud based whatever.
Dev B: No, no, we should stick with our desktop app.
Bosses: Hmm…
Dev A (triumphantly): No, no, putting everything on the cloud is BEST PRAKTUS!!!!
Bosses: (Gasp!!)
Dev B: (in desperation, transgressing...) What about....security?
Bosses: (Double gasp)
Dev A: (disbelief) You wouldn’t.…
Dev B: A’s mad scheme exposes us to the viruses and also the worms.
Bosses: We agree with B!
Dev A: You realize, of course, this means war.
(Much later)
Dev B: I’m just saying that we could try ‘not’ encoding every string in pig latin, as most people would be able to decrypt it with minimal effort and it is massively increasing our translation budget.
Dev A: So you are in favor of making our software less secure?
Dev B: Hahaha, no, of course not. That was just a test. I’m a double-red-belt qualified expert in Security Singing from every App academy. I was just making sure that you were too.
I’ve seen elements of this combative view of security in a whole lot of companies, both in IT departments and in software-focused corporations. I’ve only seen a small fraction of such places (maybe a few hundred, directly and indirectly), but it seems rare for it to reach the strategic level (a cold war with each side hesitant to change the status quo) - most places are aware of the tradeoffs and able to make risk-estimate-based decisions. It helps a LOT to have developers do the initial risk and attack-value estimates.
I’ll agree about the emergency/patch deployment process being the one to focus on. There’s something akin to Gresham’s law in ops methodology—bad process drives out good.
“developers do the initial risk and attack value estimates”
You mean trust in-house devs? Heresy! If they were any good they wouldn’t work here! Only consultants can be relied upon.
heh. Consultants are the people who couldn’t meet our hiring bar, so we pay them twice as much to avoid any long-term responsibility for outcomes. They are useful at making sure our devs have asked the right questions and considered the right options. But the actual analysis and decision starts and ends on the team (and management) that’s going to actually run the system and deal with the consequences.
Not everywhere, and not as completely sane as I’m stating it—there’s a lot of truth in Dilbert. But if it’s too bad where you are, go elsewhere. There are good software teams and they’re hiring.
Do you have a reliable way to distinguish good teams from bad ones, before you sign the paperwork and put in your notice?
I’ve stayed in jobs I wanted to leave a couple of times now, because my team was a reasonably good team and I was afraid that elsewhere I would end up with Dilbert’s boss.
Not terribly reliable, but you can get a start by asking Joel’s questions (https://www.joelonsoftware.com/2000/08/09/the-joel-test-12-steps-to-better-code/) and Kate Mats’ questions (http://katemats.com/questions-for-candidates-to-ask-in-an-interview/).
More importantly, the overall software dev market is such that you can change 3-4 times in one year without really limiting your prospects, as long as you can explain what you’re looking for and why you think the next one is it. You probably can’t do that two years in a row, but trying a new job isn’t a life sentence, it’s an exploration.