I’m going to naively take Schneier’s side here, on the grounds that if I notice a bunch of small holes in my security, I have model uncertainty about those holes unexpectedly growing larger. I would try to establish some bounds on estimates of this new parameter by looking for high-profile instances. I wouldn’t be much reassured by the fact that most of my peers didn’t seem concerned if, after thinking about it for a few minutes, I couldn’t see how they were directly incentivized to be concerned. Is this flawed?
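To make the "establish some bounds" step concrete, here is a minimal sketch of the kind of back-of-the-envelope bounding I have in mind, using Laplace's rule of succession for a point estimate and the rule of three for an upper bound when no escalations have been observed. Every number in it is a hypothetical placeholder, not data from any actual survey of incidents.

```python
# Hypothetical bounding exercise: how often do "small holes" escalate
# into high-profile breaches? All figures below are invented examples.

def rule_of_succession(k: int, n: int) -> float:
    """Laplace point estimate of the per-period escalation probability
    after observing k high-profile escalations in n comparable periods."""
    return (k + 1) / (n + 2)

def rule_of_three_upper(n: int) -> float:
    """Approximate 95% upper bound on the per-period probability when
    zero escalations have been observed in n comparable periods."""
    return 3 / n

if __name__ == "__main__":
    # Hypothetical: 1 high-profile breach found in 50 organization-years
    # of peers with a similar profile of small holes.
    print(f"point estimate: {rule_of_succession(1, 50):.3f} per year")
    # Hypothetical: no such breach found in 200 organization-years.
    print(f"95% upper bound (0 of 200): {rule_of_three_upper(200):.3f} per year")
```

Even crude bounds like these would tell me whether "peers don't seem worried" is carrying any real evidential weight.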