The Kitty Genovese Equation
Someone’s in trouble. You can hear them from your apartment, but you can’t tell if any of your neighbors are already rushing down, or already calling the police. It’s time sensitive, and you’ve got to decide now: is it worth spending those precious minutes, or not?
Let’s define our variables:
Cost to victim of nobody helping: $V$
Cost to each bystander of intervening: $c$
Number of bystanders: $N$ (Since $V > c$, for $N = 1$ it’s always right to intervene.)
Analysis:
Suppose the bystanders all simultaneously decide whether to intervene or not, each intervening with probability $p$. Then expected world-utility is
$$U(p) = -(1-p)^N V - N p c.$$
Utility is maximized when $\frac{dU}{dp} = N(1-p)^{N-1} V - N c = 0$; in other words, when $(1-p)^{N-1} = c/V$. Let $q = 1 - p$. Then we have the optimal probability of not helping, $q^* = (c/V)^{1/(N-1)}$.
One interesting implication of our solution is that the probability that the victim isn’t helped, $(q^*)^N$, equals $(c/V)^{N/(N-1)}$. Since $N/(N-1)$ falls toward $1$ as $N$ grows, this means P(not helped) starts small at $(c/V)^2$ for $N = 2$ and rapidly rises to about $c/V$.
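As a sanity check on the derivation, here’s a short Python sketch (mine, not from the post; the function names are made up) that brute-forces the maximum of the expected utility $U(p) = -(1-p)^N V - Npc$ over a grid of $p$ values and compares it against the closed-form optimum $q^* = (c/V)^{1/(N-1)}$:

```python
# Numerically verify that q* = (c/V)^(1/(N-1)) maximizes expected utility.
# V: cost to the victim if nobody helps; c: cost of intervening; N: bystanders.

def expected_utility(p, V, c, N):
    """Expected world-utility when each of N bystanders helps with probability p."""
    return -((1 - p) ** N) * V - N * p * c

def optimal_not_help(V, c, N):
    """Closed-form optimal probability of NOT helping."""
    return (c / V) ** (1 / (N - 1))

V, c, N = 1_000_000, 1, 7  # c/V = 1e-6, seven bystanders

# Brute-force search over a fine grid of p values.
grid = [i / 100_000 for i in range(100_001)]
p_best = max(grid, key=lambda p: expected_utility(p, V, c, N))

q_star = optimal_not_help(V, c, N)
print(f"numeric optimum:     p = {p_best:.4f}")      # p = 0.9000
print(f"closed-form optimum: p = {1 - q_star:.4f}")  # q* = (1e-6)^(1/6) = 0.1, so p = 0.9
```

The two agree: with a million-to-one stakes ratio and seven bystanders, each should help 90% of the time.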
Examples:
Suppose intervening would cost a minute, and the victim would live 2 years longer on average if you intervened. Then $c/V$ is about one in a million, $10^{-6}$. Once you get to seven bystanders, it’s optimal to not intervene 10% of the time. $2^{20}$ is about a million, so with 21 bystanders it’s optimal for each to take a 50-50 shot at helping.
If $V/c$ is a mere $10$, you get there six times as fast: a 10% chance to not help at N=2, 50% around N = 4-5, and a whopping 75% chance around N=9.
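These numbers are easy to reproduce. A minimal Python sketch (the function name is my own) just evaluates $q^* = (c/V)^{1/(N-1)}$ at the values above:

```python
def optimal_not_help(V_over_c, N):
    """Optimal probability of NOT helping: q* = (c/V)^(1/(N-1))."""
    return (1 / V_over_c) ** (1 / (N - 1))

# V/c = one million (a one-minute cost vs. roughly two years of life):
print(optimal_not_help(1e6, 7))   # ~0.1: 10% chance of not intervening at N = 7
print(optimal_not_help(1e6, 21))  # ~0.5: roughly a 50-50 shot at N = 21

# V/c = a mere 10:
print(optimal_not_help(10, 2))    # 0.1 already at N = 2
print(optimal_not_help(10, 4), optimal_not_help(10, 5))  # ~0.46 and ~0.56: 50% around N = 4-5
print(optimal_not_help(10, 9))    # ~0.75 at N = 9
```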
Application:
This was inspired by friends’ varied willingness to intervene in public disputes, and my own experience worrying about how to respond to potential crises around me. Of course, in real life we have a lot of uncertainty around $V$ and around other people’s $c$’s, and we can often wait and observe whether someone goes to help. For situations where decisions are pretty simultaneous, though, it would be interesting to see how well people’s responses line up with the curve.
As Raemon says, knowing that others are making correct inferences about your behavior means you can’t relax. No watching soap operas, say, because that’s an indicator of being less likely to repay your loans, and your premia go up. There’s an ethos of slack, decisionmaking-has-costs, strategizing-has-costs that Zvi’s explored in his previous posts, and that’s part of how I’m interpreting what he’s saying here.
You don’t want to spend your precious time on blackmailing random jerks, probably. So at best, now some of your income goes toward paying a white-hat blackmailer to fend off the black-hats. (Unclear what the market for that looks like. Also, black-hatters can afford to specialize in unblackmailability; it comes up much more often for them than the average person.) You’re right, though, that it’s possible to have an equilibrium where deterrence dominates and the black-hatting incentives are low, in which case maybe the white-hat fees are low and now you have a white-hat deterrent. So this isn’t strictly bad, though my instinct is that it’s bad in most plausible cases.
That’s a fair point! A couple of counterpoints: I think risk-aversion of ‘terrorists’ helps. There’s also a point about second-order effects again; the easier it is to blackmail/extort/etc., the more people can afford to specialize in it and reap economies of scale.
Eh, sure. My guess is that Zvi is making a statement about norms as they are likely to exist in human societies with some level of intuitive-similarity to our own. I think the useful question here is like “is it possible to instantiate norms s.t. norm-violations are ~all ethical-violations”. (we’re still discussing the value of less privacy/more blackmail, right?) No-rule or few-rule communities could work for this, but I expect it to be pretty hard to instantiate them at large scale. So sure, this does mean you could maybe build a small local community where blackmail is easy. That’s even kind of just what social groups are, as Zvi notes; places where you can share sensitive info because you won’t be judged much, nor attacked as a norm-violator. Having that work at super-Dunbar level seems tough.