I’d like to add that the high “burden of proof” in this case comes from both
(1) the low prior probability of guilt in this case; and
(2) the high probability threshold that the court generally demands (“beyond a reasonable doubt”) before it will convict the defendant. If we wanted to bring in decision theory, we would assign a large disutility to a wrongful conviction, and that disutility determines what “likely” and “unlikely” mean in this context.
The former dwarfs the latter.
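To make the decision-theory point concrete, here is a minimal sketch of how relative disutilities fix the conviction threshold. The costs below are illustrative assumptions, not claims about what any actual court uses:

```python
# Sketch: the probability threshold implied by the relative disutility of errors.
# The cost numbers are hypothetical, for illustration only.
def conviction_threshold(cost_false_conviction, cost_false_acquittal):
    # Convict when the expected loss of convicting is below that of acquitting:
    #   (1 - p) * cost_false_conviction < p * cost_false_acquittal
    # which rearranges to:
    #   p > cost_false_conviction / (cost_false_conviction + cost_false_acquittal)
    return cost_false_conviction / (cost_false_conviction + cost_false_acquittal)

# Blackstone-style ratio: wrongful conviction 10x as bad as wrongful acquittal
print(conviction_threshold(10, 1))  # ~0.909: "convict only above ~91% confidence"
```

The steeper you make the cost ratio, the closer the threshold crawls toward 1, which is one way to cash out “beyond a reasonable doubt” as a number.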
The prior for any given person being guilty is on the ‘one in a million’ order of magnitude, but for the marginal cases that actually reach trial, the courts are probably working with priors closer to 1 in 10 (a wild-ass guess). And if you translate “beyond reasonable doubt” into 99% or 99.9%, that still might come down to 90% once you account for overconfidence.
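The odds arithmetic behind the comparison above can be run directly. This is a rough sketch using the numbers from the comment (a one-in-a-million prior and a 99% threshold, both stipulated rather than measured):

```python
# How strong must the evidence be (as a Bayes factor) to push a given
# prior past a given posterior threshold? Inputs are illustrative.
def required_likelihood_ratio(prior, threshold):
    prior_odds = prior / (1 - prior)
    posterior_odds = threshold / (1 - threshold)
    # Bayes: posterior_odds = likelihood_ratio * prior_odds
    return posterior_odds / prior_odds

# One-in-a-million prior, 99% conviction threshold:
print(required_likelihood_ratio(1e-6, 0.99))  # on the order of 1e8
# A 1-in-10 prior (marginal trial cases) needs far weaker evidence:
print(required_likelihood_ratio(0.1, 0.99))  # on the order of 1e3
```

The gap between those two required likelihood ratios is exactly why the low prior, not the threshold, does most of the work.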
From looking at this example, it certainly doesn’t look like the algorithm used by the court system achieves an innocent-to-guilty ratio anywhere near as low as 1 in a million on the marginal cases.
It’s a bit of an ‘Einstein’s arrogance’ thing.