There’s a mathematical inevitability about this. If you split a continuum into groups, there will always be a dividing line. Move a borderline case by epsilon, and it crosses the line.
Whether or not they’re guilty may be beyond reasonable doubt, without it being beyond reasonable doubt that it’s beyond reasonable doubt.
For example, suppose we define “reasonable doubt” as a credence of guilt below 99%. If you think there’s a 99% chance the defendant is guilty, you’re pretty sure they’re guilty, but there’s about a 50% chance of a conviction, depending on whether the jury’s estimate lands slightly above or slightly below the line.
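The coin-flip claim can be sanity-checked with a quick simulation. The Gaussian noise model for jury estimates, and its spread, are illustrative assumptions, not anything from the thread:

```python
import random

def conviction_probability(true_credence, threshold=0.99, noise=0.005,
                           trials=100_000, seed=0):
    """Estimate how often a jury convicts when its collective credence
    is the true credence plus small random noise.

    The Gaussian noise and its width are illustrative assumptions,
    not measured quantities.
    """
    rng = random.Random(seed)
    convictions = sum(
        1 for _ in range(trials)
        if true_credence + rng.gauss(0, noise) >= threshold
    )
    return convictions / trials

# With credence sitting exactly at the 99% threshold, symmetric noise
# makes conviction roughly a coin flip; well below the line, it
# essentially never happens.
p_at_line = conviction_probability(0.99)
p_below = conviction_probability(0.95)
```

The exact noise model doesn’t matter much; any symmetric noise centered on the threshold yields the ~50% result.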
It seems that a reasonable assessment of P(guilty) should only rarely fall so close to the cut-off line that there could be serious doubt about the jury’s verdict. So if this kind of suspense is common, it suggests that the probability assessments held by different participants in a case are all over the place.
The assessments only have to differ enough that there’s a significant chance of one juror saying it isn’t likely enough. Given how bad people are with extreme probabilities, this wouldn’t be that surprising.
Also, nobody has ever said where the cutoff is. Two jury members could both think there’s a 97% chance of guilt, and one thinks the cutoff is 95%, while the other thinks it’s 99%, and they’ll disagree on whether the defendant should be considered innocent or guilty.
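The disagreement described above is purely about thresholds, not about the evidence. A two-line sketch, using the comment’s made-up numbers, makes that explicit:

```python
def votes_to_convict(credence, threshold):
    """A juror votes to convict iff their credence of guilt meets their
    personal threshold for 'beyond reasonable doubt'."""
    return credence >= threshold

# Both jurors agree there's a 97% chance of guilt...
shared_credence = 0.97
# ...but their personal cutoffs differ, so their votes differ.
juror_a = votes_to_convict(shared_credence, threshold=0.95)  # True
juror_b = votes_to_convict(shared_credence, threshold=0.99)  # False
```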
You are right. The suspense isn’t about the probability alone, but about the probability times the value at stake, where the value is the sentence or the acquittal. Something big is at stake here.
When the maximum sentence is a $100 fine, there is no great suspense, no matter how uncertain the verdict is.
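One crude way to make “probability times value” concrete is to measure suspense as the standard deviation of the defendant’s outcome. Both the metric and the dollar values below are illustrative choices, not standard definitions:

```python
import math

def suspense(p_convict, stakes):
    """Quantify 'suspense' as the standard deviation of the defendant's
    outcome, treating conviction as losing `stakes` and acquittal as
    losing nothing. (An illustrative metric, not a standard one.)"""
    return stakes * math.sqrt(p_convict * (1 - p_convict))

# A coin-flip verdict over a $100 fine carries little suspense...
low = suspense(0.5, 100)          # 50.0
# ...while the same uncertainty over a long sentence (valued here,
# arbitrarily, at $10M) carries a lot.
high = suspense(0.5, 10_000_000)  # 5000000.0
```

With identical probabilities, only the stakes change the magnitude, which is the commenter’s point.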
This is such an important counterpoint that I am disappointed that Dawkins failed to see it and that none of the comments (at least on the first page) on the original article mention it. On the plus side, this gives me evidence (although slight, since there is selection bias) that LW can go beyond traditional rationalist movements.
Why is it a counterpoint? Which (implicit) conclusion of Dawkins’s does it contradict?
Dawkins starts from the premise that there is high uncertainty about the outcome of the case, and concludes that there is high uncertainty about the guilt, which does not follow. Even if it is obvious to everyone that the defendant is very probably guilty, it may be far from obvious exactly how high the jury will estimate the probability of innocence, and where they will set the bar for reasonable doubt.*
*It has never been clear to me where this bar should be. If I put my credence of guilt at g, should I convict when g > .9? .99? .999? Should I say “to hell with the idea of reasonable doubt anyway; I’m going to estimate for myself appropriate relative weights to attach to the outcomes 1) an innocent man spends a lifetime in prison, and 2) a serial murderer is unleashed upon the public”? I suppose, since the lawyers and judge are unlikely to provide me with a credence threshold to use, the most sensible thing to do would be to derive one myself.
(This sort of renormalization problem shows up a lot when trying to set up decision problems where baselines are unknown.)
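The “derive a threshold myself” idea in the footnote has a standard decision-theoretic form: convict exactly when the expected loss of convicting is below the expected loss of acquitting. A minimal sketch, where the relative weights are purely illustrative inputs, not recommendations:

```python
def conviction_threshold(loss_convict_innocent, loss_acquit_guilty):
    """Credence g* above which convicting has the lower expected loss.

    Expected loss of convicting:  (1 - g) * loss_convict_innocent
    Expected loss of acquitting:  g * loss_acquit_guilty
    Convict when the first is smaller, i.e. when
        g > loss_convict_innocent /
            (loss_convict_innocent + loss_acquit_guilty)
    """
    return loss_convict_innocent / (loss_convict_innocent + loss_acquit_guilty)

# If wrongly imprisoning an innocent person is weighted 9x worse than
# releasing a guilty one, the threshold is 0.9; at 99x, it is 0.99.
t_moderate = conviction_threshold(9, 1)   # 0.9
t_strict = conviction_threshold(99, 1)    # 0.99
```

This also shows why the “renormalization” worry bites: the threshold depends entirely on the chosen loss ratio, which nothing in the courtroom pins down.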