Think of it from the perspective of Artificial Intelligence. Suppose you were writing a computer program that would, if it heard a burglar alarm, conclude that the house had probably been robbed. Then someone says, “If there’s an earthquake, then you shouldn’t conclude the house was robbed.” This is a classic problem in Bayesian networks with a whole deep solution to it in terms of causal graphs and probability distributions… but suppose you didn’t know that.
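The "deep solution" can be glimpsed numerically. Here is a minimal sketch of explaining-away in the classic burglary/earthquake/alarm network, computed by brute-force enumeration; the conditional probability numbers are illustrative assumptions, not anything from the text above:

```python
from itertools import product

# Assumed (illustrative) priors and conditional probabilities.
P_B = 0.001   # P(burglary)
P_E = 0.002   # P(earthquake)

# P(alarm | burglary, earthquake)
P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}

def joint(b, e, a):
    """Joint probability P(b, e, a) under the network above."""
    pb = P_B if b else 1 - P_B
    pe = P_E if e else 1 - P_E
    pa = P_A[(b, e)] if a else 1 - P_A[(b, e)]
    return pb * pe * pa

def posterior_burglary(earthquake=None):
    """P(burglary | alarm=True [, earthquake]) by enumeration."""
    num = den = 0.0
    for b, e in product([True, False], repeat=2):
        if earthquake is not None and e != earthquake:
            continue
        p = joint(b, e, True)  # condition on the alarm sounding
        den += p
        if b:
            num += p
    return num / den

p_alarm = posterior_burglary()
p_alarm_quake = posterior_burglary(earthquake=True)
print(f"P(burglary | alarm)             = {p_alarm:.3f}")
print(f"P(burglary | alarm, earthquake) = {p_alarm_quake:.4f}")
```

Hearing the alarm raises the probability of burglary dramatically; then learning there was an earthquake drives it back down to nearly the prior, because the earthquake "explains away" the alarm. This is exactly the inference the hypothetical program above fails to make if it only knows the rule "alarm implies probably robbed."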
A tangential remark:
Perhaps it’s not surprising that the solution to this problem is “deep”, considering that the human brain fails to reliably implement it. Indeed, this is basically the bug responsible for the Amanda Knox case, with “Rudy Guede did it” being analogous to “there was an earthquake”.