There is already a vast surplus of unused intelligence in the human race, so working on generalized AI is a waste of time (90%)
Edit: “waste of time” is careless, wrong, and a bit rude. I just mean that a working generalized AI would not make a major positive impact on humankind’s well-being. The research would be fun, so it’s not wasted time. My level of disagreement should be higher too, say ~95%.
Komponisto makes a strange assertion. The prior is not the base rate that “someone would commit murder”: there is a body, so we are already conditioning on a murder having occurred. A more appropriate prior is “someone who lived with a murder victim committed that murder”, and I’d guess that base probability is of the order of 0.1. Once we take into account that AK and MK weren’t in a relationship, that AK is female, and that there is very strong evidence someone else committed the murder, I’d agree that the probability drops. But these pieces of evidence don’t cancel out and leave us with the original prior: the final probability may end up higher or lower than it.
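The point that updating on evidence moves you away from the prior rather than cancelling back to it can be sketched in odds form. The likelihood ratios below are entirely hypothetical, chosen only to illustrate the arithmetic:

```python
# Odds-form Bayesian updating: posterior odds = prior odds x product of
# likelihood ratios. All numbers are hypothetical, for illustration only.

def update_odds(prior_prob, likelihood_ratios):
    """Multiply prior odds by each likelihood ratio; return posterior probability."""
    odds = prior_prob / (1 - prior_prob)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Prior: a housemate of the victim committed the murder (~0.1, as guessed above).
prior = 0.1

# Hypothetical likelihood ratios for the three pieces of evidence mentioned:
# no relationship with the victim, defendant is female, strong evidence
# pointing at someone else. Each is below 1, i.e. each favours innocence.
lrs = [0.5, 0.3, 0.05]

posterior = update_odds(prior, lrs)
print(f"{posterior:.4f}")  # prints 0.0008 -- far below the 0.1 prior
```

With these made-up numbers the posterior lands well below the prior; with likelihood ratios above 1 it would land above it. Either way, the evidence does not return you to the prior.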
Also, the claim that the “complexity penalty on the prosecution’s theory of the crime is enormous” may mean the case was flawed, but it is not evidence that she didn’t kill MK unless you are willing to give some weight to the conviction (at <0.001, I assume you are not). To put it another way: even if the prosecution is completely wrong, you cannot set the probability of guilt to 0. That would be like assuming AK is guilty because her parents criticized the Italian legal system.
Overall, I hope I am a bit more cautious about my own abilities than you are about yours. In the first half you explain why you, as a human being, cannot be trusted to be rational; then you set out your case. Why should I trust your rationality but not others’?