But even if our current formal understanding of reasoning is incomplete, we know it’s not going to resemble that. Yes, Bayesian updating will cause your probability estimates to fluctuate up and down a bit as you acquire more evidence, but the pieces of evidence aren’t fighting each other; they’re collaborating on determining what your map should look like and how confident you should be.
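To make that concrete, here is a minimal sketch (not from the conversation itself) of Bayes’ rule in odds form, with made-up likelihood ratios standing in for successive pieces of evidence: the running estimate wobbles up and down, but every piece feeds into the same final posterior rather than “fighting” the others.

```python
# Minimal sketch (not from the conversation): Bayes' rule in odds form.
# The likelihood ratios below are made-up numbers purely for illustration.

def update_odds(prior_odds, likelihood_ratio):
    """Posterior odds = prior odds * likelihood ratio P(e|H) / P(e|not-H)."""
    return prior_odds * likelihood_ratio

def odds_to_prob(odds):
    return odds / (1.0 + odds)

prior_prob = 0.5
odds = prior_prob / (1.0 - prior_prob)

# Hypothetical evidence stream: ratios above 1 favor H, below 1 favor not-H.
likelihood_ratios = [3.0, 0.5, 4.0, 0.8, 2.0]

for i, lr in enumerate(likelihood_ratios, start=1):
    odds = update_odds(odds, lr)
    print(f"after evidence {i}: P(H) = {odds_to_prob(odds):.3f}")

# The estimate fluctuates piece by piece, but the end point is fixed by the
# product of all the ratios: the evidence jointly determines the map rather
# than individual pieces "winning" or "losing" against each other.
```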
Yeah. So if one guy presents only evidence in favor, and the other guy presents only evidence against, they’re adversaries. One guy can state a theory, show that all existing evidence supports it, and thereby have “proved” it; then the other guy can state an even better theory, also supported by all the evidence but simpler, and thereby overturn that proof.
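As an illustration of that last move (again a hypothetical sketch, not anything stated in the conversation), a crude complexity prior in the spirit of Occam’s razor, weighting each theory by two to the minus its description length, makes the simpler of two equally well-supported theories win the posterior.

```python
# Minimal sketch (not from the conversation): comparing two hypothetical
# theories that both fit every observation equally well. Simplicity is scored
# as description length in bits, with prior probability 2**(-length), so when
# the likelihoods are equal the shorter theory ends up with the higher posterior.

import math

def posterior_weights(theory_lengths_bits, data_logliks):
    """Combine a complexity prior with each theory's log-likelihood of the data."""
    log_posts = {}
    for name, desc_len in theory_lengths_bits.items():
        log_prior = -desc_len * math.log(2)      # prior proportional to 2**(-length)
        log_posts[name] = log_prior + data_logliks[name]
    # Normalize into posterior probabilities.
    max_lp = max(log_posts.values())
    unnorm = {n: math.exp(lp - max_lp) for n, lp in log_posts.items()}
    z = sum(unnorm.values())
    return {n: w / z for n, w in unnorm.items()}

# Both made-up theories assign the same likelihood to all observed evidence,
# but "simple_theory" takes fewer bits to state.
theory_lengths_bits = {"simple_theory": 20.0, "complex_theory": 45.0}
data_logliks = {"simple_theory": -10.0, "complex_theory": -10.0}

print(posterior_weights(theory_lengths_bits, data_logliks))
# The simpler theory dominates the posterior, "overturning" the more
# complicated one even though all the evidence supports both.
```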
Why would we build AGI to have “pet truths”, to engage in rationalization rather than rationality, in the first place?
We wouldn’t do it on purpose!