It’s not that it’s wrong about the math. The math is correct. Rather, it’s wrong about the linkage between the math and reality: the math relies on assumptions that turn out not to hold in reality, and without those assumptions the math doesn’t apply.
This problem arises repeatedly in rationalism. For instance, Aumann’s agreement theorem is true of idealised agents communicating using an idealised notion of “information”: if those assumptions hold, the theorem holds. But it is not true of realistic agents who can’t even agree on what counts as “evidence”. It is therefore a useful rationality skill to be able to spot this anti-pattern, rather than naively assuming that “mathematically correct” is the same thing as “real-world correct”.
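For concreteness, here is the standard statement of the theorem (my paraphrase of the usual formalisation, not the commenter’s); notice how much idealisation the hypotheses carry:

```latex
% Aumann's agreement theorem, standard form (paraphrased).
Let $(\Omega, \mathcal{F}, P)$ be a probability space that both agents share
as a \emph{common prior}, and let $\Pi_1, \Pi_2$ be partitions of $\Omega$
encoding each agent's private information. Fix an event $A$ and write
\[
  q_i(\omega) = P\bigl(A \mid \Pi_i(\omega)\bigr), \qquad i = 1, 2.
\]
If at some state $\omega$ the values $q_1(\omega) = a$ and $q_2(\omega) = b$
are \emph{common knowledge}, then $a = b$.
```

Every italicised hypothesis (common prior, partitional information, common knowledge) is exactly the kind of assumption that fails for realistic agents.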
(There’s a fuller list of examples here: https://www.lesswrong.com/posts/suxvE2ddnYMPJN9HD/realism-about-rationality. It includes:

- The idea that there is a simple yet powerful theoretical framework which describes human intelligence and/or intelligence in general. (I don’t count brute-force approaches like AIXI for the same reason I don’t consider physics a simple yet powerful description of biology.)
- The idea that there is an “ideal” decision theory.
- The idea that AGI will very likely be an “agent”.
- The idea that Turing machines and Kolmogorov complexity are foundational for epistemology (the standard definition is sketched just after this list).
- The idea that, given certain evidence for a proposition, there’s an “objective” level of subjective credence which you should assign to it, even under computational constraints.
- The idea that Aumann’s agreement theorem is relevant to humans.
- The idea that morality is quite like mathematics, in that there are certain types of moral reasoning that are just correct.)
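To pin down the fourth item above, the usual definition (my gloss, not from the linked post):

```latex
% Kolmogorov complexity of a string $x$, relative to a fixed
% universal Turing machine $U$: the length of its shortest program.
K_U(x) = \min \{\, |p| \;:\; U(p) = x \,\}
```

$K_U$ is well defined up to an additive constant (the invariance theorem), but it is not computable, which illustrates the same gap between “mathematically correct” and “real-world correct” discussed above.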
The idea that decision time does not affect possible outcomes.
Though strictly it’s not an explicit idea (that X does not affect Y); it’s the absence of any recognition (that X does affect Y).
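A toy sketch of that point (entirely my own construction; the options, payoffs, and expiry times below are hypothetical): once deliberation consumes time and options expire, the set of achievable outcomes depends on when you decide.

```python
from dataclasses import dataclass


@dataclass
class Option:
    name: str
    payoff: float
    expires_at: int  # last time step at which this option can still be taken


# Hypothetical choice situation: deliberating longer forecloses options.
OPTIONS = [
    Option("early train", payoff=10.0, expires_at=1),
    Option("late train", payoff=6.0, expires_at=3),
    Option("walk", payoff=2.0, expires_at=100),
]


def available(options, decision_time):
    """Options still open if you commit at `decision_time`."""
    return [o for o in options if o.expires_at >= decision_time]


for t in range(5):
    open_now = available(OPTIONS, t)
    best = max(open_now, key=lambda o: o.payoff)
    print(f"decide at t={t}: options={[o.name for o in open_now]}, "
          f"best achievable payoff={best.payoff}")
```

Here the best achievable payoff falls from 10.0 to 2.0 as the decision is delayed; a decision theory that treats deliberation as free never registers that loss.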