The problem of the criterion and Gödel’s incompleteness theorems place limits on what can be known through observation and reason alone.
I’d argue that it doesn’t impose limits on what can be known in the general case, though. Observation and reason are two functions we use to find things out, much as Peano arithmetic is a formal system used to carry out arithmetic reasoning. But we now have stronger (i.e., larger) systems, such as ZFC, which proves statements Peano arithmetic cannot. We can build larger models of reason, and we can get better at observation, which suggests that incompleteness merely shows this: as long as we are unable to build an infinitely complex reasoning and observational system, something will lie outside the explanatory power of whatever system we do build.
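The point that any fixed system leaves something outside its explanatory power can be made concrete with the diagonal argument behind the halting problem, a close cousin of incompleteness. Here is a minimal runnable sketch, where `claims_halts` and `trouble` are hypothetical names chosen for illustration:

```python
# Diagonalization sketch: any fixed, total "decider" can be defeated
# by a program built from the decider itself. `claims_halts` stands in
# for a hypothetical halting decider; like any total decider, it must
# return some answer for every program.

def claims_halts(prog):
    """Naive stand-in decider: claims every program halts."""
    return True

def trouble():
    """Diagonal program: does the opposite of whatever the decider predicts."""
    if claims_halts(trouble):
        while True:  # loop forever, contradicting the prediction
            pass

# The decider predicts trouble() halts, yet trouble() would then loop
# forever, so the prediction is wrong. Any other total decider fails the
# same way on its own diagonal program -- and enlarging the decider just
# yields a new diagonal program outside its reach.
print(claims_halts(trouble))  # → True, but trouble() would never halt
```

The same pattern recurs at each level: a larger system answers questions the smaller one could not, but constructs its own unanswerable question in turn.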
That being said, I don’t think it’d be possible to build an infinitely large system of reasoning.
> I’d argue that it doesn’t impose limits on what can be known in the general case, though.
This is why I say “alone”, although...
> That being said, I don’t think it’d be possible to build an infinitely large system of reasoning.
Right, in a practical sense, even if we could in theory build a system with infinite hypercomputational power that could somehow work out how to know everything, such a system is physically impossible to build for a variety of reasons; even more mundane idealized reasoners like AIXI run into the same problem, since AIXI is itself uncomputable.