I think the answer to the question “are there irreducibly complex statistical models?” is yes.
I agree that there are some sources of irreducible complexity, like ‘truly random’ events.
To me, the field of cognition does not pattern-match to ‘irreducibly complex’, but more to ‘We don’t have good models. Yet, growth mindset’. So, unless you have some patterns where you can prove that they are irreducible, I’ll stick with my priors, I guess. The example you gave me,
For a very simple example, if you’re trying to fit a continuous curve based on a finite number of data points, you can make the problem arbitrarily hard with functions that are nowhere differentiable.
falls squarely in the ‘our models are bad’ category; e.g., the Weierstrass function can be stated pretty compactly with analytic formulas.
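To make the compactness point concrete, here is a minimal sketch (my own illustration, not anything from this exchange) of a truncated Weierstrass partial sum; the parameter choices and the truncation depth are arbitrary:

```python
import math

def weierstrass(x, a=0.5, b=7, n_terms=12):
    """Partial sum of W(x) = sum_{n>=0} a**n * cos(b**n * pi * x).

    With 0 < a < 1 and a*b > 1 + 3*pi/2, the limit function is
    continuous everywhere but differentiable nowhere. The truncation
    here is purely for illustration."""
    return sum(a**n * math.cos(b**n * math.pi * x) for n in range(n_terms))
```

The entire ‘pathological’ object fits in one line of analysis, which is exactly the sense in which a model can in principle describe it compactly.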
But also, of course I can’t prove the non-existence of such irreducible, important processes in the brain.
And my answer there is more like “I don’t know, but I could believe so.”
Ah, how you think about that example helps clarify. I wasn’t even thinking about the possibility of an AI that could “learn” the analytic form of the Weierstrass function; I was thinking about the fact that trying to fit a polynomial to it would be arbitrarily hard.
Obviously “not modelable by ANY means” is a much stronger claim than “if you use THESE means, then your model needs a lot of epicycles to be close to accurate.” (Analyst’s mindset vs. computer scientist’s mindset; the computer scientist’s typical class of “possible algorithms” is way broader. I’m more used to thinking like an analyst.)
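The “arbitrarily hard with polynomials” side can be seen numerically. A rough sketch (assuming NumPy is available; the parameters, sample count, and degrees are my own arbitrary choices), fitting least-squares polynomials of increasing degree to samples of a truncated Weierstrass series:

```python
import numpy as np

def weierstrass(x, a=0.5, b=7, n_terms=12):
    # Truncated Weierstrass series; the limit (n_terms -> infinity)
    # is continuous everywhere and differentiable nowhere.
    return sum(a**n * np.cos(b**n * np.pi * x) for n in range(n_terms))

x = np.linspace(0.0, 1.0, 2001)
y = weierstrass(x)

def rms_fit_error(degree):
    # Least-squares fit in the Chebyshev basis, which is far better
    # conditioned than raw monomials at high degree.
    c = np.polynomial.chebyshev.chebfit(x, y, degree)
    resid = np.polynomial.chebyshev.chebval(x, c) - y
    return float(np.sqrt(np.mean(resid**2)))

for d in (5, 20, 80):
    print(f"degree {d:3d}: RMS error {rms_fit_error(d):.4f}")
```

The error does shrink as the degree grows (a truncated series is, strictly speaking, still smooth), but slowly: the roughness lives at ever finer scales, so the required degree blows up rapidly as the tolerance shrinks — that’s the “lots of epicycles” regime, even though no single epicycle is mysterious.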
I think you and I are pretty close to agreement at this point.
Fair enough.
Yes, I completely agree with the weaker formulation “irreducible using only THESE means”, e.g. polynomials, MPTs, first-order logic, etc.