The upside of this, or of “more is different”, is that we don’t necessarily even need the property in the parts, or a detailed understanding of the parts. And how the composition works / what survives renormalization / … is almost the whole problem.
I spent some time learning about neural coding once, and while interesting it sure didn’t help me e.g. better predict my girlfriend; I think neuroscience is in general fairly unhelpful for understanding psychology. For similar reasons, I’m default-skeptical of claims that work at the level of abstraction of ML is likely to help with figuring out whether powerful systems trained via ML are trying to screw us, or with preventing that.