My take on complex systems theory is that many of the arguments proposed in its favor would keep giving the same predictions right up until the point where it becomes blatantly obvious that we can in fact understand the relevant system. Results like chaotic relationships or stochastic-without-mean relationships would seem like definitive arguments in favor of the science, though these are rarely demonstrated for neural networks.
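To make "stochastic-without-mean" concrete, here's a minimal sketch (my own illustration, not anything specific to neural networks): samples from a standard Cauchy distribution have no mean, so the running average never converges, no matter how much data you collect.

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.standard_cauchy(1_000_000)  # Cauchy: no mean, no variance

# Running mean after n samples, for n = 1 .. 1,000,000.
running_mean = np.cumsum(samples) / np.arange(1, samples.size + 1)

# Unlike a distribution with a finite mean, the running mean never
# settles: rare enormous draws keep yanking it to new values at
# every sample size.
for n in (1_000, 100_000, 1_000_000):
    print(n, running_mean[n - 1])
```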
Merely pointing out that we don’t understand something, that there seems to be a lot going on, or that nonlinear interactions exist isn’t, imo, enough to support the strong claim that no mechanistic interpretation of the results exists which can make coarse predictions in ways meaningfully different from just running the system.
Even if there are stochastic-without-mean relationships, the part of the system causally upstream of that fact can usually still be understood (take earthquakes as an example), and similarly with chaos: we don’t understand turbulent flow, but we definitely understand laminar flow, and we have precise equations and knowledge of how to avoid triggering turbulence when we don’t want it, which I believe can be derived from the fluid equations. Truly complex systems seem mostly very fragile in their complexity.
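For instance, the laminar/turbulent transition in pipe flow is governed by the Reynolds number, a dimensionless quantity that falls out of nondimensionalizing the Navier-Stokes equations. A minimal sketch (the values for water and the ~2300 rule-of-thumb threshold for pipe flow are standard textbook numbers, used here purely as an illustration):

```python
def reynolds_number(density, velocity, diameter, viscosity):
    """Re = rho * v * D / mu for flow through a pipe."""
    return density * velocity * diameter / viscosity

# Water at room temperature moving slowly through a narrow pipe:
# density ~1000 kg/m^3, dynamic viscosity ~1e-3 Pa*s.
re = reynolds_number(density=1000.0, velocity=0.05,
                     diameter=0.01, viscosity=1e-3)
print(re)  # 500.0 -- well below ~2300, so the flow stays laminar
```

So even though turbulence itself resists prediction, we can engineer around it with a one-line criterion, which is the sense in which the chaotic part of a system doesn’t block coarse understanding of the rest.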
Where complexity theory shines brightest is in econ or neuroscience, where experiments and replications are hard; that is not at all the case in mechanistic interpretability research.
I have downvoted my comment here, because I disagree with past me. Complex systems theory seems pretty cool from where I stand now, and I think past me had a few confusions about what complex systems theory even is.
I have re-upvoted my past comment: after looking more into things, I’m not so impressed with complex systems theory, though I don’t fully endorse my past comment either. Also, past me was right to judge it despite being confused about what complex systems theory is, since it seems complex systems theorists themselves don’t even know what a complex system is.