Complex systems research as a field (and its relevance to AI Alignment)

habryka

I have this high prior that complex-systems type thinking is usually a trap. I’ve had a few conversations about this, but still feel kind of confused, and it seems good to have a better written record of my thoughts and yours here.

At a high level, here are some thoughts that come to mind for me when I think about complex systems stuff, especially in the context of AI Alignment:

  • A few times I ended up spending a lot of time trying to understand what some complex systems people were trying to say, only to end up thinking they weren’t really saying anything. I think I got this feeling from engaging a bunch with the Santa Fe stuff and Simon DeDeo’s work (like this paper and this paper)

  • A part of my model of how groups of people make intellectual progress is that one of the core ingredients is having a shared language and methodology that allows something like “the collective conversation” to make incremental steps forward. Like, you have a concept of experiment and statistical analysis that settles an empirical issue, or you have a concept of proof that settles an issue of logical uncertainty, and in some sense a lot of interdisciplinary work is premised on the absence of a shared methodology and language.

  • While I feel more confused about this in recent times, I still have a pretty strong prior towards something like g or the positive manifold, where like, there are methodological foundations that are important for people to talk to each other, but most of the variance in people’s ability to contribute to a problem is grounded in how generally smart and competent and knowledgeable they are, and expertise is usually overvalued (for example, it’s not that rare for a researcher to win a Nobel prize in two fields). A lot of interdisciplinary work (not necessarily complex systems work, but some of the generator that I feel like I see behind PIBBSS) feels like it puts a greater value on intellectual diversity here than I would.

Nora_Ammann

Ok, so starting with one high-level point: I’m definitely not willing to die on the hill of ‘complex systems research’ as a scientific field as such. I agree that there is a bunch of bad or kinda hollow work happening under the label. (I think the first DeDeo paper you link is a decent example of this: it feels mostly like taking some cool methodology and applying it to some random phenomena, without really having an exciting bigger vision of a deeper thing to be understood, etc.)

That said, there are a bunch of things that one could describe as fitting under the complex systems label that I feel positive about, let’s try to name a few:

  • I do think, contra your second point, that complex systems research (at least its better examples) has enough shared methodology to benefit from the same epistemic error-correction mechanisms that you described. Historically it really comes out of physics, network science, dynamical systems, etc. The main move that happened was to say that, rather than indexing the boundaries of a field on the natural phenomena or domain it studies (e.g. biology, chemistry, economics), you instead index it on a set of methods of inquiry, with the premise that you can usefully apply these methods across different types of systems/domains and gain valuable understanding of underlying principles that govern these phenomena across systems (e.g. scaling laws shaping biological organisms as well as the growth of cities)

  • I think a (typical) complex systems angle is better at accounting for environment-agent interactions. There is a failure mode of naive reductionism that starts by fixing the environment in order to hone in on which system-internal differences produce which differences in the phenomena, and then concludes that all of what drives the phenomena is system-internal, forgetting that it earlier artificially fixed the environment in order to reduce the complexity of the problem at hand. It’s often fine and practically useful to fix one part of the equation of complex interactions, but you shouldn’t forget along the way that that’s what you did. Similarly, the complex systems lens tends to be better at paying attention to interactions across levels of abstraction, and the dynamics that emerge from these interactions, which also seem valuable for understanding natural phenomena.

habryka

you instead index it on a set of methods of inquiry, with the premise that you can usefully apply these methods across different types of systems/domains and gain valuable understanding of underlying principles that govern these phenomena across systems (e.g. scaling laws shaping biological organisms as well as the growth of cities)

Ok, I do really like that move, and generally think of fields as being much more united around methodology than they are around subject-matter. So maybe I am just lacking a coherent pointer to the methodology of complex-systems people.

habryka

Hmm, I guess my thoughts on complex systems stuff then kind of branch into two directions:

Where are the wins? Do we have any success story for this methodology working well? I used to be a fan of network science, but then kind of bounced off of it. I like physics, though physics itself is so large, and has a lot of dysfunction in it, that it really matters which part of the physics methodology is imported.

Ok, so what is the foundation? It does seem kind of like I don’t have a good inside view of the methodology of this field. Maybe I should just go on Wikipedia and read the summary of the methodology there.

Nora_Ammann

A lot of interdisciplinary work (not necessarily complex systems work, but some of the generator that I feel like I see behind PIBBSS) feels like it puts a greater value on intellectual diversity here than I would.

Keen to say something about the type of epistemic pluralism that I care about (in the context of PIBBSS, among other things).

(Generally speaking, I think the “general smarts” concern feels pretty orthogonal to how I am thinking about epistemic pluralism, and it at least feels to me like I’m not forced to make trade-offs between the two. We could separately double-click on that if you like, but let me first try to argue why I think they are orthogonal in the first place.)

I think one relevant premise for PIBBSS-style work (which it shares with the complex systems lens, at least as I’ve framed it above) is the assumption that there are some underlying principles that govern intelligent behaviour across different systems, substrates and scales. If that is so, this gives us one approach to dealing with the problem that we don’t have direct epistemic access to the sorts of AI systems we’re most worried about: if we think they share some features/principles with other types of systems that implement intelligent behaviour, importantly systems we do have a better degree of epistemic access to, we can start to triangulate between what we understand about such different systems. This triangulation lets you gain more robust insights into those principles that are substrate/scale/system-agnostic.

habryka

I overall feel pretty sympathetic and interested in studying intelligent behavior in the systems we have. However, I do notice that somehow I can’t think of any work in this space that’s felt very useful to me, at least in recent years. I really like Eliezer’s analysis of evolution as an analogue to AI alignment, and it had a big effect on me. And I like Steve Byrnes’ work on studying neuroscience to get insight into the AI Alignment problem, though those feel separate somehow (but that might genuinely just be me gerrymandering), and inasmuch as the goal is to produce more work like the original LW Sequences’ analysis of evolution and AI Alignment, I would feel pretty excited.

Nora_Ammann

On the foundations, roughly, my feeling is that there are different angles one could take. Generally I feel a bit hesitant about the frame of “let me learn about complex systems science”—like, I think that concept isn’t really the most useful way of carving the world (e.g. I think reading the Wikipedia page on this will not be that exciting). I do think there are some complex systems textbooks that are moderately neat if you’re looking for ideas of how you could model different types of systems, and pointers at the math you need for that. But beyond that, at least according to my taste, I’d say: think about what natural phenomena you’re interested in understanding, and then try to find who is doing interesting work on those phenomena, or what ways of modelling them (mathematically) seem productive. My own experience, I guess, was closer to a) being interested in understanding certain phenomena, b) finding some work out there that I found productive to engage with, and c) noticing that a bunch of that can be labelled complex systems stuff.

habryka

Well, in this case I am asking the specific question of “in as much as there is a field here, what is its methodology?”. I do a lot of studying of natural phenomena and am generally searching for good mathematical models, but I rarely end up finding things that are labeled as “complex systems”. I usually just like, end up studying biology or physics or AI or math.

habryka

Here is the specific quote of yours that I was thinking about:

I do think, contra your second point, that complex systems research (at least its better examples) has enough shared methodology to benefit from the same epistemic error-correction mechanisms that you described. Historically it really comes out of physics, network science, dynamical systems, etc.

Which sounds to me like you are implying there is a field here that has an epistemic ratchet that can click forward and make coherent progress over time. But I currently feel like I don’t have a good pointer at the mechanism of that ratchet.

Nora_Ammann

Do you know this textbook? I’d say it’s a good overview of the “complex systems modelling toolbox”.

If you want a somewhat spicier, or maybe more ambitious, vision of what complex systems is about, you could listen to this interview with David Krakauer. My guess is you’d largely bounce off of it, though I do think it’s pretty exciting (the interview is denser than it appears at first glance). He talks about understanding “telic” phenomena (or some similar terminology), which (my rough paraphrase) he understands as emerging from the specific constraints that you get from adaptive systems that evolve and meta-evolve, etc. IMO this is interesting from an “understanding the foundations of agency/intelligent behavior” angle, and e.g. you end up trying to explain in naturalistic terms how things like the “backwards causation” characteristic of agency/planning can arise from simple dynamics.

In terms of systematic progress: for one, I think that progress is integrated with other scientific fields too—like, complex systems as a field cuts across the more traditional ways of carving up scientific fields, so I don’t think there is an a priori way of attributing progress to either one of them exactly. But I think the mechanisms of progress come from an interplay between a toolbox of mathematical models (e.g. network science, dynamical systems, control theory, etc.) and moving between the more abstract and the more concrete/empirical.

Maybe I’m being too conflationary today, but I think that is just the same story as in all other scientific fields, and the main difference is in some of the underlying premises. Maybe the cleanest example is the move from classical economic theories to complexity economics. In the former, you start from a set of assumptions like: rational actors, all your agents are the same, markets are equilibrium systems. And then complexity economics comes along and says: hey guys, good news, we have better math tools now than just arithmetic, so we are now able to relax some of our classical assumptions and can e.g. model economic systems with premises such as: boundedly rational agents (with learning & memory dynamics, etc.), heterogeneous agents (e.g. different learning strategies), markets as out-of-equilibrium systems.
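
To make that concrete, here is roughly the kind of toy model I have in mind: a minimal sketch with made-up parameters, in the general spirit of heterogeneous-agent market models, not a reconstruction of any specific published one.

```python
# A minimal sketch of the complexity-economics move: heterogeneous, boundedly
# rational agents with simple learning rules, and a market price updated out
# of equilibrium rather than solved for as a fixed point. All parameters are
# illustrative, not taken from any paper.
import numpy as np

rng = np.random.default_rng(0)

n_agents, n_steps = 100, 500
fundamental = 10.0                     # the "true" value a classical model assumes
price = 12.0                           # start away from equilibrium

# Heterogeneity: each agent mixes a "fundamentalist" rule (bet on reversion to
# the fundamental) and a "chartist" rule (extrapolate the recent trend), with
# agent-specific weights and memory.
weights = rng.uniform(0, 1, n_agents)  # 1 = pure fundamentalist, 0 = pure chartist
memory = rng.uniform(0.1, 0.9, n_agents)
trend_beliefs = np.zeros(n_agents)

prices = [price]
for t in range(n_steps):
    last_return = prices[-1] - prices[-2] if len(prices) > 1 else 0.0
    # Bounded rationality: agents update a trend belief by exponential
    # smoothing (learning/memory dynamics), not by rational expectations.
    trend_beliefs = memory * trend_beliefs + (1 - memory) * last_return
    demand = (weights * (fundamental - price)       # fundamentalist pull
              + (1 - weights) * trend_beliefs)      # chartist push
    # Out-of-equilibrium price formation: the price adjusts in the direction
    # of excess demand instead of being set so that excess demand is zero.
    price += 0.05 * demand.mean() + rng.normal(0, 0.01)
    prices.append(price)

print(f"final price: {prices[-1]:.2f} (fundamental value: {fundamental})")
print(f"return volatility: {np.std(np.diff(prices)):.4f}")
```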

habryka

Do you know this textbook? I’d say it’s a good overview of the “complex systems modelling toolbox”.

I don’t! I might take a look.

And then complexity economics comes along and says: hey guys, good news, we have better math tools now than just arithmetic, so we are now able to relax some of our classical assumptions and can e.g. model economic systems with premises such as: boundedly rational agents (with learning & memory dynamics, etc.), heterogeneous agents (e.g. different learning strategies), markets as out-of-equilibrium systems.

Yeah, so this is definitely the kind of thing that sounds like a cool thing to do, but also, does it actually work? Like, I am not that grounded in economics, but I do read a lot of econ bloggers and think a bunch about economics in my own language, and I don’t come across the complexity economics perspective a lot, and indeed kind of expect that if I were to read one of the “top” papers on complexity economics, I would end up feeling disappointed. But I might also just be lacking references here.

This stands, for example, in contrast to something like cliometrics/econometric history, which I found really valuable, and which has a lot of cool models about how history works, but doesn’t feel very complexity-science shaped to me.

Nora_Ammann

However, I do notice that somehow I can’t think of any work in this space that’s felt very useful to me

I partially agree with you, and wish I could point to more (and more easily legible) examples. (At the same time, I don’t feel like I have very many examples I find particularly exciting more broadly.)

A few non-comprehensive pointers to more current work:

  • Hierarchical agency/alignment work, among other things discussed and worked on by ACS

  • Developing naturalized accounts of intelligent phenomena (e.g. agency, planning, deception, power-seeking, mesa-optimisation), where a naturalized account is meant to characterise the underlying mechanisms of a phenomenon such that you can identify it even when it occurs at (temporal, spatial) scales you haven’t evolved to recognize it at—with the hope that this can provide more robust ways to do e.g. interpretability and evals

  • Coming to have a more principled understanding of interacting AI systems, e.g. what evolutionary/emergent dynamics arise from having a bunch of LLM systems interact with each other in the wild (e.g. prompt evolution, emergence of scaffolded agents with different capability profiles, etc.)

  • Characterising “messy” AI risk scenarios, e.g. multiple transitions, RAAP, multi-multi delegation, ascended economy

habryka

So, the Wikipedia article on complexity economics says:

The economic complexity index (ECI) introduced by Hidalgo and Hausmann[6][7] is highly predictive of future GDP per capita growth. In Hausmann, Hidalgo et al.,[7] the authors show that the ability of the ECI to predict future GDP per capita growth is between 5 times and 20 times larger than the World Bank’s measure of governance, the World Economic Forum’s (WEF) Global Competitiveness Index (GCI) and standard measures of human capital, such as years of schooling and cognitive ability.[8][9]

And like, I don’t know, that sounds cool, but also my honest best guess is that this is fake somehow? Like, if I look into this I will find that “past GDP per capita growth” is a better predictor than this economic complexity index, or something as straightforward as that, and the only reason why they can claim this result is because they gerrymandered the alternative somehow.

habryka

Ok, I googled around a bit, and I can’t find any takedown that exposes the ECI as obviously gerrymandered, and Our World in Data (who generally seem reasonable and like they think about this stuff in a cool way) have a favorable article on it on their blog, so I update that there is something more real here.
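
For what it’s worth, the core computation behind the ECI is simple enough to sketch, which makes it at least auditable. Here is a minimal version of the “method of reflections” as I understand it from the Hidalgo and Hausmann papers; the toy export matrix and the iteration count are my own made-up choices:

```python
# A minimal sketch of the "method of reflections" behind the ECI. The toy
# export matrix is made up; the real ECI starts from RCA-thresholded trade
# data (M[c, p] = 1 if country c exports product p with RCA >= 1).
import numpy as np

M = np.array([
    [1, 1, 1, 1, 0],   # diversified country, exports some rare products
    [1, 1, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 0, 0, 0, 1],   # low diversity, mostly ubiquitous products
], dtype=float)

diversity = M.sum(axis=1)   # k_c,0: how many products country c exports
ubiquity = M.sum(axis=0)    # k_p,0: how many countries export product p

k_c, k_p = diversity.copy(), ubiquity.copy()
for _ in range(10):
    # Countries inherit the average ubiquity of their products; products
    # inherit the average diversity of their exporters (simultaneous update).
    k_c, k_p = (M @ k_p) / diversity, (M.T @ k_c) / ubiquity

# The raw values converge toward a constant; the ranking information survives
# in the standardized vector (the published formulation uses an eigenvector
# of the related country-country matrix instead, and sign conventions vary).
eci = (k_c - k_c.mean()) / k_c.std()
print("toy ECI per country:", np.round(eci, 2))
```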

Nora_Ammann

Yeah, I definitely share some confusion about the visible successes being less than what my model would have predicted.

This makes me update down a bit on the overall promise of the approach, but I also have uncertainty over other parts of my model, e.g. what success I would expect at what timescales, and what “success” would look like. Also I think there are dynamics like “once a thing gets successful it gets more integrated into the mainline and thus less recognisable as the (once) unorthodox approach”. I definitely expect some of this to be happening. I know they had some success in modeling e.g. financial crises by dropping the “markets are equilibrium systems” assumption; I remember reading some report (I believe by some international governance body, the OECD or the like, but can’t remember the details) with economic recommendations around climate change and sustainability that were very clearly complexity-economics inspired and sounded pretty reasonable; and I know a little bit about e.g. this work by Hidalgo, which seemed pretty cool based on a relatively shallow look (in part also because it makes a bunch of really cool real-world data accessible).

Nora_Ammann

Hmm, on the economic index stuff: I mean, one simple perspective on all this is something like

  • surely the thing that is really going on in the territory is extremely complex (like, there are historical path dependencies, there is the global economy that the national economy is embedded in, the country’s specific resources, infrastructure, work force, etc. etc.)

  • it also surely seems like basically all classical economic models simplify reality A LOT, and that unorthodox approaches are making some of these assumptions more realistic

  • the question is how much the complexification of your model (or that particular complexification) buys you in terms of predictive power, relative to what you pay in terms of complexity costs

  • One more thing that is weird about a domain like economics is that the economic theorising happens within (and thereby affects) the systems it’s trying to predict. Like, when the World Bank issues some predictions, that itself affects e.g. investment flows into the country, interest rates on deposits, etc. This makes economics (among other fields) importantly different from physics, where in most cases you are in fact justified in ignoring the fact that the theorisers are in the theory.

habryka

One more thing that is weird about a domain like economics is that the economic theorising happens within (and thereby affects) the systems it’s trying to predict. Like, when the World Bank issues some predictions, that itself affects e.g. investment flows into the country, interest rates on deposits, etc. This makes economics (among other fields) importantly different from physics, where in most cases you are in fact justified in ignoring the fact that the theorisers are in the theory.

I agree that there is some reality to this, but I do think this is the kind of effect that feels like it’s selected to feel clever, or meta, but doesn’t actually matter? Like, I agree that the Fed of course has some effect on what the economy does by saying what it will do, but I find myself skeptical that you have to really take into account the effects of how people will change their behavior after you publish your economic theory, in addition to just like modeling what interest rates the Fed is setting as a target.

Like, I am not denying there is some effect here, but I doubt that it will be large.

This is importantly different from situations where someone makes an empirical observation, and that empirical observation turns out to either be the result of a human-enforced policy, or has turned into a human-enforced policy because of its regularity. For example, I find the story that Moore’s law derived a bunch of its robustness from the fact that major semiconductor manufacturers set their internal targets according to Moore’s law, kind of promising and interesting.

But that feels different than saying that you need to take into account the self-referential effects of actors in the economy taking your economic theory seriously.

Nora_Ammann

My best guess currently is that it does matter a great deal (though at somewhat longer timescales than year-to-year prediction, say). Like, I think the path dependency of history and social systems (the path dependency of everything that is subject to some form of differential selection) is a big deal. I feel very interested in e.g. what alternative functional economic logics there are, and pretty saddened by the fact that, given the reality of physics, we might never be able to explore them in this branch. This stuff is definitely scientifically difficult to deal with because it’s about counterfactuals we cannot access, so it’s hard, or maybe impossible, to even be calibrated on whether it’s a big deal or not.

Nora_Ammann

And I guess this is a good example of where my intuitions are influenced by a complex systems lens, compared to my guess of some other people’s intuitions.

The way I think about how much path dependency matters here is roughly: one pull is that, even with a relatively simple nonlinear dynamic, small differences in initial conditions can propagate a great deal; local contingencies make you access different parts of the possibility tree, sometimes in ways that are very hard to reverse. The pull from the other side is if you can point to some mechanisms that actively buffer against such path-dependencies, e.g. some sort of homeostatic pressure that tends to keep the system within some basin. Both of these mechanisms exist—overall I expect socio-economic history to be shaped more by the former (whereas e.g. the developmental/life period of a biological organism is shaped more by the latter).
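
As a toy illustration of those two pulls (a minimal sketch, not a model of anything socio-economic): the same simple map can sit in a chaotic regime, where a tiny difference in initial conditions blows up, or in a contracting, homeostasis-like regime, where it gets buffered away.

```python
# Logistic map x -> r * x * (1 - x): chaotic for r = 3.9 (path-dependent),
# contracting toward a fixed point for r = 2.8 (homeostatic-like).
def iterate(r, x0, steps=50):
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

for r, label in [(3.9, "chaotic (path-dependent)"),
                 (2.8, "contracting (homeostatic)")]:
    a = iterate(r, 0.400000)
    b = iterate(r, 0.400001)   # perturb the initial condition by 1e-6
    print(f"r={r} {label}: |difference after 50 steps| = {abs(a - b):.6f}")
```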

habryka

I mean, I am totally fine saying “initial conditions matter a lot, some systems are chaotic, it’s hard to predict where they will end up because they are quite sensitive”. But that’s different from saying “specifically analyzing the self-referential nature of economic theories and policies is worth the bang for your complexity buck”. Like, I don’t expect that the system will stop being chaotic as soon as you account for that self-referential effect, nor do I expect that the system is chaotic because of the self-referential component that your publication would introduce.

Nora_Ammann

Yeah, okay, so I agree with you that I expect the effect to be bigger (at the year-to-decade timescale) for technology design than it is for macroeconomics. (Though don’t take this so much as saying that the effect doesn’t apply to macroeconomics, but rather that macroeconomics lives at slower timescales. Like, the lens of this sort of path-dependency applies to economic logics, but more so at the decades-to-centuries timescale.)

habryka

Like, as an example, I think all the basics of microeconomics don’t have much of a self-referential effect. Their domain of validity and their predictions seem robust to that kind of stuff. Supply will equal demand, no matter whether people know about supply and demand. Wages will be sticky, no matter whether people know about wages being sticky (there is probably some effect here, but I think it’s very weak).

Nora_Ammann

“specifically analyzing the self-referential nature of economic theories and policies is worth the bang for your complexity buck”. Like, I don’t expect that the system will stop being chaotic as soon as you account for that self-referential effect, nor do I expect that the system is chaotic because of the self-referential component that your publication would introduce.

Yeah to be clear I agree with this!

Nora_Ammann

Maybe a nearby example that is interesting, and that we might disagree on more, is: there is at least some reading of history where notions of “economic rationality” and the various notions of rationality linked to the field of game theory emerged during the post-WW2 period, and those ideas have importantly and manifestly shaped economic, institutional and academic/intellectual developments. This view says something like: a bunch of ideas that seem very natural to (most of) us today are pretty strongly contingent on a pretty narrow time period and a pretty small number of heads shaping those ideas. Related to a sense of “the fish in the water” and “you can never really escape your local ideological context”.

[Not sure it’s worth going down this rabbit hole.]

Nora_Ammann

Supply will equal demand, no matter whether people know about supply and demand.

FWIW, I largely agree with that paragraph, but maybe worth pointing out: supply equals demand does fail sometimes (e.g. in financial crises). We can construe these failures as some sort of “irrationality” in the system, but I find that generally pretty intellectually lazy. It seems important to recognize the limits of a model, and why those limits arise. If complexity economics really is better at understanding what happens during things like financial crises, I think that should earn it a good deal of epistemic points (with the important caveat that I would need to read up on what exactly they were able to show with respect to modelling markets as out-of-equilibrium systems).

habryka

Yeah, I do think I disagree with this, and it is the kind of thing that does scare me about complexity theory. Like, maybe this is a strawman, but there is an attractor for sciences where the practitioners of a science start viewing themselves not as reporters on the truth, but as advocates for a certain way of organizing society.

History is full of this, and much of the field is terribly diseased because of it: really large fractions of historians view themselves as intentionally trying to reframe history in a way that positively affects the workings of society today.

And like, I am not in favor of banning any and all discussion of the secondary effects of a publication, and how a publication itself might distort the subject-matter that it is talking about, but overall there are many skulls along this road, and many fields have died because of it.

And I don’t know to what degree that is going on when you are talking about studying self-referentiality in the publication and adoption of economic theories, but it feels like it gets close to it.

Nora_Ammann

Yeah, I do think I disagree with this, and it is the kind of thing that does scare me...

Yep, overall strong agree with the entire message. I find myself pretty torn: on one hand, there are some basic arguments here that I find pretty compelling and that make me want to take seriously the social embeddedness of theorizing (especially in the context of AI alignment); on the other, I also hard agree about the skulls and the slippery epistemic slopes.

I do feel like there is a way to take some of these basic insights on board that doesn’t turn all of your reasoning intellectually fraught, but it sure feels like a tricky balance, and in my experience people have stronger or weaker “epistemic antibodies” for navigating that terrain safely.

(As a side note: I’ve written about/given some talks on this, and if you were to read/watch them and wanted to let me know whether they made you update towards being more or less worried that I end up stumbling too close to the skulls, I’d be keen to hear that.)

(Also, I think this paper has an interesting philosophical discussion on this issue.)

habryka

So, going back a bit more to the top level. I definitely am on board with something like “man, it sure does seem like the formal models we construct of various societal, intelligent and economic systems are very unlikely to capture these systems at the level of detail and complexity necessary to actually make good predictions here”.

And then I am pretty into figuring out how to do better. I guess my current answer is something like “well, that’s not the job of science, the job of science is to provide a certain type of relatively narrow intellectual input into society’s reasoning. The actual job of aggregating and making predictions about large complex systems will mostly happen in the System 1 of a bunch of decision-maker brains, based on a really complicated and messy mixture of inductive and deductive reasoning that we don’t really understand”.

habryka

I also somewhat think that deep learning is now good enough that we can probably understand a bunch of systems in the world better by just throwing some large neural nets at them. They definitely have enough parameters and can encode enough simultaneous considerations to give rise to pretty good predictive models of very complicated systems.

And by doing it via artificial learning systems we have to deal with fewer of the recursive issues, can control the inputs, and can maintain a clearer abstraction of a science (which e.g. can do things like reproduce predictions about complex systems by rerunning a DL training run, which you can’t do if you are predicting complex systems by giving information to policy makers, who generally don’t appreciate being put into large controlled experiments, or being terminated and then rerun).

Nora_Ammann

Ok, I’d want to push back some on what you said about your “current answer for doing better”. First, I think it’s bad to ignore that science has aspects to it that are not purely descriptive.* I also agree that we really shouldn’t go all the way in the other direction, where science becomes basically just an extension of politics. Second, depending on the context, leaving it to the “S1 of a handful of decision makers” seems clearly unsatisfying to me.

*Feels worth saying here: we can be somewhat differentiated about how much this is true for which domains/systems. Herbert Simon’s The Sciences of the Artificial has been really influential on me with respect to this. The “artificial” here refers to ~designed artifacts—importantly for our context: technology and institutions. The domain of the artificial has this weird descriptive-prescriptive dual nature.

Nora_Ammann

I agree we can leverage LLMs for scientific progress here (which is something that’s also part of e.g. Davidad’s OAA mega-plan), though it’s not gonna be easy and there are important ways we could fuck it up. For example, in Davidad’s case (AFAIK) the idea is to get LLMs to write formal models that then get checked line by line by human experts. This is the way in which LLMs can help augment scientific modeling at the moment, but the checking stage is critical to this being useful rather than harmful.

I don’t however see how AI systems inherently face less of the recursiveness issue, at least not by default. In fact, I’m pretty worried about performative predictors.

habryka

Yeah, to be clear, I am not at all into ignoring it. I am saying that for any given paper that is trying to illuminate our understanding of some domain, the vast majority of those papers should mostly ignore these phenomena, bar a few particularly recursive domains. And then as scientists and people who build scientific institutions, we should think about how we can set up processes of inquiry that don’t have to deal with this on an ongoing basis, or at least limit the corruption of the relevant forces.

And yeah, having some science that helps us make those tradeoffs isn’t crazy, but I feel like I would want those papers to tackle the relevant considerations directly, instead of somehow having each paper end up being occupied by concerns about its self-referential effects.

habryka

I don’t however see how AI systems inherently face less of the recursiveness issue, at least not by default. In fact, I’m pretty worried about performative predictors.

Well, in the case of AI systems you can study and put bounds on the effects of self-referentiality. You can retrain the old system with the same data. You can objectively talk about what predictions it makes. With humans using their System 1 to predict complex systems, you have so much path dependence in each individual that it’s approximately impossible to control things.
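
As a toy version of what I mean by “study and put bounds on it” (all specifics made up for illustration, loosely in the spirit of the performative-prediction setup you mentioned): deploy a predictor, let the data-generating process react to its predictions, retrain, and directly measure how big the self-referential effect is and whether the loop settles.

```python
# A toy sketch of studying self-referentiality in an artificial predictor:
# the deployed prediction shifts the data-generating process, we retrain
# repeatedly, and we can directly measure the size of the effect and whether
# retraining converges. All numbers are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.3          # strength of the performative effect (assumed)
true_a, true_b = 2.0, 1.0

def sample_data(theta, n=10_000):
    """Outcomes react to the currently deployed prediction theta[0]*x + theta[1]."""
    x = rng.uniform(-1, 1, n)
    pred = theta[0] * x + theta[1]
    y = true_a * x + true_b - gamma * pred + rng.normal(0, 0.1, n)
    return x, y

theta = np.array([0.0, 0.0])  # initial deployed model
for step in range(10):
    x, y = sample_data(theta)             # world reacts to the deployed model
    X = np.column_stack([x, np.ones_like(x)])
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)  # retrain on the shifted data
    print(f"step {step}: theta = {np.round(theta, 3)}")

# With gamma < 1 this converges to a "performatively stable" point,
# theta* = (true coefficients) / (1 + gamma), here roughly [1.538, 0.769].
# Because the whole loop is rerunnable, the self-referential effect is
# something you can measure and bound, rather than just worry about.
```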

Nora_Ammann

Yeah, interesting. I would agree that the level of abstraction of a scientific paper (at least in the vast majority of cases) is not where the recursiveness problem should be addressed. Which does raise the (important) question of where and how exactly the problem should then be addressed. I’m not entirely sure. I guess institutions, or the scientific community as a whole, is closer to the right abstraction. I also think that this is where philosophy of science has a relevant complementary role to play. The thing feels a bit different for the engineering domains. In particular, I’d definitely want there to be more talk about alternative AI paradigms (and the safety- and governance-related features of different such paradigms). Given the current landscape, I actually believe that is among the more promising interventions at the moment (with some initial positive signs having started to occur more recently).

habryka

Which does raise the (important) question of where and how exactly the problem should then be addressed. I’m not entirely sure.

Well, in some sense that’s what LessWrong is about. “The Art of Rationality”.

It’s called an art because indeed it is not a science. It’s a more extensive category that, in addition to covering the truth-discovering ways of science, also tries to cover much more practical things, like good cognitive habits and intuitions and making sure you eat the right food, and so on. I also think it’s good to have some good old science on this topic, but e.g. I find MIRI’s work on transparent game theory more relevant than most complexity science for studying things like recursive modeling effects.

Nora_Ammann

Yeah, agree that epistemic communities matter here, and that there is more to truth-seeking than just the bare-bones ‘scientific method’.

habryka

Ok, seems like it’s probably about time to wrap up. I enjoyed this!

Nora_Ammann

Yeah, same! :) Thanks!

habryka

Summarizing a bit where we are leaving things, for me:

  • I am still pretty interested in learning more about the wins of complexity theory

  • I feel pretty on board with some of the basic premises of complexity theory, but feel confused whether “a scientific field” is even the right way to work on top of these premises

  • I feel generally skeptical of fields that are too occupied with their own existence or with studying their own effects, not because those effects aren’t real, but because it’s just really hard and has all kinds of bad cognitive attractors

I might also give reading or listening to some of the materials you sent over a try, and might leave comments with additional impressions.