Announcing the Alignment of Complex Systems Research Group

tl;dr: We’re a new alignment research group based at Charles University, Prague. If you’re interested in conceptual work on agency and the intersection of complex systems and AI alignment, we want to hear from you. Ideal for those who prefer an academic setting in Europe.

What we’re working on

Start with the idea of an “alignment interface”: the boundary between two systems with different goals.

As others have pointed out, there’s a whole new alignment problem at each interface. Existing work often focuses on one interface, bracketing out the rest of the world.

e.g. the AI governance bracket

The standard single-interface approach assumes that the problems at each alignment interface are uncoupled (or at most weakly coupled). All other interfaces are bracketed out. A typical example of this would be a line of work oriented toward aligning the first powerful AGI system with its creators, assuming the AGI will solve the other alignment problems (e.g. “politics”) well.

Against this, we put significant probability mass on the alignment problems interacting strongly.

For instance, we would expect proposals such as “Truthful AI: Developing and governing AI that does not lie” to interact strongly with problems at multiple interfaces. Or: the alignment of a narrow AI system that is good at modelling humans and at persuasion would likely interact strongly with politics and geopolitics.

Overall, this frame often highlights different problems than single-interface agendas do, or leads to a different emphasis when thinking about similar problems.

(The nearest neighbours of this approach are the “multi-multi” programme of Critch and Krueger, parts of Eric Drexler’s CAIS, parts of John Wentworth’s approach to understanding agency, and possibly this.)

If you broadly agree with the above, you might ask “That’s nice – but what do you work on, specifically?” In this short intro, we’ll illustrate with three central examples. We’re planning longer writeups in coming months.

Hierarchical agency

Many systems have several levels that can each be sensibly described as an agent. For instance, a company and its employees can usually both be well-modelled as agents. Similarly with social movements and their followers, or countries and their politicians.



Hierarchical agency: while the focus of e.g. game theory is on “horizontal” relations (violet), our focus is on “vertical” relations, between composite agents and their parts.


So situations where agents are composed of other agents are ubiquitous.

A large amount of math describes the relations between agents at the same level of analysis: this is almost all of game theory. Thanks to these tools, we can reason more clearly about cooperation, defection, threats, correlated equilibria, and many other concepts. Call this tradition “horizontal game theory”.
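
To make “horizontal game theory” concrete, here is a minimal sketch (with the standard Prisoner’s Dilemma as an illustrative choice, not anything specific to our agenda): two agents at the same level of analysis interact through a normal-form game, and the formalism tells us which action profiles are stable.

```python
# A minimal sketch of "horizontal" relations: two agents at the same level of
# analysis interacting through a normal-form game (here, a Prisoner's Dilemma).

from itertools import product

ACTIONS = ["cooperate", "defect"]

# payoffs[(a1, a2)] = (payoff to player 1, payoff to player 2)
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def is_nash(a1, a2):
    """True if neither player gains by unilaterally deviating from (a1, a2)."""
    u1, u2 = payoffs[(a1, a2)]
    return (all(payoffs[(d, a2)][0] <= u1 for d in ACTIONS)
            and all(payoffs[(a1, d)][1] <= u2 for d in ACTIONS))

equilibria = [profile for profile in product(ACTIONS, repeat=2) if is_nash(*profile)]
print(equilibria)  # [('defect', 'defect')] -- the formalism pins down the stable profile
```

This is exactly the kind of question the horizontal formalism answers well; the point of the next paragraphs is that we have no comparably crisp machinery for the vertical direction.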

We don’t have a similarly good formalism for the relationship between a composite agent and its parts (superagent and subagent).[1] Of course we can think about these relationships informally: for example, if I say “this group is turning its followers into cultists”, we can parse this as a superagent modifying and exploiting its constituents in a way which makes them less “agenty” and the composite agent more “agenty”. Or we can talk about “vertical conflicts” between, for example, a specific team in a company and the company as a whole. Here, both structures are “superagents” with respect to individual humans, and one of the resources they fight over is the loyalty of those humans.

What we want is a formalism good for thinking about both upward and downward intentionality. Existing formalisms such as social choice theory often focus on just one direction: the level of individual humans is taken as the fundamental level of agency, and some maths describes how individuals aggregate their preferences into a collective preference. Other formal models, such as the relationship between a market and its traders, have both “up” and “down” arrows, but are too narrow in the sorts of interactions they allow. What we’re looking for is more like a vertical game theory.

We think this is one of the critical bottlenecks for alignment.[2]
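
To make the “one direction” point concrete, here is a minimal sketch of the kind of upward-only formalism social choice theory provides: individual rankings are aggregated into a collective preference (here via a Borda count, with three made-up members and rankings), but nothing in the formalism models the collective acting back on, or reshaping, its members.

```python
# A minimal sketch of an "upward-only" formalism: social choice aggregates
# individual preferences into a collective preference, but has no "down" arrow
# by which the collective reshapes its members. Members and rankings are made up.

from collections import defaultdict

# Each member ranks the options from most to least preferred.
rankings = {
    "p1": ["A", "B", "C"],
    "p2": ["B", "C", "A"],
    "p3": ["B", "A", "C"],
}

def borda(rankings):
    """Aggregate individual rankings into a collective ranking via Borda count."""
    scores = defaultdict(int)
    for ranking in rankings.values():
        n = len(ranking)
        for position, option in enumerate(ranking):
            scores[option] += n - 1 - position  # top choice gets n-1 points, last gets 0
    return sorted(scores, key=scores.get, reverse=True)

print(borda(rankings))  # ['B', 'A', 'C'] -- the "up" arrow; the formalism stops here
```

A vertical game theory would, at minimum, also need to represent the collective acting on its parts (incentivising, modifying, or exploiting them), which this kind of formalism has no slot for.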

Alignment with self-unaligned agents

Call the system we are trying to align the AI with (e.g. an individual human) the “principal”.

Many existing alignment proposals silently assume the principal is “self-aligned”. For instance, they take a “human” to have consistent preferences at a given point in time. But the alignment community and others have long realised that this does not describe humans, who are inconsistent and often self-unaligned.


A toy model: system H is composed of multiple parts p_i with different wants (e.g. different utility functions), a shared world model, and an aggregation mechanism Σ (e.g. voting). An alignment function f(H) yields (we hope) an aligned agent A.

If the principal is not self-aligned, then “alignment with the principal” can have several meanings, potentially incompatible with each other. Often, alignment proposals implicitly choose just one of these. Different choices can lead to very different outcomes, and how to deal with the differences seems poorly understood.
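
As a minimal sketch of the toy model in the figure above, under our own illustrative assumptions (three parts with made-up utilities over three outcomes, and two candidate aggregation mechanisms Σ), note that the target an “aligned” AI ends up optimising depends on which Σ the alignment function implicitly commits to:

```python
# A toy sketch of the figure's model: principal H = parts p_i with different
# utilities, plus an aggregation mechanism Σ. The utilities and the two
# candidate Σ's (utilitarian sum vs. majority vote over top choices) are
# illustrative assumptions, not a proposal.

OUTCOMES = ["x", "y", "z"]

parts = {  # each part's utility over outcomes (made-up numbers)
    "p1": {"x": 0.6, "y": 0.5, "z": 0.0},
    "p2": {"x": 0.6, "y": 0.5, "z": 0.0},
    "p3": {"x": 0.0, "y": 1.0, "z": 0.0},
}

def sigma_sum(parts):
    """Σ as a utilitarian sum of the parts' utilities."""
    return {o: sum(u[o] for u in parts.values()) for o in OUTCOMES}

def sigma_vote(parts):
    """Σ as a majority vote over each part's top outcome."""
    votes = {o: 0 for o in OUTCOMES}
    for u in parts.values():
        votes[max(u, key=u.get)] += 1
    return votes

def aligned_target(aggregate):
    """The outcome an AI 'aligned with H' would optimise under this Σ."""
    return max(aggregate, key=aggregate.get)

print(aligned_target(sigma_sum(parts)))   # 'y': summing utilities favours p3's strong preference
print(aligned_target(sigma_vote(parts)))  # 'x': a majority of top choices goes the other way
```

Both answers are defensible readings of “aligned with H”, which is exactly the ambiguity described above.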

For some characterizations of alignment, it seems likely that an apparently successfully(!) aligned AI will have a convergent instrumental goal to increase the self-alignment of the principal. However, putting humans under optimization pressure to become self-aligned seems potentially dangerous. Again, this problem seems understudied.

Ecosystems of AI services

This project is inspired by parts of Drexler’s “Comprehensive AI Services”.

Consider not just one powerful AGI, or several, but an entire “AI ecosystem” with different systems and services varying in power, specialization, and agency. The defining characteristic of this scenario is that no single agent has sufficient power to take over the ecosystem.

From here, protecting humans and their values looks different from the classic single/single scenario. On the other hand, it seems quite likely that there is a deep connection between safety issues in the “AI in a box” case and in the ecosystems case.

From the ecosystem view, different questions seem natural. We can ask about:

  • the basins of attraction of convergent evolution, where some selection theorem applies and pushes the system toward a convergent “shape” (see the toy sketch after this list);

  • the emergence of “molochs”: patterns that acquire a degree of agency and start to pursue goals of their own (a problem similar to Christiano’s “influence-seeking patterns”). These could arise from emergent cooperation or from self-enforcing abstractions.
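
As a toy illustration of the first question (our own construction, with a made-up payoff matrix, and not a model of real AI services), here is a discrete replicator-dynamics sketch: three “service types” compete under frequency-dependent selection, and which composition the ecosystem converges to depends on which basin of attraction the initial mix falls in.

```python
# A toy sketch of basins of attraction under frequency-dependent selection.
# Three "service types" compete; PAYOFF[i][j] is the (made-up) payoff to type i
# when interacting with type j. Discrete replicator dynamics: types with
# above-average fitness grow in frequency.

PAYOFF = [
    [1.0, 0.2, 0.2],
    [0.8, 1.1, 0.3],
    [0.8, 0.3, 1.1],
]

def step(freqs, dt=0.1):
    """One replicator-dynamics update of the type frequencies."""
    fitness = [sum(PAYOFF[i][j] * freqs[j] for j in range(3)) for i in range(3)]
    mean = sum(f * w for f, w in zip(freqs, fitness))
    new = [f * (1 + dt * (w - mean)) for f, w in zip(freqs, fitness)]
    total = sum(new)
    return [f / total for f in new]

def run(freqs, steps=3000):
    for _ in range(steps):
        freqs = step(freqs)
    return [round(f, 3) for f in freqs]

# With this matrix the three starting mixes lie in three different basins,
# so they converge to three different (here, pure) compositions.
for start in ([0.9, 0.05, 0.05], [0.2, 0.6, 0.2], [0.1, 0.2, 0.7]):
    print(start, "->", run(start))
```

The point is only that “which shape the ecosystem converges to” can be a question about basins of attraction and selection pressure, rather than about any single agent’s choices.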

Details in a forthcoming post.

Who we are

We’re an academic research group, supported in parallel by a nonprofit.

On the academic side, we are part of the Center for Theoretical Studies at Charles University, a small interdisciplinary research institute with experts in fields such as statistical physics, evolutionary biology, cultural anthropology, phenomenology, and macroecology.

Our non-academic side gives us greater flexibility and access to resources, and lets us compensate people at a competitive rate.

The core of the group is Jan Kulveit and Tomáš Gavenčiak; our initial collaborators include Mihaly Barasz and Gavin Leech.

Why we went with this setup

  • Academia contains an amazing amount of brainpower and knowledge (which is sometimes underappreciated in the LessWrong community), and we want easy access to it.

  • While parts of the academic incentive landscape and culture are dysfunctional and inadequate, other parts of academic culture and norms are very good. By working partially within academia, we can benefit from this existing culture: our critics will help us follow good norms.

  • We also want to make it easy for people aiming for an academic career to work with us; we expect working with us to be much less costly in academic CV points than independent research or unusual technical think-tanks.

What we are looking for

We are fully funded and will be hiring:

  • researchers;

  • a project manager (with an emphasis on the rare and valuable skill of research management);

  • MSc students;

  • PhD students.

We can supervise MSc and PhD dissertations, meaning you can work with us while studying for an MSc, or work with us as your main doctoral programme. (You would need to fulfil some course requirements, but Charles University has a decent selection of courses in ML and maths.)

If any of the above research programmes is of interest to you, and if you have a relevant background and a commitment to alignment work, we want to hear from you. We expect to open formal hiring in mid-June.

We co-wrote this post with Gavin Leech. Thanks to Chris van Merwijk, Nora Amman, Ondřej Bajgar, TJ and many others for helpful comments and discussion.

  1. ^

    We have various partial or less formal answers in parts of sociology, economics, political science, evolutionary biology, etc.

  2. ^

    A longer explanation will come in forthcoming posts. Here is a rough and informal intuition pump: hierarchical agency problems keep appearing in many places across the alignment landscape, and in many other disciplines. For example, if we are trying to get AIs aligned with humanity, it is notable that various superagents hold a large amount of power and agency, often more than individual humans. In continuous takeoff scenarios this is likely to persist, even if the substrate they run on gradually changes from humans to machines. Or, even in the case of a single human, we may think about the structure of a single mind in these terms. To get the intuition, consider how much more vague and confused our thinking about cooperation and conflict would be without the formalisms and results of game theory and its descendants: we would still notice that various problems, from evolution to politics, have some commonalities, and we would still have some natural-language understanding of them, but overall our thinking would be less precise.