Upcoming workshop on Post-AGI Civilizational Equilibria

This workshop will address the technical and institutional questions of how to safeguard human interests after AI surpasses human abilities.

It will draw on many domains, including political theory, cooperative AI, economics, mechanism design, history, and hierarchical agency. We hope to gather insights into the roles humans could play in a world transformed by AGI, and into which positive equilibria, if any, are stable.

Discussion will center on questions such as:

  • Could alignment of single AIs to single humans be sufficient to solve global coordination problems?

  • Will agency tend to operate at ever-larger scales, multiple scales, or something else?

  • Are there multiple, qualitatively different basins of attraction for future civilizations?

  • Do Malthusian conditions necessarily make it hard to preserve uncompetitive, idiosyncratic values?

  • What empirical evidence could help us tell which trajectory we’re on?

For more information, see the event website or fill out the expression of interest form.

Don’t wait for the workshop—give your takes on these questions here too!
