Joseph Henrich’s The Secret of Our Success makes a compelling case that the central engine of human progress is combinatorial: ideas are modular components, and progress happens when the right components find each other. The key variable isn’t individual genius; it’s how well ideas spread through a network of people.
The scaling follows a power law: a society of 10,000 people is far better at finding new combinations than a smaller one, and this is one factor in how Henrich explains the faster development of Eurasia.
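A toy way to see the scaling (my illustration for exposition, not Henrich’s actual model): if each person contributes roughly one idea and innovations come from pairing ideas, the number of possible pairings grows roughly quadratically with population size.

```python
from math import comb

# Toy illustration of the scaling claim above (an expository assumption,
# not Henrich's model): count the possible pairwise idea combinations
# when each person contributes roughly one idea.
for n in [100, 1_000, 10_000]:
    print(f"population {n:,}: {comb(n, 2):,} possible idea pairings")

# population 100: 4,950 possible idea pairings
# population 1,000: 499,500 possible idea pairings
# population 10,000: 49,995,000 possible idea pairings
```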
I haven’t read The Secret of Our Success, but on Jared Diamond’s model (which seems plausible) it was not just a larger population but also Eurasia’s east-west orientation (longer distances that can be traveled easily because the climate stays similar along the way, unlike in the Americas and Africa) and the availability of large, economically useful animals (Mesoamerica had none; the Andes had only llamas and alpacas).
Or in other words, you’d need something like a Langlands program for collective intelligence. The Langlands program is maybe the greatest example of the applied category theory move in mathematics: taking number theory and harmonic analysis (the theory of automorphic forms), which looked like completely separate fields, and finding the deep structural correspondences between them. Not reducing one to the other. Finding the dictionary.
So this is what I’m currently trying to do: open research at a very specific intersection, trying to generate a compositional basis that lets you move between these fields while preserving what each one actually knows. On the theory side, I’m working toward a mathematical unification: what are the shared structures between agent foundations, governance mechanisms, biological coordination, and cooperative AI? On the implementation side, I’m building functional simulation infrastructure to verify that the compositions actually compute something rather than just looking pretty on paper.
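To make “the compositions actually compute something” concrete, here is a minimal sketch of the kind of check I have in mind; every name and mechanism in it is a toy stand-in, not the actual infrastructure. Two components from different fields are modeled as maps on a shared state, composed, simulated, and tested against a simple invariant.

```python
from dataclasses import dataclass, replace

# Minimal sketch, not the actual infrastructure: all names and mechanisms
# are toy stand-ins. Two components from different fields are modeled as
# maps on a shared state; "verifying the composition" means running the
# composite and checking a simple invariant.

@dataclass(frozen=True)
class State:
    resources: tuple[float, ...]  # one entry per agent
    agreement: float              # crude stand-in for a coordination level

def redistribution(s: State) -> State:
    """Governance-flavored component: tax 10% and redistribute it evenly."""
    pot = sum(r * 0.1 for r in s.resources)
    share = pot / len(s.resources)
    return replace(s, resources=tuple(r * 0.9 + share for r in s.resources))

def consensus_step(s: State) -> State:
    """Coordination-flavored component: move agreement halfway toward 1."""
    return replace(s, agreement=s.agreement + 0.5 * (1.0 - s.agreement))

def compose(*mechanisms):
    """Plain left-to-right function composition over the shared state."""
    def composite(s: State) -> State:
        for m in mechanisms:
            s = m(s)
        return s
    return composite

if __name__ == "__main__":
    s = State(resources=(3.0, 1.0, 6.0), agreement=0.2)
    step = compose(redistribution, consensus_step)
    for _ in range(20):
        s = step(s)
    # The composition runs and respects an invariant:
    # redistribution conserves total resources.
    assert abs(sum(s.resources) - 10.0) < 1e-9
    print(s)
```

The real version would need much richer structure (the shared state and the invariants are where the mathematical content would live), but the point is that composites get executed and checked rather than just drawn.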
Cool, I like this formulation/exposition.
One caveat I’d add: an important disanalogy with the original Langlands program (AFAICT?) is that math (or the specific areas of math involved in the program) was much more mature back then than the disciplines you’re trying to langland[1] here. E.g., I think mainstream biology might be missing some key mature teleological concepts,[2] which might be supplied by agent foundations (AF), if AF were more mature itself. I don’t know much about computational social science or cooperative AI (I probably have some bits related to the latter acquired by osmosis that I don’t associate with “cooperative AI”). TL;DR: I suspect that to langland successfully here, you will need to be willing to do quite a bit of ontological breaking and reshaping (which makes the entire thing harder than the original Langlands, at least along some axes, but hey).
[1] yes, I’m hereby verbing this noun; let it go viral, because I think it’s a good concept to have

[2] and people who have some good intuitions about them often don’t use those intuitions as starting points to try to think clearly about the topic; e.g., this is my sense of what’s going on with Denis Noble (where my data on him is a podcast lecture and a recounting of the TL;DR of his views by Dawkins, followed by criticism, which was clear and ~correct but was also missing what I saw as an important generative intuition)
See also W. Brian Arthur’s The Nature of Technology: https://sites.santafe.edu/~wbarthur/thenatureoftechnology.htm