Great post! Some extensions I’d be interested in:
- Model multiple different resources / specializations, perhaps with a space of skillsets / resource distributions, and everyone getting a random ~unit vector in said space (maybe even with a nonuniform distribution on direction, to model some skills / resources being more scarce than others). This would probably need an N that's a bit larger, but I think it would be cool to see groups forming and selecting people based on their niches. Could be an extension of the "resource pool" thing, with multiple types of resources.
- Introduce a maximum group size (or perhaps simply have starting opinion decrease monotonically with group size), to help deal with the combinatorial explosion and to model the fact that humans don't really keep track of most of the large possible groups of people.
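A minimal sketch of the skill-vector idea, assuming numpy; the `scarcity` parameter is a made-up knob for making some skill directions rarer than others:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_skill_vectors(n_people, n_skills, scarcity=None):
    """Give each person a random ~unit vector in skill space.

    A Gaussian draw normalized to length 1 gives directions spread
    over the nonnegative orthant of the sphere (abs keeps skill
    loadings nonnegative). The optional `scarcity` vector, one entry
    per skill, shrinks the components of scarce skills so fewer
    people end up specialized in them.
    """
    v = np.abs(rng.normal(size=(n_people, n_skills)))
    if scarcity is not None:
        v = v / np.asarray(scarcity, dtype=float)
    return v / np.linalg.norm(v, axis=1, keepdims=True)

# e.g. skill 4 is 4x scarcer than the others:
skills = random_skill_vectors(100, 5, scarcity=[1, 1, 1, 1, 4])
```

Group fitness could then be something like how well the members' vectors span the space, which would naturally reward recruiting for underfilled niches.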
- Similar to the point above, you could sort of "clamp" the probability distribution by only sampling from the top n highest-opinion groups for each given person (i.e. if my opinions are [0.01, 0.5, 0.21, 0.07, 0.32] and n=3, then I sample as if they were [0, 0.5, 0.21, 0, 0.32], renormalized. Idk if this operation has a name. It's probably similar in effect to softmax but computationally easier). This is probably more accurate to what actual humans do: we can have opinions of really large groups, like Google or the Democratic Party, but there is a (soft) maximum total number of groups we can have opinions of, due to memory and computation constraints.
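For what it's worth, this operation does have a name: it's essentially top-k sampling, as used in language-model decoding. A small sketch with your example numbers:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_top_n(opinions, n):
    """Zero out all but the n highest opinions, renormalize, and
    draw one group index from the resulting distribution."""
    opinions = np.asarray(opinions, dtype=float)
    clamped = np.zeros_like(opinions)
    top = np.argsort(opinions)[-n:]      # indices of the n largest opinions
    clamped[top] = opinions[top]
    return rng.choice(len(opinions), p=clamped / clamped.sum())

# With opinions [0.01, 0.5, 0.21, 0.07, 0.32] and n=3, only
# indices 1, 2, and 4 can ever be drawn.
group = sample_top_n([0.01, 0.5, 0.21, 0.07, 0.32], 3)
```

Compared to softmax, this hard cutoff also caps the bookkeeping: each person only ever needs the top n entries of their opinion table, which is exactly the memory-constraint story.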
- Make the opinions bidirectional: groups have opinions of all of their members, add them to the set uniformly sampled over, and then the contribution is drawn from [0, 1], to better model that, in general, entities give more to entities that they have a higher opinion of. Obviously this will make the computation take quite a bit longer. Maybe have the sampling massively down-weight bigger groups, or perhaps make it based on the average opinions of groups. You could perhaps do this elegantly by making each time step contain two actions: first, a random person contributes to a random group they are part of, sampled based on the person's opinions. Then, the group samples a random member based on its opinions and gives them a random contribution. This will multiply simulation time by a factor of two but will perhaps give more interesting results!
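The two-action time step could be sketched like this (dict shapes and names are illustrative, not the post's actual code; `person_opinions[p]` maps group → p's opinion of it, `group_opinions[g]` maps member → g's opinion of them):

```python
import random

random.seed(0)

def weighted_pick(opinions):
    """Pick an entity with probability proportional to the opinion of it."""
    entities = list(opinions)
    return random.choices(entities, weights=[opinions[e] for e in entities])[0]

def step(person_opinions, group_opinions, wealth):
    """One time step = two actions: person -> group, then group -> member."""
    # Action 1: a random person contributes to a group,
    # sampled based on the person's opinions.
    person = random.choice(list(person_opinions))
    group = weighted_pick(person_opinions[person])
    gift = random.random()
    wealth[person] -= gift
    wealth[group] += gift
    # Action 2: that group samples a member based on its own opinions
    # and gives them a random contribution.
    member = weighted_pick(group_opinions[group])
    gift = random.random()
    wealth[group] -= gift
    wealth[member] += gift
```

Note total wealth is conserved within a step, so this variant turns the model into pure redistribution; whether that's desirable depends on what the original "contribution" is meant to represent.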
I think even just increasing N is likely to yield some interesting results. Some galaxy-brained parallelization (maybe using the GPU?) can probably be done to make this more feasible (I haven’t looked at your code yet, so if you’re already doing something like this, massive props to you. I have ~no idea where I’d even start on that besides “ask Claude”).
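One concrete angle on the parallelization, under the assumption that opinions fit in a dense N×G array (sizes below are illustrative): the Gumbel-max trick lets you draw one opinion-weighted group per person for all N people in a single vectorized call, and the same code runs on a GPU by swapping numpy for cupy or jax.numpy.

```python
import numpy as np

rng = np.random.default_rng(0)

N, G = 10_000, 256                    # people, groups (illustrative sizes)
opinions = rng.random((N, G))         # opinions[i, j]: person i's opinion of group j

# Gumbel-max trick: argmax(log w + Gumbel noise) per row samples an index
# with probability proportional to w -- all N categorical draws at once,
# with no Python-level loop over people.
gumbel = -np.log(-np.log(rng.random((N, G))))
choices = np.argmax(np.log(opinions) + gumbel, axis=1)
```

The harder part is that contributions create sequential dependencies between steps, but batching the sampling like this is probably where most of the speedup lives.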
P.S. There are some minor typographical errors, including a link that doesn’t link and some
Say more? I only know about enthalpy in the physical sense. What does it mean here, and how would switching to free energy change things?