Your main idea of markets, networks, and democratic systems sharing a common structure is compelling! I’m curious about the different methods of analysis proposed by researchers in each of these fields:
Economists model markets with one formalism. Network scientists study information diffusion with another. Political scientists analyze voting with a third.
Do you have any thoughts on why this is the case? I buy the claim that these are all related, but I wonder whether any of these methods offer strengths that the spectral-signals approach fails to capture. I suppose this falls under your “proving spectral-behavioral correspondences” direction, so I’m excited for more updates on this topic.
I also find this point particularly exciting:
The higher eigenvalues reveal something different: the network’s capacity for complex patterns of belief. A network with only one significant eigenvalue can sustain only binary disagreement—you’re either in group A or group B. A network with many well-separated eigenvalues can maintain richer structure: multiple factions, nested coalitions, opinions that don’t collapse onto a single axis. The spectral distribution measures what we might call the network’s “cognitive complexity.”
Perhaps a multi-agent alignment research direction could be to design networks with higher cognitive complexity, with the goal of limiting the persuasive influence of any single agent? This is probably more compelling if you expect models to be mostly aligned, and misaligned in mostly distinct ways.
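To make that concrete, here is a rough sketch of how one might probe both quantities in simulation. The specific choices are mine, not from the post: I use the eigengap of the normalized Laplacian as a stand-in for “cognitive complexity” (roughly, how many factions the spectrum can support), and a DeGroot-style neighbor-averaging model with one stubborn agent as a stand-in for single-agent persuasion.

```python
# Rough sketch, not a worked-out proposal. Assumptions on my part: the
# eigengap of the normalized Laplacian stands in for "cognitive complexity",
# and DeGroot-style averaging with one stubborn "persuader" node stands in
# for the persuasion effect of a single agent.
import numpy as np
import networkx as nx


def cognitive_complexity(G: nx.Graph) -> int:
    """Eigengap heuristic: count the low-lying, well-separated eigenvalues
    of the normalized Laplacian (spectral clustering's usual estimate of
    how many factions the network supports)."""
    L = nx.normalized_laplacian_matrix(G).toarray()
    eigvals = np.sort(np.linalg.eigvalsh(L))
    gaps = np.diff(eigvals[: len(eigvals) // 2])
    return int(np.argmax(gaps)) + 1


def persuader_pull(G: nx.Graph, persuader: int = 0, rounds: int = 200) -> float:
    """Mean opinion of the other agents after `rounds` of neighbor averaging,
    with the persuader pinned at 1.0 and everyone else starting at 0.0.
    (Everything converges to 1.0 eventually; this measures short-horizon
    susceptibility to a single stubborn agent.)"""
    A = nx.to_numpy_array(G)
    W = A / A.sum(axis=1, keepdims=True)  # row-stochastic averaging weights
    x = np.zeros(G.number_of_nodes())
    x[persuader] = 1.0
    for _ in range(rounds):
        x = W @ x
        x[persuader] = 1.0  # the persuader never updates
    return float(np.delete(x, persuader).mean())


if __name__ == "__main__":
    dense = nx.erdos_renyi_graph(60, 0.4, seed=0)  # homogeneous, fast-mixing
    modular = nx.connected_caveman_graph(6, 10)    # six tightly knit factions
    for name, G in [("dense", dense), ("modular", modular)]:
        print(f"{name:8s} complexity≈{cognitive_complexity(G)}  "
              f"persuader pull after 200 rounds: {persuader_pull(G):.3f}")
```

On these toy graphs I’d expect the modular network to score higher on complexity and show a smaller persuader pull over a fixed horizon, but whether that correspondence holds in general is exactly the kind of thing your spectral-behavioral direction would need to establish.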