joshuabecker
Practical Considerations Regarding Political Polarization
It’s worth considering the effects of the “exploration/exploitation” tradeoff: decreasing coordination/efficiency can increase the efficacy of search in problem space over the long run, precisely because efforts are duplicated. When efforts are duplicated, you increase the probability that someone will find the optimal solution. When everyone is highly coordinated, people all look in the same place and you can end up getting stuck in a “local optimum”—a place that’s pretty good, but can’t be easily improved without scrapping everything and starting over.
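A minimal sketch of that intuition (my own toy example, not a model from the literature): independent hill climbers starting from random points usually beat a fully coordinated group that all searches from the same place, because duplicated effort samples more of the landscape.

```python
# Toy illustration (assumed setup, not any published model): on a rugged landscape,
# a coordinated group that starts in one place gets stuck at one local optimum,
# while duplicated independent searches are likely to find something better.
import random

random.seed(0)

# A rugged 1-D "fitness landscape": solution i has a random quality score.
N = 1000
landscape = [random.random() for _ in range(N)]

def hill_climb(start):
    """Greedy local search: move to the better neighbor until stuck at a local optimum."""
    i = start
    while True:
        neighbors = [j for j in (i - 1, i + 1) if 0 <= j < N]
        best = max(neighbors, key=lambda j: landscape[j])
        if landscape[best] <= landscape[i]:
            return landscape[i]  # local optimum reached
        i = best

climbers = 50

# Coordinated: everyone starts from the same point and finds the same local optimum.
coordinated = hill_climb(N // 2)

# Uncoordinated: duplicated, independent searches from random starting points.
independent = max(hill_climb(random.randrange(N)) for _ in range(climbers))

print(f"coordinated group (one starting point): {coordinated:.3f}")
print(f"{climbers} independent searchers (best found): {independent:.3f}")  # almost always higher
```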
It should be noted that I completely buy the “lowest hanging fruit is already picked” explanation. The properties of complex search have been examined in some depth by Stuart Kauffman (“NK space”). These ideas were developed with biological evolution in mind but have been applied to problem solving. In essence, he quantifies the intuition that you can improve low-quality things with a lot less search time than it takes to improve high-quality things.
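A toy version of that intuition (my own simplification, not Kauffman’s actual NK model): if candidate solutions have quality drawn i.i.d. uniformly on $[0,1]$, the chance that a blind trial beats your current quality $q$ is $1-q$, so the expected number of trials needed to find any improvement is

\[
\mathbb{E}[\text{trials to improve on } q] = \frac{1}{1-q},
\]

about 2 trials at $q = 0.5$ but about 100 at $q = 0.99$.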
These are precisely the types of spaces in which coordination/efficiency is counterproductive.
Can you help me understand why you see this as a coordination problem to be solved? Should I infer that you don’t buy the “lowest hanging fruit is already picked” explanation?
Regarding the apparent non-scaling benefits of history: what you call the “most charitable” explanation seems to me the most likely. Thousands of people work at places like CERN and spend 20 years contributing to a single paper, doing things that simply could not be done by a small team. Models of problem-solving on “NK space”-type fitness landscapes also support this interpretation: fitness improvements become increasingly hard to find over time. As you’ve noted elsewhere, it’s easier to pluck low-hanging fruit.
I assume by ‘linear’ you mean directly proportional to population size.
The diminishing marginal returns of some tasks, like the “wisdom of crowds” (concerned with forming accurate estimates), are well established and taper off quickly regardless of the difficulty of the task: the error basically follows the law of large numbers and sampling error (see “A Note on Aggregating Opinions”, Hogarth, 1978). This glosses over some potential complexity, but you’re unlikely to ever get much benefit from more than a few hundred people, if that many.
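To make that concrete (a back-of-the-envelope sketch with made-up numbers, not figures from Hogarth): the standard error of the crowd’s mean estimate shrinks like sigma over the square root of n, so the gain from each additional estimator collapses quickly.

```python
# Rough sketch of why crowd-estimate accuracy saturates quickly: the standard
# error of the mean falls off as sigma / sqrt(n). Numbers are illustrative only.
import math

sigma = 10.0  # assumed spread of individual estimates (arbitrary units)

for n in [1, 10, 100, 1_000, 10_000]:
    se = sigma / math.sqrt(n)
    print(f"n = {n:>6}: standard error of crowd mean ~ {se:.2f}")

# Going from 100 to 10,000 people only cuts the error by another factor of 10,
# which is why a few hundred estimators usually capture most of the benefit.
```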
Other tasks, such as problem solving in a complex fitness landscape, do not see such quickly diminishing returns (see work on “Exploration and Exploitation”, especially in NK space). Supposing the number of possible solutions to a problem is much greater than the number of people who could feasibly work on it (e.g., the population of creative and engaged humans), then as the number of people increases, the probability that someone finds the optimal solution increases. Coordinating all those people is another issue, as is the potential opportunity cost of having so many people work on the same problem.
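One crude way to quantify this (my own toy assumption, not a result from the NK literature): if each of $n$ searchers independently finds the optimum with some small probability $p$, then

\[
P(\text{at least one searcher finds the optimum}) = 1 - (1 - p)^n,
\]

which keeps improving substantially as $n$ grows whenever $p$ is small, unlike the crowd-estimation case above.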
However, in my experience, this difference between problem-solving and wisdom-of-crowds tasks is often glossed over in collective intelligence research.
Update: this is a pretty large field of research now. The Collective Intelligence Conference is going into its 7th year.
Are you still in Chicago? There was recently a gathering at the Art Museum garden with ~30 people in attendance, and a few people were discussing trying to keep the momentum, myself included. If you are around, I would like to invite you to give it another go. Regardless of your current location, I’d be curious to hear more details about your particular experience in this locale.
E[x] = 0.5 even for the frequentist, and that’s what we make decisions with, so focusing on p(x) is a bit of misdirection. The whole frequentist-vs-Bayesian culture war is fake. They’re both perfectly consistent with well-defined questions. (They have to be, because math works.)
And yes to everything else, except...
As to whether god plays dice with the universe… that is not in the scope of probability theory. It’s math. Your Bayesian is really a pragmatist, and your frequentist is a straw person.
Great post!
Good thing the author is dead!
I like to think that… facing the only true existential threat, the author found the cognitive limits of rationality and got the fear. So, unbeknownst to themself, they summoned ex machina an article of Faith to keep them warm, for the night is dark and full of terror.
If I’m interested in learning about the claims made by the science/study of decision-making, and not looking to make decisions myself (so perhaps exercises don’t matter?), would that change your recommendation? You can further assume that I am moderately well trained in probability theory.
Is the code for this available?
Sure, though “why is science slowing down” and “what should we do now” are two different questions. If the answer to “why is science slowing down” is simply that it’s getting harder, then there may be absolutely nothing wrong with our coordination, and no action is required.
I’m not saying we can’t do even better, but crisis-response is distinct from self-improvement.