All of these examples seem like different variations of how to account for problem information.
I am reminded of a blog post about algorithms in scientific computing. Boo-hiss, I know, but—the claim of the blog post is that algorithmic efficiency is about problem information: the more information the algorithm can capture about the problem, the more efficient it can be. The example in support of the claim is the solving of linear systems of equations, and I establish relevance in this way: linear systems of equations are used in linear programming, which was co-invented by Leonid Kantorovich during WWII in the Soviet Union for solving problems of the centralized economy. It features in the book Red Plenty, reviewed at SlateStarCodex many moons ago. In this way the blog post bears on how the abstractions underlying the story of this section of the book work.
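To make the claim concrete (this is my own illustration, not necessarily the blog post's example): solving a linear system Ax = b with a generic dense solver costs O(n³), but if the algorithm is told the matrix is tridiagonal, a banded solver exploits that structure and finishes in O(n). Same problem, same answer; the extra problem information is what buys the efficiency. A minimal sketch using NumPy and SciPy:

```python
# Sketch: the same tridiagonal system solved two ways. The structure-blind
# dense solve ignores what we know about A; the banded solve uses it.
import numpy as np
from scipy.linalg import solve, solve_banded

n = 2000
rng = np.random.default_rng(0)

# Build a diagonally dominant tridiagonal matrix A and a right-hand side b.
main = rng.uniform(10, 20, n)
lower = rng.uniform(-1, 1, n - 1)
upper = rng.uniform(-1, 1, n - 1)
A = np.diag(main) + np.diag(lower, -1) + np.diag(upper, 1)
b = rng.uniform(-1, 1, n)

# Structure-blind: O(n^3) dense LU factorization of the full matrix.
x_dense = solve(A, b)

# Structure-aware: pack just the three diagonals into SciPy's banded
# storage format and solve in O(n).
ab = np.zeros((3, n))
ab[0, 1:] = upper    # superdiagonal
ab[1, :] = main      # main diagonal
ab[2, :-1] = lower   # subdiagonal
x_banded = solve_banded((1, 1), ab, b)

# Both paths agree; only the cost differs.
assert np.allclose(x_dense, x_banded)
```

The banded solver isn't cleverer in any deep sense; it simply refuses to do work that the problem information says is unnecessary, which is exactly the blog post's point as I understand it.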
Applying these intuitions, it feels to me like the attribution of the success of Fordism and Taylorism to centralization is a mistake: the value of the systems they built came from capturing important additional information about the problems of industry, not from centralization per se. Centralization is the means of transmitting the important information to where it needs to go, like the needs of different places for materials, labor, parts, or the existence of better processes and procedures. The heuristic for the success of an org becomes whether it can gather and process the important information for the problem(s) the org is solving while avoiding too much other stuff. Avoiding other stuff feels necessary because it seems inevitable to me that anything extraneous will manifest as noise from the perspective of the problem information.
Also under this lens, the arguments from labor and counterculture can be recast as ignoring critical information about problems (I would put the environmental consequences of industry here) or entire classes of problems (I would put all the social ones here).
The problem information lens is easy to apply for already well-defined problems, like speeding up an assembly line or finding the correct combination of blast-furnace inputs, but I notice feeling distinctly unsatisfied with it from the standpoint of uncovering new problems. This’ll need more chewing.