As a layman who reads a little academic micro- and macroeconomics, I can’t help but think that I am already familiar with “Commoditize Your Complement.” I lack the expertise to frame this properly, but I suspect it relates to concepts like “platform envelopment”, “envelopment of complements”, and “platform stacks” used when discussing competition in platform economies.
Nick Nolan
If Scrooge McDuck’s downtown Duckburg apartment rises in price, and Scrooge’s net worth rises equally, but nothing else changes, the distribution of purchasing power is now more unequal — fewer people can afford that apartment. But nobody is richer in terms of actual material wealth, not even Scrooge. Scrooge is only “richer” on paper. The total material wealth of Duckburg hasn’t gone up at all.
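A minimal sketch with made-up numbers may make this concrete (the figures are purely illustrative): when the apartment is repriced, Scrooge’s paper net worth and the town’s total paper wealth go up, but nothing physical has changed, and fewer residents can afford the apartment.

```python
# Toy illustration (hypothetical numbers): repricing one asset shifts the
# distribution of purchasing power without creating any material wealth.

def can_afford(buyers, price):
    """Count how many buyers could pay the asking price."""
    return sum(1 for w in buyers if w >= price)

# Duckburg, before: Scrooge owns the apartment plus other wealth;
# everyone else holds cash savings.
apartment_price = 100
scrooge_other_wealth = 900
others = [50, 80, 120, 200]

total_paper_wealth_before = apartment_price + scrooge_other_wealth + sum(others)
affordable_before = can_afford(others, apartment_price)

# The apartment doubles in price; the building itself is unchanged.
apartment_price = 200
total_paper_wealth_after = apartment_price + scrooge_other_wealth + sum(others)
affordable_after = can_afford(others, apartment_price)

print(total_paper_wealth_before, total_paper_wealth_after)  # 1450 1550 (paper only)
print(affordable_before, affordable_after)                  # 2 1 (fewer can afford it)
```

On paper the town is 100 richer, but the entire gain sits in the revalued apartment; the stock of real goods is identical, while the number of residents who can buy the apartment has fallen.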
Cities generate wealth from economies of agglomeration. Bigger cities have higher productivity than smaller cities, and some of that wealth naturally flows into the price of land. Here is a good introduction: Scale Economies and Agglomeration
People do become richer in terms of actual material wealth in cities. The source of that wealth is not the property and land itself, but the utility the land has in producing positive agglomeration effects. Land is one of the primary ‘factors of production’ (land, labor, capital). Just like farmland, urban land derives its value from the output produced on it; the soil itself has no value without cultivation. Urban land happens to be more productive than farmland, so it becomes more valuable.
The solution to this problem is better urban planning. Inefficient land use is a bottleneck for growth in cities; land should be treated as a scarce resource and used very efficiently. Simply building more, building densely, and providing good infrastructure helps reduce the cost of living in high-productivity areas.
I have started to see the instrumental convergence problem as part of the human-to-human alignment problem.
E. Glen Weyl in “Why I Am Not A Technocrat”
Similarly, if we want to have AIs that can play a productive role in society, our goal should not be exclusively or even primarily to align them with the goals of their creators or the narrow rationalist community interested in the AIAP. Instead it should be to create a set of social institutions that ensures that the ability of any narrow oligarchy or small number of intelligences like a friendly AI cannot hold extremely disproportionate power. The institutions likely to achieve this are precisely the same sorts of institutions necessary to constrain extreme capitalist or state power.
[...]
A primary goal of AI design should be not just alignment, but legibility, to ensure that the humans interacting with the AI know its goals and failure modes, allowing critique, reuse, constraint etc.
Weyl’s technocrat critique is valid on the personal level. It hit me hard. I have a tendency to drift from important, messy problems toward interesting but difficult problems that might have formal solutions. (Is there a name for this cognitive bias?) The LessWrong community reinforces this drift.
I argue that the instrumental convergence and AI alignment problems are framed incorrectly, in ways that make them more interesting to think about and seemingly easier to solve.
New framing: intelligent agents (human and nonhuman) are constantly aligning to each other. Solving instrumental convergence is equivalent to solving society. We can’t solve it once and for all, but we can create processes and institutions that adjust to and manage the problems that arise.
Typical scenarios are superpower + superintelligence, ruling party + superintelligence, Zuck + superintelligence, Chairman Xi + superintelligence, Alphabet board of directors + superintelligence.
From Buddhist Phenomenology by Henk Barendregt