Cultural/moral maturity (in a civilization) has never been observed before, and neither has technological maturity. Scalable production of a new kind of thing brings its abundance in sight, which wasn’t a concern earlier, while it couldn’t be scaled. A moderate level of AI alignment or of cultural change is not an equilibrium if these things are anchored to scalable resources (effective cognition and coordination, fast subjective serial time). Instead they reach extremes of a kind never observed before those resources become scalable.
Are you trying to say that for any X, instead of X-maturity, we should expect X-foom until the marginal returns get too low?
A pre-abundance precedent about X offers poor framing for thinking about the consequences of discovering a scalable process for producing X. Before abundance, X is artisanal, quirky, and path-dependent; the extremes are rare and dysfunctional, so people don’t worry about it too much. There is security in it looking like an equilibrium while not being truly settled, so that people can still influence things.
Abundance brings maturity and changes the character of the equilibrium. So not necessarily foom, just a promise of maturity at some point, which wouldn’t have been as salient before there was a scalable process of production. And there is an excuse for ignoring the possibility even longer, because of the total lack of historical precedent (for the associated problems).
I’d be interested in hearing why you think that cultural/moral/technological/mathematical maturity is even possible or eventually likely (as opposed to one just staying immature forever[1]), assuming you indeed do think that.
[1] which seems more likely to me
I mean “maturity” merely in comparison to how we view what can currently happen, such as the baseline level of competence in civilization-level governance, or what individual people are capable of. Maturity compared to that baseline washes away all the currently relevant fiddly things, replacing them with settled processes.
These new processes are truly settled, so whatever new concerns become important then, the new baseline won’t be overturned. The analogy with technological maturity is that the laws of physics, together with the ways of getting things done within them, form a fixed problem statement, so new baselines of effectiveness get locked in.