You should engage with Ray Dalio’s theories. He spent his career and life successfully mapping the rise and fall of civilizations. This is the main feedback.
This is ambitious work, but too ambitious to critique seriously as is. Since this is a distilled version, it needs crystal-clear logic in the arguments you chose to highlight. In general, there may be good ideas here, but you take too much for granted.
Some pushback on your ideas:
On your 4 Horsemen, I disagree to various degrees with all of them. Here are some points to get started.
What about internal threats and internal pressure?
I agree most in spirit here. But overall we do see offspring increase with abundance, if you look at population-level growth over time rather than fecundity. Keep in mind that Earth has limited resources, but what if you can keep scaling? I question the principle as is.
This is clever and familiar from the social sciences, but it still seems poorly argued, at least as a stand-alone idea you can generalize beyond notable examples.
Innovation and critical inquiry often thrive under pressure. You also have to argue why you think myth-building is the main foundation of civilizations compared to, say, epistemic coherence. Coherence is actually important to coordination whether the epistemology is sound or not.
Good effort making your point, but administration easily grows complex, and you cannot just circumvent this when you scale. The iron law of oligarchy is a thing you can look up, and it applies to successful and failing businesses alike. Trust me, I know… You don’t need a “successful” state to have a complex administration that stagnates.
(Also, the thermodynamic drift invocation bothers me a bit, but I get the point; I’ll try to be less grumpy.)
On AI failure modes, I’d like to know why you focused on these in particular.
This is helpful pushback. You’re right that the distillation takes too much for granted. Compressing the 100k+ word framework into ~1000 words lost the load-bearing bits.
On Dalio: agreed, and I should engage his empirical work more explicitly.
On the Horsemen: I think we may be agreeing more than it appears (e.g., on bureaucratic complexity being inevitable), but the post failed to show that.
I failed to calibrate to the audience here, having been inside my own work for too long. I’ll reconsider my approach.
Regarding these AI failure modes, they emerge systematically as violations of one or more of the Four Virtues (Integrity, Fecundity, Harmony, Synergy), which are themselves derived as the optimal solutions to the Four Axiomatic Dilemmas of the SORT axes. This was intended as evidence that the framework is something real and useful.
Glad you found the feedback somewhat useful.
Yes, LW is a tough crowd. It was so 20 years ago and it is so today. I am not a good representative of LW culture, but I do think that, no matter where you post this, it would be useful to have an 8k-word summary as well.
I suspect it is inevitable that a 1k-word version loses load-bearing material and confuses parts of the LW audience, but you need the hook to attract readers to the 8k-word summary.