Crocker’s rules.
I’m nobody special, and I wouldn’t like the responsibility which comes with being ‘someone’ anyway.
Reading incorrect information can be frustrating, and correcting it can be fun.
My writing is likely provocative because I want my ideas to be challenged.
I may write like a psychopath, but that’s what it takes to write without bias; consider that an argument against rationality.
Finally, beliefs don’t seem to be determined by knowledge and intelligence alone, but also by experience and personality. Whoever claims to be fully truth-seeking is not being entirely honest.
I think the problem with Moloch is one of one-pointedness, similar to metas in competitive video games. If everyone has their own goals and styles, then many different things are optimized for, and everyone can get what they personally find valuable: a sort of biodiversity of values.
When, however, everyone starts aiming for the same thing and collectively agrees that only that thing has value (even at the cost of personal preferences), then all choices collapse into a single path which must be taken. This is Moloch. Money is the classic example of an optimization target that culture warns against pursuing at the cost of everything else. An even greater danger is that a super-structure is created which, instead of serving the individuals within it, grows at their expense. This is true of “the system”, but I think it’s a very general Molochian pattern.
Strong optimization towards a metric quickly results in people gaming that metric, and Goodhart’s law kicks in. Furthermore, “selling out” one’s principles and good taste, and otherwise paying a high price to achieve one’s goals, stops being frowned upon and instead becomes the expected behaviour (for example, lying in job interviews is now the norm, as is studying things which might not interest you).
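As a toy illustration of the metric-gaming point (my own sketch, not anything from the original discussion): suppose an agent splits a fixed effort budget between real work and gaming the measurement. The scoring functions and numbers below are invented for the example; the only claim is that once the proxy rewards gaming more cheaply than work, a proxy-optimizer ends up producing zero real value.

```python
# Toy Goodhart sketch (illustrative assumptions, not a real model):
# an agent divides a fixed effort budget between real work and gaming
# the metric. Gaming inflates the measured score but adds nothing of
# true value, so optimizing the proxy drives real value to zero.

def measured_score(work, gaming):
    return work + 3 * gaming      # the proxy rewards gaming more cheaply

def true_value(work, gaming):
    return work                   # only real work actually helps anyone

BUDGET = 10

# Sweep every split of the budget and pick the best one per objective.
best_for_metric = max(range(BUDGET + 1),
                      key=lambda w: measured_score(w, BUDGET - w))
best_for_value = max(range(BUDGET + 1),
                     key=lambda w: true_value(w, BUDGET - w))

print("metric-optimizer: work=%d gaming=%d true value=%d"
      % (best_for_metric, BUDGET - best_for_metric,
         true_value(best_for_metric, BUDGET - best_for_metric)))
# -> work=0 gaming=10 true value=0
print("value-optimizer:  work=%d gaming=%d true value=%d"
      % (best_for_value, BUDGET - best_for_value,
         true_value(best_for_value, BUDGET - best_for_value)))
# -> work=10 gaming=0 true value=10
```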
But I take it you’re referring to the link I shared rather than LW’s common conception of Moloch. Consciousness and qualia emerged in a materialistic universe, and by the Darwinian tautology, there must have been an advantage to these qualities. The illusion of coherence is the primary goal of the brain, which seeks to tame its environment. I don’t know how or why this happened, and I think that humans will dull their own humanity in the future to avoid the suffering of lacking agency (SSRIs and stimulants are the first step), such that the human state is a sort of island of stability. I don’t have any good answers on this topic, just some guesses and insights:
1: The micro-dynamics of humanity (the behaviour of individual people) are different from the macro-dynamics of society, and Moloch emerges as the number of people n grows. Many ideal things are possible at low n almost for free (even communism works at low n!), while at high n we need laws, rules, regulations, customs, hierarchical structures of stabilizing agents, and so on, and even then our systems are strained. There seems to be a law, similar to the square-cube law, which naturally limits the size that things can have (the solution I propose to this is decentralization).
2: Metrics can “eat” their own purpose, and creations can eat their own creators. If we created money in order to get better lives, that purpose can be corrupted so that we degrade our lives in order to get more money. Morality is another example of something which was meant to benefit us but now hangs like a sword above our heads. AGI is trivially dangerous because it has agency, but it seems that our own creations can harm us even if they have no agency whatsoever (or maybe agency can emerge, similar to how ideas take on a life of their own memetically?).
3: Perhaps there can exist no good optimization metrics (which is why we can’t think of an optimization metric which won’t destroy humanity when taken far enough). Optimization might just be the collapsing of many-dimensional structures into low-dimensional ones (meaning that all gains are made at an expense, a law of conservation). Humans mostly care about meeting needs, so we minimize thirst and hunger rather than maximizing water and food intake. This seems like a healthier way to prioritize behaviour (see the rough sketch below). “Wanting more and more” seems more like a pathology than natural behaviour: people seek the wrong thing because they don’t understand their own needs (e.g. attempting to replace the need for human connection with porn), and the dangers of pathology used to be limited because reality gatekept most rewards behind healthy behaviour. I don’t think it’s certain that optimality/optimization/self-replication/cancer-like-growth/utility are good in themselves, as we assume. They’re merely processes which destroy everything else before destroying themselves, at least when taken to extremes. Perhaps the lesson is that life ceases when anything is taken to the extreme (a sort of dimensional collapse), which is why Reversed Stupidity Is Not Intelligence even here.
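A rough sketch of the needs-versus-maximization idea above (my own toy; the need names, threshold, and budget are invented for illustration): a satisficer spends effort topping each need up to “enough” and then stops, while a maximizer pours the whole budget into the single most rewarding dimension and lets every other dimension collapse to zero.

```python
# Satisficing vs. maximizing (illustrative toy, not a claim about any
# real agent). A satisficer fills each need up to a threshold and then
# stops; a maximizer spends the whole budget on one dimension.

THRESHOLD = 1.0   # "enough" of each need (made-up number)
BUDGET = 6.0      # total effort available (made-up number)

def satisficer(needs):
    """Top every need up to the threshold, then stop spending."""
    levels = dict(needs)
    budget = BUDGET
    for name in levels:
        spend = min(budget, max(0.0, THRESHOLD - levels[name]))
        levels[name] += spend
        budget -= spend
    return levels

def maximizer(needs):
    """Pour the entire budget into a single dimension."""
    levels = dict(needs)
    levels["food"] += BUDGET
    return levels

needs = {"food": 0.0, "water": 0.0, "connection": 0.0, "rest": 0.0}
print("satisficer:", satisficer(needs))   # every need reaches 1.0
print("maximizer: ", maximizer(needs))    # food=6.0, everything else 0.0
```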