The Great Ideological Conflict: Intuitionists vs. Establishmentarians

This builds on Conflict Theorists vs. Mistake Theorists, though I disagree ever so slightly with a few of Alexander’s examples and some of the details, which I will clarify here.

Generally speaking, the names for the different teams are basically accurate, in my estimation. Mistake Theorists are so named because they typically view most human problems as being driven by mistake-making. Human beings, to Mistake Theorists, are, like all organisms or things created by them, prone to error. It is not that any human really, genuinely wishes harm to come to themselves or others; rather, our mistaken philosophies, momentary lapses in judgement, and honest-but-ultimately-wrong solutions to both personal and society-wide issues are the biggest, if not the only, factor in what ultimately creates suffering.

Mistake Theorists generally believe that the best way for human civilization to reduce suffering as much as possible is for society to teach people principles and values in accordance with reducing suffering, and for people to do their best to abide by those principles and values. Society is not perfect, nor can it totally eliminate suffering or any of its root causes, but it is the best source of error-free or least-error-prone knowledge that we have, having tried and tested different solutions over a time span much longer than the lifespan of a single human. Mistake Theorists would probably advise someone with a major personal problem not to try solving it on their own, and not to place their own sense of judgement above that of trained professionals or experts equipped to deal with those sorts of problems.

So given that there appears to be, according to various authors, a “conflict” with two sides, one of which is named “Conflict Theorists,” we can ask why a conflict of any kind would emerge here, and try to guess at the fundamentals or basic principles from which the existence of such a conflict could be derived or predicted.

Let me try and guess at a basic principle here, expressed in fairly intuitionistic terms:

Mistake Theorists made Conflict Theorists angry, and the latter decided that conflict was reasonable (and not subject to being declared mistake-making).

Let me expand on that parenthetical. Suppose your boss tells you, during your yearly review, that you have been performing below average and are within the “needs-improvement” category. Your work has been declared sub-par, meaning definitively within the “not-good-enough” category. Your response to this, predictably and understandably, is to be slightly upset, and at the very least to question the assessment. It is the definition of “needs improvement / not good enough” that your work must receive a grade in the acceptable category at the next review time (possibly later, but at some point) in order for you to avoid termination.

You have the option of avoiding the self-assignment of blame (and the mental distress it may cause) by choosing to prefer conflict here. By ‘conflict,’ I mean that you choose to disagree with your boss’s characterization of your performance, which might entail anything from verbal disagreement up to and including quitting. This way, you avoid considering yourself to have made any ‘honest’ errors, a belief that would cause you mental distress (especially if your own assessment of your work was fairly high).

The greater your own self-assessment of your work, the less likely you are to agree with any criticism of it; this should be fairly self-evident. However, the more extensively and lengthily your work was reviewed before that assessment was shown to you, the more likely your boss is to disagree with your disagreement, and therefore to assign the label of error to your high self-assessment as well.

For a boss to negatively assess a subordinate’s inner sense of judgement is an inherently risky thing to do, and dramatically increases the chances of conflict. If you ask your subordinate to lower their assessment of their inner sense of judgement in general, you are at the very least asking them to be unhappier on purpose, which is a tall order.

This brings us to what a “Conflict Theorist” is: someone who has recognized this dynamic and has, in an enlightened manner, chosen to sympathize with the subordinate here.

A Conflict Theorist is someone who finds the presence of the following to be an indicator of something fishy:

It is the definition of “needs improvement / not good enough” that your work must receive a grade in the acceptable category at the next review time (possibly later, but at some point) in order for you to avoid termination.

Someone who assigns failing grades (and who is not a schoolteacher) to a person who is obviously trying hard, as on a job, and who is neither a newbie nor inexperienced, is necessarily assigning negative judgement to someone who hasn’t thus far received negative judgement on the quality of their work. So this rating is likely to be anomalous, a priori. They got to where they are right now, somehow.

Also, the one who assigns the failing grade is someone who believes that an unfinished job, or a job not quite done well enough, is worse than a job never started. If that employee was worth keeping, they wouldn’t be told their work “wasn’t good enough”; they would have been given suggestions for what to do next. Each iteration of their work would build upon the last, which would ensure that they were improving and leveling up their skills.

“They got to where they are right now, somehow” is a very important, key component of my model of the problem here.[1] I envision people as being on trajectories where they perform work for long enough periods to come to think of themselves as contenders for being considered long-term professionals or experts in whatever category of work their trajectory is assigned to. For that to be possible, for them to be sitting in front of your desk at all, they have to have received some amount of positive assessment of their work from their peers and former superiors, as well as from their own self-reflection.

The prior probability that your assessment of their performance, if negative, is anomalous (the first or near-first of its kind) is fairly high, in my estimation. Too much negative assessment, and they are either going to improve so much that further negative assessment becomes unlikely too, or they are going to leave their line of work entirely. Therefore, if neither has happened, they can’t have received much negative assessment of their performance so far. You can conclude either that your assessment is off-base, or that even if it isn’t, they are still more likely than not to be surprised by, and therefore disagree with, your assessment. Either way, this is a less-than-stellar outcome, unless you were trying to create conflict to begin with.
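
To make that prior-probability argument concrete, here is a minimal Bayesian sketch in Python. Every number in it is a hypothetical placeholder of my own choosing, purely for illustration, not anything measured:

```python
# Hedged illustration: how anomalous is a first negative review, given a
# track record that got the worker hired and retained? All probabilities
# below are hypothetical placeholders, not measured data.

p_competent = 0.9            # prior: their history got them here, so start high
p_neg_given_competent = 0.1  # chance a competent worker still draws "needs improvement"
p_neg_given_not = 0.7        # chance a genuine underperformer draws it

# Marginal probability of seeing a negative review at all
p_neg = (p_competent * p_neg_given_competent
         + (1 - p_competent) * p_neg_given_not)

# Posterior that the worker is competent despite the negative review
posterior = p_competent * p_neg_given_competent / p_neg

print(f"P(negative review)      = {p_neg:.2f}")      # 0.16: anomalous a priori
print(f"P(competent | negative) = {posterior:.2f}")  # 0.56: barely better than a coin flip
```

On these made-up numbers, the negative review is itself an unlikely event, and even after it arrives, the worker is still more likely competent than not; whichever way the boss leans, the evidence is thin, which is exactly the Conflict Theorist’s complaint.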

Conflict Theorists are likely to give the following explanations for why the “needs-improvement” category seems fishy to them:

  • The prior probability on that assessment seems low.

  • There is a defining line between good enough and not good enough that seems arbitrary and capricious.

  • The presence of such a line is evidence that “upper-management” cannot successfully predict its own requirements.

  • It seems inherently conflict-generating.

  • It seems apathetic and callous to whoever is subject to the defining line.

  • The previous point must be ignored; otherwise, the defining line would be lowered until it sat above only the most extreme situations.

The third bullet point is relevant to the mass layoffs in Silicon Valley in 2022 and 2023. Many commentators have noted that tech executives blamed themselves for the mistakes which led them to fire tens of thousands of employees. But this blame did not fall on them; the only people it could have fallen on were those who had been laid off. I think it is reasonable to conclude that the tech executives behind the decisions to lay off their employees believed that those employees could not have been transformed into profitable assets for the companies they worked for. This raises immediate questions that we have already brought up: Did they know those employees would be net liabilities before whatever economic conditions befell them? Did they know when they hired them, or shortly thereafter? Were the laid-off all underperformers, or were they chosen randomly? Were they from unprofitable segments of the company? Were those segments also judged to be inadequate, and if so, why not just pivot them? Would they consider hiring them back when economic conditions improved?

The main assumption underlying all these questions is that at one point, those employees were determined to be valuable. Also, that the corporation itself bears responsibility for making sure those employees are tasked with making useful, profitable things.

One striking thing I have wondered about is how many, or what percentage, of those laid off have been re-hired somewhere else, in an adjacent or near-adjacent vertical. If most of them have been re-hired, then this suggests that either:

  • Human assets shifted away from the companies that hurt more after the pandemic to ones that hurt less. Or:

  • Companies shook off segments / employees, pivoted, then re-hired.

Companies reported “growing too fast” over the pandemic, which suggests that most laid-off employees will only be re-hired very slowly. As of this writing, layoffs.fyi reports that 348,231 people have been laid off over 2022 and 2023. Statista says there are roughly 5.2 million tech workers in the United States (note, however, that the layoff figure includes workers in other countries).

So, I’m going to make a very rough estimate that ~5% of all tech workers were laid off. This makes sense, as the largest tech companies, which employ the lion’s share of all tech workers, made public statements to that effect, with numbers in roughly that ballpark.
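
As a back-of-the-envelope check on that estimate, using only the two figures quoted above:

```python
# Sanity check on the ~5% estimate, using the figures quoted above.
laid_off = 348_231            # layoffs.fyi total for 2022-2023 (includes non-US workers)
us_tech_workers = 5_200_000   # Statista's rough US tech-worker headcount

print(f"Raw ratio: {laid_off / us_tech_workers:.1%}")  # ~6.7%

# The numerator includes workers outside the US while the denominator is
# US-only, so discounting the non-US share of the layoffs pulls the US
# figure down toward the ~5% ballpark.
```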

My guess is that most of those people are going to be slowly re-absorbed at lower salaries than they were making before. If “growing too fast” meant that the labor market was more favorable to job-seekers than to employers, then shaking everyone off and “trying again” at a slower speed may have been the strategy that tech executives decided on to correct that big pandemic-era “mistake” they made.

I do wonder if that “mistake” was actually just the natural trend of the job market and its expansion at that time, and whether the 2022-2023 response is an artificial way to manipulate the job market and lower hiring costs below what they would normally be. Big tech companies colluding with one another to lower the costs of their highest-paid workers is not unheard of. Furthermore, those same big companies mostly performed all the layoffs at around the same time, and sent public memos to their employees which all said almost exactly the same thing. Group-think is also not unheard of in Silicon Valley.

The “prior probability on that assessment seems low” point could be applied to segments of companies as well as to individual people, and also to, yes, even tech CEOs themselves. This is why I’m skeptical that nearly anyone was actually doing a bad job leading up to those decisions being made. I should also point out that, save for Musk’s assessment of Twitter’s trajectory before he took the helm, it’s not clear that any of the rest of Silicon Valley was headed for certain disaster in the short term because it was 105% of its ideal size. I have a hard time believing that Amazon, for example, would have been doomed to bankruptcy as a corporation if it had instead retained the employees it laid off.

That’s why I am inclined to think that most of those companies’ statements, to the effect that they had made poor decisions during the pandemic that needed to be corrected, are probably exaggerated or spun out of whole cloth.

Tech workers are expensive, but they are also considered capable of generating enough value to make up for it, which is often given as the reason they are so expensive. When you fire them or let them go, you’re not getting rid of just a cost and whatever profit you’ve calculated they bring in; you’re getting rid of that entire trajectory of work. Wherever they get re-hired, they have to start over. So even a lay-off suggests that you’ve found their trajectory of work to be untenable.

I think those companies must have assumed that removing those workers and their associated work-trajectories removed dead-weight costs, and that the resulting companies were permanently leaner. So the message they are sending to the laid-off workers, and to the teams and/or managers they reported to, is that those workers weren’t likely to ever bring in value for the company.

But I think this all points to the general make-up of tech executives being that of Mistake Theorists rather than Conflict Theorists.

So we’ve described the general conflict-generating dynamic that Mistake Theorists have wrought upon themselves, but this last point I’ve just discussed brings us to what I would consider the interesting part of Conflict Theorizing.

Consider what I said before: when you evaluate a person’s performance negatively, you’re not saying that what they’ve done is almost good; you’re saying that it isn’t good. If you do this, you’re incentivized to get rid of not just the person, but also that whole trajectory of work and its associated ideas and concepts.

A mistake is defined to be an action that should not have been taken at all, not an action that would have succeeded had it been executed slightly better. An archer shooting at a target gets many hits and some misses, the landing points scattered in a two-dimensional Gaussian around the aim point. The shots that miss aren’t really mistakes, per se. I’m using the definition of mistake that means “wrong action” in the sense of whether or not to take the action at all.
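
A minimal simulation in Python makes the point; the spread and target radius here are hypothetical, illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical, illustrative parameters: the archer aims at the center (0, 0),
# and execution noise is modeled as an isotropic two-dimensional Gaussian.
sigma = 1.0          # spread of the shots
target_radius = 2.0  # radius of the target
n_shots = 10_000

shots = rng.normal(0.0, sigma, size=(n_shots, 2))       # (x, y) landing points
miss = np.hypot(shots[:, 0], shots[:, 1]) > target_radius

# Every shot was the "right action" (shoot, and aim at the center); the
# misses are execution noise, not mistakes in the sense defined above.
print(f"Miss rate: {miss.mean():.1%}")  # about 13.5% for these parameters
```

A fixed fraction of misses falls straight out of the geometry of the noise; under the definition above, calling any one of them a “wrong action” would be a category error.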

The interesting part is that we can expect that there will be conflicts over entire trajectories of work that have been labeled mistakes at one point or another. These are subjects that I would call “things that would have otherwise been really boring, but have instead become controversial.”

Things like, for example, the Younger Dryas Impact Hypothesis. Look at what Wikipedia has to say:

Members of this group have been criticized for promoting pseudoscience, pseudoarchaeology, and pseudohistory, engaging in cherry-picking of data based on confirmation bias, seeking to persuade via the bandwagon fallacy, and even engaging in intentional misrepresentations of archaeological and geological evidence. For example, physicist Mark Boslough, a specialist in planetary impact hazards and asteroid impact avoidance, has pointed out many problems with the credibility and motivations of individual CRG researchers, as well as with their specific claims for evidence in support of the YDIH and/or the effects of meteor air bursts or impact events on ancient settlements, people, and environments.

Wow! I can’t see why a comet-impact theory should be dismissed as pseudoscience in the same way that Flat-Earth theory would be. Individual researchers’ very credibility as researchers has been questioned, apparently. If those claims are to be taken seriously, that would mean ending the careers of those researchers, as well as superstitiously avoiding the same or similar lines of thought. Those subjects, and the names of those who study them, become taboo.

These are stronger and more extreme instances of mistake-labeling than if one were to simply say that the YDIH needed more work or needed modifications before it could become established science.

Generating a conflict of this magnitude produces odd counter-theories, in the sense that you will observe Mistake Theorist researchers saying “the established science says that X didn’t happen” as a counter to theory X, even though theory X does not immediately imply “no theory other than X” as a corollary. The Mistake Theorists will often claim that theory X does imply “no theory other than X,” and therefore that theory Y, which the establishment currently favors, runs counter to theory X, so that the two cannot coexist.
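
Put in symbols (a small formalization of my own, not something from Alexander’s essay): a positive theory need not negate its rivals, and only an explicitly exclusionary theory does.

```latex
% Asserting a positive theory X does not, by itself, entail the negation
% of an established rival Y:
X \nvdash \neg Y
% The Mistake Theorist's move is to read X as if it asserted
X \wedge \neg Y
% and only under that exclusionary reading are X and Y unable to coexist.
```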

But, generally speaking, scientific theories do not say that a specific phenomenon doesn’t occur. The shape and properties of a scientific theory describe how to predict certain phenomena in a positive way, which is to say that it describes some, but not all, characteristics of a system or object of study. This is why physicists usually say that Newtonian physics is true in the limit of velocities much less than the speed of light (and of weak gravitational fields), and that Relativity is needed otherwise.
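
As a standard textbook illustration of “true in the limit” (not something from the essay itself): relativistic kinetic energy reduces to the Newtonian expression when v is much less than c.

```latex
E_k = (\gamma - 1)\,m c^2,
\qquad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}} \approx 1 + \frac{v^2}{2c^2}
\quad (v \ll c),
\qquad\Longrightarrow\qquad
E_k \approx \tfrac{1}{2} m v^2 .
```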

As I recall, “Flat Earth” theory, as based on a strict literalist Biblical interpretation of creation, is a theory that does assert “no theory other than Flat Earth,” and it was generally argued as counter to the prevailing old- (and round-) Earth theory. Therefore, it faced a generally more plausible argument that it was pseudoscience, because it was exclusionary. Biblical literalists didn’t want evolution being taught in schools.

Geology and geography are sciences where our knowledge gets deeper and more refined over time, such that big chunks are determined sooner and more easily than specific pieces, especially pieces from the very distant past. History has a higher resolution up close, in the not-too-distant past, and a lower resolution as things get more distant. Therefore, new discoveries are often about increasing the resolution in places where it had been much lower before. Theories like the YDIH are not competing with their lower-resolution predecessors; they are attempts to fill in details that were not there before.[2]

Furthermore, it is not clear to me that genuinely mistaken science should be considered the same thing as pseudoscience. The latter label is meant to be pejorative toward the researcher specifically, not just the science.

I make an identification between Mistake Theorists and Establishmentarians for the following very simple reason: to claim that someone else is mistaken in their beliefs, in a situation where it is not clear to the one who ostensibly made the mistake that they actually have, requires doing so from a presumption of authority. The establishmentarian / mistake theorist tells someone not to trust their own judgement, because that judgement has led them to believe what the mistake theorist claims is false.[3]

The Conflict Theorist is an intuitionist, because they choose to follow their own judgement over those who claim to have authority over them.

The Conflict Theorist also finds conflict inevitable, because they cannot actually carry out the Mistake Theorist’s instructions. I can’t do better by thinking I’m worse, and this is simply a fact, as far as I’m concerned. An intuitionist is also more likely to trust their conscience, and is therefore predisposed to actively disagree with the Establishmentarian, per the last two bullet points from the list above:

  • It seems apathetic and callous to whoever is subject to the defining line.

  • The previous point must be ignored; otherwise, the defining line would be lowered until it sat above only the most extreme situations.

One’s conscience might say that it is okay to fight in some situations, especially if “fight” amounts to no more than verbal disagreement.

We should keep an eye out for other locations where the conflict gets interesting.

Possible locations:

  • “New science” or heterodox theories, especially younger or immature ones.

  • Anything that has been called out, or that seems to be the target of active attempts to discredit it.

  • Claims that appear to have suffered from a lack of replicability.

  • Claims that have been subject to counter-claims of academic misconduct.

  • Anything that claims to be a scientific explanation of something usually considered to be “paranormal.”

  • Most strongly, I would suggest claims that have been subject to criticism pertaining to their lack of quality, the presence of mistakes, or overall shoddy work, where that criticism targets hypotheses or projects that have existed for a long time with more than one person working on them, and attempts to dismiss or deny the entire endeavor.

It’s the claims that another researcher isn’t a con artist or committing intentional fraud, but is wrong simply because they are incompetent or unfit to do the work they are doing, of which I am most suspicious.

As a Conflict Theorist, I am predisposed to believing either that only Mistake Theorists commit any real mistakes, or that no one does, and Mistake Theorists are being intentionally malevolent. I actually lean more towards the latter (which implies both), however.

  1. ^

    For the record, I do consider myself to be a Conflict Theorist, if that wasn’t already evident from the way I frame the problem.

  2. ^

    As are theories that posit ancient civilizations that were previously believed not to exist during that time frame and within those locations.

  3. ^

    This is not the case of someone telling an archer who has just shot and missed the target that they have missed, as the archer would agree with that. It is the case of someone telling someone else that they are incorrect about something, where the disagreement persists.
