Pretty much. If an intervention is well outside of the set of experiences of your population, there’s probably a reason for that. Perhaps it’s just too new, but it’s likely that it’s inconsistent with the way the culture usually functions (its values as actually implemented) and/or has fairly obvious side effects.
The simplest and most useful answer is that heritability tells you the proportion of variation that environmental factors don’t control*. Traits with very high heritability** are generally going to be worse targets for intervention than traits with low heritability.
*In the range of environments over which the data was collected. The heritability of a trait as measured in Somalia or North Korea may be much lower than as measured in America. You can interpret this as meaning that there is much more hope for useful intervention in Somalia or North Korea, although the practical difficulties may be considerable.
**Some relevant traits are nearly 100% heritable, unfortunately. This includes executive function, which governs working memory and impulse control. Any non-biochemical intervention aimed at improving these traits is unlikely to succeed.
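The footnotes above can be made concrete with a toy simulation. Heritability here is just the share of phenotypic variance attributable to genes, h² = Var(G) / (Var(G) + Var(E)), so the same genetic variance yields a lower measured heritability when the sampled environments vary more widely (the Somalia-vs-America point). The function name and standard deviations below are illustrative, not from the source:

```python
import random

random.seed(0)

def heritability(genetic_sd, env_sd, n=100_000):
    """Estimate h^2 = Var(G) / (Var(G) + Var(E)) by simulating a trait
    as the sum of an independent genetic and environmental component."""
    genotypes = [random.gauss(0, genetic_sd) for _ in range(n)]
    environments = [random.gauss(0, env_sd) for _ in range(n)]
    phenotypes = [g + e for g, e in zip(genotypes, environments)]

    def var(xs):
        mu = sum(xs) / len(xs)
        return sum((x - mu) ** 2 for x in xs) / len(xs)

    return var(genotypes) / var(phenotypes)

# Identical genetic variance in both cases; only the environmental
# spread differs. A wider range of environments lowers measured h^2.
h_narrow = heritability(genetic_sd=1.0, env_sd=0.5)  # ~0.8
h_wide = heritability(genetic_sd=1.0, env_sd=2.0)    # ~0.2
print(round(h_narrow, 2), round(h_wide, 2))
```

Note that nothing about the trait itself changed between the two runs; only the population of environments over which it was measured.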
True, but “high and stable heritability” across hundreds (perhaps thousands) of attempted interventions is a pretty good description of the real-world results of education research and practice. See Freddie DeBoer’s “Education Doesn’t Work” for a brief treatment or Kathryn Paige Harden’s The Genetic Lottery for a book-length version.
So how should Armenia have retained Nagorno-Karabakh?
Use the Iraqi playbook. In the kinetic phase of the war, Armenia is probably hopeless. So make only a token show of resistance.
Before Azerbaijan takes over NK, scatter weapons caches to your co-ethnics. Train NK locals as insurgents. Make sure your border is permeable to insurgents; give them a place to rest, recover, and prepare.
Don’t let Azerbaijan consolidate its control. Use ambushes, snipers, and IEDs to discourage Azerbaijani troops from leaving their compounds. When the invaders make an enemy (and they will, lots of them), give that enemy a weapon. When the invaders make a friend, give that friend and his entire family a hideous death. Let people know that collaborators get closed-casket funerals (and then bomb the funerals).
Provoke the invaders into heavy-handed response, then put videos of the massacres on YouTube (CNN, if you can). Make their allies pay in lives and embarrassment. Portray your freedom fighters as heroes standing tall against brutal oppression.
It’s a horrible project. War usually is, and insurgency is worse than most kinds of war. But it could be done. Eventually Azerbaijan would probably leave, simply because nobody sane wants to stay in the hellhole you’ve created. Victory!
The other reason, as noted by Clausewitz, is that the enemy leader is the only person who can order their army to surrender. If you kill them, victory gets much harder to achieve.
In the first case you cite, you’ve misidentified your enemy. You’re not fighting the nation, you’re fighting some subset of it. The usual response is to identify a significant subset that opposes your enemy subset and supply them weapons. Be careful—a lot of Afghan anti-American insurgents started out as US-funded anti-Soviet insurgents. The enemy of your enemy often stops being your friend when your first enemy has fled.
For the second case—the enemy is probably not stupid or politically naive (they’re leading a country, after all). Anyone within their borders who impedes their prosecution of a war will very likely be imprisoned and replaced (and probably executed). You may see the occasional Oskar Schindler type who’s willing to take the risk and cunning enough to carry it off, but that’s pretty rare and it’s far too slim a reed to support a realistic military strategy.
Once the enemy’s tanks are rolling, the war will be decided in a matter of days or weeks—no time to go about changing the cultural attitudes of an entire population!
Contemporary war happens in two phases. The first phase involves tanks and planes and lasts days or weeks. The second phase involves putting boots on the ground and asserting the victor’s will over the vanquished. As you may imagine, the second phase involves a lot of human rights abuses.
America is great at the first phase, but is generally unwilling to admit that the second phase actually exists. Therefore, American wars tend to involve quick kinetic victories that degenerate into insurgencies and civil wars almost immediately. Think Iraq or Afghanistan.
Successful second-phase war looks like what China is doing in Tibet, Xinjiang, and Hong Kong. Constant direct subjugation of the losing population, lots of propaganda, and creating an environment where any hint of resistance is directly confronted and destroyed. Eventually the population just gives up and stops resisting. The process seems to be essentially complete in Tibet, well in progress in Xinjiang, and just beginning in Hong Kong.
War is hell, but there are different circles in hell.
Whoops, this was meant as a response to the post, not ChristianKI’s comment.
I think you may overestimate how much control over an enemy’s internal politics you can reasonably expect. The enemy is going to be as hardened as possible against your influence and will assuredly establish strong social norms against yielding to your influence, for values of “strong” that look like “succumbing to enemy pressure is treason, punishable by death”. Nations pull together in war.
Until roughly 1980, US corporations did lots of (paid) training. Some still do; McDonald’s operates Hamburger University. They found that many new hires left the company soon after training—the companies couldn’t capture the value of the training very well. Because of that they shifted toward hiring college graduates (pre-trained in general skills, if not in company specifics, which don’t travel well anyway) and, later, unpaid internships.
IQ tests are designed to produce a bell curve with a mean at 100 and a standard deviation of 15. That’s inherent to the definition of IQ. Actual implementations aren’t perfect, but they’re not far off.
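Since IQ is defined by that rescaling rather than by any raw test score, the arithmetic is just a z-score transform. A toy sketch (real tests norm against a large reference population rather than the test-takers themselves, and the raw scores below are invented for illustration):

```python
import statistics

def to_iq(raw_scores):
    """Rescale raw test scores to the IQ convention:
    mean 100, standard deviation 15."""
    mu = statistics.mean(raw_scores)
    sigma = statistics.pstdev(raw_scores)
    return [100 + 15 * (x - mu) / sigma for x in raw_scores]

raw = [12, 18, 25, 31, 44]  # hypothetical raw scores
iqs = to_iq(raw)
print([round(q) for q in iqs])  # rescaled: mean 100, SD 15 by construction
```

The bell shape itself comes from the reference population’s score distribution; the transform only pins down the mean and spread.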
This isn’t really my field, and I see your point. The poster asked for other studies so I linked a study I’d recently seen. It’s less about me endorsing the study than about trying to provide an entry point into the relevant literature.
Can Super Smart Leaders Suffer From Too Much of a Good Thing? The Curvilinear Effect of Intelligence on Perceived Leadership Behavior and references therein.
Fair enough. I’m a chemist by training, so I described what I know.
Actually, when these theories are in competition researching phlogiston looks exactly like researching the new chemistry. What I mean is that even scientists holding on to the phlogiston theory will be aware of the results that favor the new chemistry and will design experiments specifically so that the results expected by one theory will be easily distinguishable from the predictions of the other theory. As evidence piles up, both theories will be modified by their adherents to explain the experimental results; the worse theory will require more modification but the better one probably wasn’t perfect. Eventually someone writes a big review paper that summarizes the work in the field and comes out strongly in favor of oxygen-based theories; if there’s no serious further debate the writers of future textbooks will refer to the big review paper.
I’m suggesting there’s a common denominator which all morally relevant agents are inherently cognizant of.
This naturally raises the question of whether people who don’t agree with you are not moral agents or are somehow so confused or deceitful that they have abandoned their inherent truth. I’ve heard the second version stated seriously in my Bible-belt childhood; it didn’t impress me then. The first just seems … odd (and also raises the question of whether the non-morally-relevant will eventually outcompete the moral, leading to their extinction).
Any position claiming that everyone, deep down, agrees tends to founder on the observation that we simply don’t—or to seem utterly banal (because everyone agrees with it).
Indeed. A certain coronavirus has recently achieved remarkable gains in Darwinist terms, but this is not generally considered a moral triumph. Quite the opposite, as a dislike for disease is a near-universal human value.
It is often tempting to use near-universal human values as a substitute for objective values, and sometimes it works. However, such values are not always internally consistent because humanity isn’t. Values such as disease prevention came into conflict with other values such as prosperity during the pandemic, with some people supporting strict lockdowns and others supporting a return to business as usual.
And there are words such as “justice” which refer to ostensibly near-universal human values, except that people don’t always agree on what those values are or what they demand in any specific case.
I think a simpler way to state the objection is to say that “value” and “meaning” are transitive verbs. I can value money; Steve can value cars; Mike can value himself. It’s not clear what it would even mean for objective reality to value something. Similarly, a subject may “mean” a referent to an interpreter, but nothing can just “mean” or even “mean something” without an implicit interpreter, and “objective reality” doesn’t seem to be the sort of thing that can interpret.
I think sociopaths are likely underrepresented in the physical sciences. Sociopaths’ defining method is the creation of social realities for others to inhabit, and it’s very hard to use that when you’re in the lab mucking with vacuum systems or running rats through mazes or whatever. Sociopaths are much more likely to be attracted to business or politics, with a few in the humanities. What sociopaths there are in science probably gravitate toward positions where they have control over tangible resources (e.g. grants).
OTOH, Aspergians like myself seem to be overrepresented in the physical sciences, partly because the relative distance from social constructs appeals to us.