For example, in many ways nonsense is a more effective organizing tool than the truth. Anyone can believe in the truth. To believe in nonsense is an unforgeable demonstration of loyalty. It serves as a political uniform. And if you have a uniform, you have an army.
This reminds me of the following passage from We Need to Talk About Kevin by Lionel Shriver:
But keeping secrets is a discipline. I never used to think of myself as a good liar, but after having had some practice I had adopted the prevaricator’s credo that one doesn’t so much fabricate a lie as marry it. A successful lie cannot be brought into this world and capriciously abandoned; like any committed relationship it must be maintained, and with far more devotion than the truth, which carries on being carelessly true without any help. By contrast, my lie needed me as much as I needed it, and so demanded the constancy of wedlock: Till death do us part.
Possible additional factor: The truth is frequently boring—it helps to add some absurdity just to get people’s attention. Once you’ve got people’s attention, proof of loyalty can come into play.
When they say things like “in cognitive science, Bayesian reasoner is the technically precise codeword that we use to mean rational mind,” they really do mean it. Move over, Aristotle!
Of course, in Catholicism, Catholic is the technically precise codeword that they use to mean rational mind. I am not a Catholic or even a Christian, but frankly, I think that if I had to vote for a dictator of the world and the only information I had was whether the candidate was an orthodox Bayesian or an orthodox Catholic, I’d go with the latter.
The only problem is that this little formula is not a complete, drop-in replacement for your brain. If a reservationist is skeptical of anything on God’s green earth, it’s people who want to replace his (or her) brain with a formula.
To make this more concrete, let’s look at how fragile Bayesian inference is in the presence of an attacker who’s filtering our event stream. By throwing off P(B), any undetected pattern of correlation can completely foul the whole system. If the attacker, whenever he pulls a red ball out of the urn, puts it back and keeps pulling until he gets a blue ball, the Bayesian “rational mind” will conclude that the urn is entirely full of blue balls. And Bayesian inference certainly does not offer any suggestion that you should look at who’s pulling balls out of the urn and see what he has up his sleeves.
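The filtering attack is easy to demonstrate in a few lines (a hypothetical sketch, not anyone’s actual code): the urn is really half red, the attacker silently re-draws until he gets blue, and a conjugate Beta–Bernoulli learner dutifully concludes the urn is all blue.

```python
# Hypothetical sketch: a Beta-Bernoulli learner watching a filtered stream.
# The urn is really 50% blue, but the attacker re-draws until he gets blue.

# Beta(1, 1) uniform prior over the urn's blue fraction.
blue_count, red_count = 1, 1

for _ in range(1000):
    # The attacker's filter guarantees every ball we see is blue,
    # regardless of the urn's true composition.
    observed = "blue"
    if observed == "blue":
        blue_count += 1
    else:
        red_count += 1

# Posterior mean of the blue fraction after 1000 filtered observations.
posterior_mean = blue_count / (blue_count + red_count)
print(round(posterior_mean, 3))  # -> 0.999
```

Nothing in the update itself flags that the evidence channel is adversarial; that check has to come from outside the formula.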
Once again, the problem is not that Bayesianism is untrue. The problem is that the human brain has a very limited capacity for analytic reasoning to begin with.
If the attacker, whenever he pulls a red ball out of the urn, puts it back and keeps pulling until he gets a blue ball, the Bayesian “rational mind” will conclude that the urn is entirely full of blue balls.
Surely the actual Bayesian rational mind’s conclusion is that the attacker will (probably) always show a blue ball, nothing to do with the urn at all.
And Bayesian inference certainly does not offer any suggestion that you should look at who’s pulling balls out of the urn and see what he has up his sleeves.
I just facepalmed the hardest I’ve ever done while reading Unqualified Reservations. That is, not very hard—Mencius is nothing if not a charming and polite author—but still. Maybe he really ought to read at least one Sequence!
Could we start that reading with the classic Bayes’ Theorem example? Suppose 1% of women have breast cancer, 80% of mammograms on a cancerous woman will detect it, 9.6% on an uncancerous woman will be false positives. Suppose woman A gets a mammogram which indicates cancer. What are the odds she has cancer?
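For the record, the arithmetic of that classic example works out to about 7.8%, e.g.:

```python
# Bayes' theorem on the classic mammogram numbers quoted above.
p_cancer = 0.01              # prior: 1% of women have breast cancer
p_pos_given_cancer = 0.80    # sensitivity: 80% of cancers are detected
p_pos_given_healthy = 0.096  # false positive rate: 9.6%

# Total probability of a positive test.
p_pos = (p_cancer * p_pos_given_cancer
         + (1 - p_cancer) * p_pos_given_healthy)

# Posterior probability of cancer given a positive test.
p_cancer_given_pos = p_cancer * p_pos_given_cancer / p_pos
print(round(100 * p_cancer_given_pos, 1))  # -> 7.8 (percent)
```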
Now suppose women B, C, D, E, F… Z, AA, AB, AC, AD, etc., the entire patient list getting screened today, all test positive for cancer. Is the probability that woman A has cancer still 7.8%? Bayes’ rule, with the priors above, still says “yes”! You need more complicated prior probabilities (e.g. what are the odds that the test equipment is malfunctioning?) before your evidence can tell you what’s actually likely to be happening. But those more complicated, more accurate priors would have (very slightly) changed our original p(A|X) as well!
It’s not that Bayesian updating is wrong. It’s just that Bayes’ theorem never allows you to have a non-zero posterior probability coming from a zero prior, and to make any practical problem tractable everybody ends up implicitly assuming huge swaths of zero prior probability.
It’s not assuming zero probability. It’s assuming independence. Under the original model, it’s possible for all the women to get positives, but only 1% to actually have breast cancer. It’s just that a better prior would give a much higher probability.
Is there any practical difference between “assuming independent results” and “assuming zero probability for all models which do not generate independent results”? If not then I think we’ve just been exposed to people using different terminology.
Is there any practical difference between “assuming independent results” and “assuming zero probability for all models which do not generate independent results”?
No.
If not then I think we’ve just been exposed to people using different terminology.
I think it’s more than terminology. And if Mencius can be dismissed as someone who does not really get Bayesian inference, one can surely not say the same of Cosma Shalizi, who has made the same argument somewhere on his blog. (It was a few years ago and I can’t easily find a link. It might have been in a technical report or a published paper instead.) Suppose a Bayesian is trying to estimate the mean of a normal distribution from incoming data. He has a prior distribution over the mean, and each new observation updates that prior. But what if the data are drawn not from a normal distribution but from a mixture of two such distributions with well-separated peaks? The Bayesian (he says) can never discover that. Instead, his estimate of the position of the single peak that he is committed to will wander up and down between the two real peaks, like the Flying Dutchman cursed never to find a port, while the posterior probability of seeing the data that he has seen plummets (on the log-odds scale) towards minus infinity. But he cannot avoid this: no evidence can let him update towards anything his prior gives zero probability to.
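Shalizi’s scenario is easy to simulate (a sketch under assumed parameters, not his actual setup): feed a conjugate normal-mean learner data from a well-separated two-peak mixture and watch the posterior settle between the peaks, where there is no data, while the predictive log score collapses.

```python
import math
import random

random.seed(1)

# The data actually come from a mixture of N(-5, 1) and N(+5, 1).
data = [random.gauss(-5 if random.random() < 0.5 else 5, 1)
        for _ in range(2000)]

# The (misspecified) model: a single N(mu, 1), with prior mu ~ N(0, 100).
mu, var = 0.0, 100.0
log_score = 0.0
for x in data:
    # Log predictive density of x under the current posterior.
    pred_var = var + 1.0
    log_score += (-0.5 * math.log(2 * math.pi * pred_var)
                  - (x - mu) ** 2 / (2 * pred_var))
    # Conjugate posterior update for the mean.
    new_var = 1.0 / (1.0 / var + 1.0)
    mu = new_var * (mu / var + x)
    var = new_var

print(round(mu, 1))         # near 0: between the peaks, where no data lie
print(log_score / len(data))  # average log score, far below a mixture model's
```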
What (he says) can save the Bayesian from this fate? Model-checking. Look at the data and see if they are actually consistent with any model in the class you are trying to fit. If not, think of a better model and fit that.
Andrew Gelman says the same; there’s a chapter of his book devoted to model checking. And here’s a paper by both of them on Bayesian inference and philosophy of science, in which they explicitly describe model-checking as “non-Bayesian checking of Bayesian models”. My impression (not being a statistician) is that their view is currently the standard one.
I believe the hard-line Bayesian response to that would be that model checking should itself be a Bayesian process. (I’m distancing myself from this claim, because as a non-statistician, I don’t need to have any position on this. I just want to see the position stated here.) The single-peaked prior in Shalizi’s story was merely a conditional one: supposing the true distribution to be in that family, the Bayesian estimate does indeed behave in that way. But all we have to do to save the Bayesian from a fate worse than frequentism is to widen the picture. That prior was merely a subset, worked with for computational convenience, but in the true prior, that prior only accounted for some fraction p<1 of the probability mass, the remaining 1-p being assigned to “something else”. Then when the data fail to conform to any single Gaussian, the “something else” alternative will eventually overshadow the Gaussian model, and will need to be expanded into more detail.
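That widening move can be made concrete (a toy sketch; the “something else” here is just a vague wide Gaussian standing in for the unexpanded alternative): even with 99% prior mass on the single-Gaussian model, two-peaked data hand essentially all the posterior mass to the catch-all.

```python
import math
import random

random.seed(2)
data = [random.gauss(-5 if random.random() < 0.5 else 5, 1)
        for _ in range(200)]

# Model 1: a single N(mu, 1) with mu ~ N(0, 100), updated conjugately.
# Model 2: the "something else" catch-all, here just a vague fixed N(0, 6^2).
log_m1, log_m2 = 0.0, 0.0
mu, var = 0.0, 100.0
for x in data:
    pred_var = var + 1.0
    log_m1 += (-0.5 * math.log(2 * math.pi * pred_var)
               - (x - mu) ** 2 / (2 * pred_var))
    log_m2 += (-0.5 * math.log(2 * math.pi * 36.0)
               - x ** 2 / 72.0)
    new_var = 1.0 / (1.0 / var + 1.0)
    mu = new_var * (mu / var + x)
    var = new_var

# 99% prior mass on the single-Gaussian model, 1% on "something else".
log_odds = math.log(0.99 / 0.01) + log_m1 - log_m2
if log_odds >= 0:
    p_single = 1.0 / (1.0 + math.exp(-log_odds))
else:
    p_single = math.exp(log_odds) / (1.0 + math.exp(log_odds))
print(p_single)  # effectively 0.0: the catch-all wins despite its 1% prior
```

Once the catch-all dominates, of course, the real work of expanding it into a concrete model still remains, which is the point taken up next.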
“But,” the soft Bayesians might say, “how do you expand that ‘something else’ into new models by Bayesian means? You would need a universal prior, a prior whose support includes every possible hypothesis. Where do you get one of those? Solomonoff? Ha! And if what you actually do when your model doesn’t fit looks the same as what we do, why pretend it’s Bayesian inference?”
I suppose this would be Eliezer’s answer to that last question.
I am not persuaded that the harder Bayesians have any more concrete answer. Solomonoff induction is uncomputable and seems to unnaturally favour short hypotheses involving Busy-Beaver-sized numbers. And any computable approximation to it looks to me like brute-forcing an NP-hard problem.
I believe the hard-line Bayesian response to that would be that model checking should itself be a Bayesian process.
and
“But,” the soft Bayesians might say, “how do you expand that ‘something else’ into new models by Bayesian means? You would need a universal prior, a prior whose support includes every possible hypothesis. Where do you get one of those? Solomonoff? Ha! And if what you actually do when your model doesn’t fit looks the same as what we do, why pretend it’s Bayesian inference?”
I think a hard line needs to be drawn between statistics and epistemology. Statistics is merely a method of approximating epistemology—though a very useful one. The best statistical method in a given situation is the one that best approximates correct epistemology. (I’m not saying this is the only use for statistics, but I can’t seem to make sense of it otherwise.)
Now suppose Bayesian epistemology is correct—i.e. let’s say Cox’s theorem + Solomonoff prior. The correct answer to any induction problem is to do the true Bayesian update implied by this epistemology, but that’s not computable. Statistics gives us some common ways to get around this problem. Here are a couple:
1) Bayesian statistics approach: restrict the class of possible models and put a reasonable prior over that class, then do the Bayesian update. This has exactly the same problem that Mencius and Cosma pointed out.
2) Frequentist statistics approach: restrict the class of possible models and come up with a consistent estimate of which model in that class is correct. This has all the problems that Bayesians constantly criticize frequentists for, but it typically allows for a much wider class of possible models in some sense (crucially, you often don’t have to assume distributional forms).
3) Something hybrid: e.g., Bayesian statistics with model checking. Empirical Bayes (where the prior is estimated from the data). Etc.
Now superficially, 1) looks the most like the true Bayesian update—you don’t look at the data twice, and you’re actually performing a Bayesian update. But you don’t get points for looking like the true Bayesian update, you get points for giving the same answer as the true Bayesian update. If you do 1), there’s always some chance that the class of models you’ve chosen is too restrictive for some reason. Theoretically you could continue to do 1) by just expanding the class of possible models and putting a prior over that class, but at some point that becomes computationally infeasible. Model checking is a computationally feasible way of approximating this process. And, a priori, I see no reason to think that some frequentist method won’t give the best computationally feasible approximation in some situation.
So, basically, a “hardline Bayesian” should do model checking and sometimes even frequentist statistics. (Similarly, a “hardline frequentist” in the epistemological sense should sometimes do Bayesian statistics. And, in fact, they do this all the time in econometrics.)
And any computable approximation to it looks to me like brute-forcing an NP-hard problem.
I find this a curious thing to say. Isn’t this an argument against every possible remotely optimal computable form of induction or decision-making? Of course a good computable approximation may wind up spending lots of resources solving a problem if that problem is important enough; this is not a black mark against it. Problems in the real world can be hard, so dealing with them may not be easy!
“Omega flies up to you and hands you a box containing the Secrets of Immortality; the box is opened by the solution to an NP problem inscribed on it.” Is the optimal solution really to not even try the problem—because then you would be “brute-forcing an NP-hard problem”—even if it turns out to be one of the majority of easily-solved instances? “You start a business and discover one of your problems is NP-hard. You immediately declare bankruptcy because your optimal induction optimally infers that the problem cannot be solved and this most optimally limits your losses.”
And why NP-hard, exactly? You know there are a ton of harder complexity classes in the complexity zoo, right?
The right answer is simply to point out that the worst case of the optimal algorithm is going to be the worst case of all possible problems presented, and this is exactly what we would expect since there is no magic fairy dust which will collapse all problems to constant-time solutions.
I find this a curious thing to say. Isn’t this an argument against every possible remotely optimal computable form of induction or decision-making?
There might well be a theorem formalising that statement. There might also be one formalising the statement that every remotely optimal form of induction or decision-making is uncomputable. If that’s the way it is, well, that’s the way it is.
“Omega flies up to you
This is an argument of the form “Suppose X were true—then X would be true! So couldn’t X be true?”
“You start a business and discover one of your problems is NP-hard. You immediately declare bankruptcy because your optimal induction optimally infers that the problem cannot be solved and this most optimally limits your losses.”
You try to find a method that solves enough examples of the NP-hard problem well enough to sell the solutions, such that your more bounded ambition puts you back in the realm of P. This is done all the time—freight scheduling software, for example. Or airline ticket price searching. Part of designing optimising compilers is not attempting analyses that take insanely long.
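The “bounded ambition” move looks roughly like this (a hypothetical sketch, not any shipping scheduler): a polynomial-time greedy heuristic for the NP-hard 0/1 knapsack problem that sells good-enough answers rather than provably optimal ones.

```python
# Hypothetical sketch: a polynomial-time greedy heuristic for 0/1 knapsack,
# an NP-hard problem. "Good enough to sell", not provably optimal.
def greedy_knapsack(items, capacity):
    """items: list of (value, weight) pairs. Returns (total_value, chosen)."""
    chosen, total_value, remaining = [], 0, capacity
    # Take items in order of value density while they still fit.
    for value, weight in sorted(items, key=lambda vw: vw[0] / vw[1],
                                reverse=True):
        if weight <= remaining:
            chosen.append((value, weight))
            total_value += value
            remaining -= weight
    return total_value, chosen

value, chosen = greedy_knapsack([(60, 10), (100, 20), (120, 30)], 50)
print(value)  # -> 160; exhaustive search finds 220, but greedy is O(n log n)
```

On this instance the heuristic leaves value on the table (160 versus the true optimum of 220), which is exactly the trade being made: a guaranteed-fast answer instead of a guaranteed-best one.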
And why NP-hard, exactly? You know there are a ton of harder complexity classes in the complexity zoo, right?
Harder classes are subsets of NP-hard, and everything in NP-hard is hard enough to make the point. Of course, there is the whole uncomputability zoo above all that, but computing the uncomputable is even more of a wild goose chase. “Omega flies up to you and hands you a box containing the Secrets of Immortality; for every digit of Chaitin’s Omega you correctly type in, you get an extra year, and it stops working after the first wrong answer”.
This is an argument of the form “Suppose X were true—then X would be true! So couldn’t X be true?”
No, this is pointing out that if you provide an optimal outcome barricaded by a particular obstacle, then that optimal outcome will trivially be at least as hard as that obstacle.
You try to find a method that solves enough examples of the NP-hard problem well enough to sell the solutions, such that your more bounded ambition puts you back in the realm of P. This is done all the time—freight scheduling software, for example. Or airline ticket price searching. Part of designing optimising compilers is not attempting analyses that take insanely long.
This is exactly the point made for computable approximations to AIXI. Thank you for agreeing.
Harder classes are subsets of NP-hard, and everything in NP-hard is hard enough to make the point.
Are you sure you want to make that claim? That all harder classes are subsets of NP-hard?
but computing the uncomputable is even more of a wild goose chase. “Omega flies up to you and hands you a box containing the Secrets of Immortality; for every digit of Chaitin’s Omega you correctly type in, you get an extra year, and it stops working after the first wrong answer”.
Harder classes are subsets of NP-hard, and everything in NP-hard is hard enough to make the point.
Are you sure you want to make that claim? That all harder classes are subsets of NP-hard?
No, carelessness on my part. Doesn’t affect my original point, that schemes for approximating Solomonoff or AIXI look like at least exponential brute force search.
You try to find a method that solves enough examples of the NP-hard problem well enough to sell the solutions, such that your more bounded ambition puts you back in the realm of P. This is done all the time—freight scheduling software, for example. Or airline ticket price searching. Part of designing optimising compilers is not attempting analyses that take insanely long.
This is exactly the point made for computable approximations to AIXI.
Since AIXI is, by construction, the best possible intelligent agent, all work on AGI can, in a rather useless sense, be described as an approximation to AIXI. To the extent that such an attempt works (i.e. gets substantially further than past attempts at AGI), it will be because of new ideas not discovered by brute force search, not because it approximates AIXI.
schemes for approximating Solomonoff or AIXI look like at least exponential brute force search.
Well, yeah. Again—why would you expect anything else? Given that there exist problems which require that or worse for solution? How can a universal problem solver do any better?
Since AIXI is, by construction, the best possible intelligent agent, all work on AGI can, in a rather useless sense, be described as an approximation to AIXI.
Yes.
To the extent that such an attempt works (i.e. gets substantially further than past attempts at AGI), it will be because of new ideas not discovered by brute force search, not because it approximates AIXI.
No. Given how strange and different AIXI is, it can easily stimulate new ideas.
No. Given how strange and different AIXI is, it can easily stimulate new ideas.
The spin-off argument. Here’s a huge compendium of spinoffs of previous approaches to AGI. All very useful, but not AGI. I’m not expecting better from AIXI.
Hm, so let’s see; you started off mocking the impossibility and infeasibility of AIXI and any computable version:
I am not persuaded that the harder Bayesians have any more concrete answer. Solomonoff induction is uncomputable and seems to unnaturally favour short hypotheses involving Busy-Beaver-sized numbers. And any computable approximation to it looks to me like brute-forcing an NP-hard problem.
Then you admitted that actually every working solution can be seen as a form of SI/AIXI:
There might well be a theorem formalising that statement. There might also be one formalising the statement that every remotely optimal form of induction or decision-making is uncomputable. If that’s the way it is, well, that’s the way it is… Since AIXI is, by construction, the best possible intelligent agent, all work on AGI can, in a rather useless sense, be described as an approximation to AIXI
And now you’re down to arguing that it’ll be “very useful, but not AGI”.
I stand by the first quote. Every working solution can in a useless sense be seen as a form of SI/AIXI. In the sense that a hot-air balloon can be seen as an approach to landing on the Moon.
And now you’re down to arguing that it’ll be “very useful, but not AGI”.
At the very most. Whether AIXI-like algorithms get into the next edition of Russell and Norvig, having proved of practical value, well, history will decide that, and I’m not interested in predicting it. I will predict that it won’t prove to be a viable approach to AGI.
Isn’t there a very wide middle ground between (1) assigning 100% of your mental probability to a single model, like a normal curve, and (2) assigning your mental probability proportionately across every conceivable model à la Solomonoff?
I mean the whole approach here sounds more philosophical than practical. If you have any kind of constraint on your computing power, and you are trying to identify a model that most fully and simply explains a set of observed data, then it seems like the obvious way to use your computing power is to put about a quarter of your computing cycles on testing your preferred model, another quarter on testing mild variations on that model, another quarter on all different common distribution curves out of the back of your freshman statistics textbook, and the final quarter on brute-force fitting the data as best you can given that your priors about what kind of model to use for this data seem to be inaccurate.
I can’t imagine any human being who is smart enough to run a statistical modeling exercise yet foolish enough to cycle between two peaks forever without ever questioning the assumption of a single peak, nor any human being foolish enough to test every imaginable hypothesis, even including hypotheses that are infinitely more complicated than the data they seek to explain. Why would we program computers (or design algorithms) to be stupider than we are? If you actually want to solve a problem, you try to get the computer to at least model your best cognitive features, if not improve on them. Am I missing something here?
Isn’t there a very wide middle ground between (1) assigning 100% of your mental probability to a single model, like a normal curve, and (2) assigning your mental probability proportionately across every conceivable model à la Solomonoff?
Yes, the question is what that middle ground looks like—how you actually come up with new models. Gelman and Shalizi say it’s a non-Bayesian process depending on human judgement. The behaviour that you rightly say is absurd, of the Bayesian Flying Dutchman, is indeed Shalizi’s reductio ad absurdum of universal Bayesianism. I’m not sure what gwern has just been arguing, but it looks like doing whatever gets results through the week while going to the church of Solomonoff on Sundays.
An algorithmic method of finding new hypotheses that works better than people is equivalent to AGI, so this is not an issue I expect to see solved any time soon.
An algorithmic method of finding new hypotheses that works better than people is equivalent to AGI, so this is not an issue I expect to see solved any time soon.
Eh. What seems AGI-ish to me is making models interact fruitfully across domains; algorithmic models to find new hypotheses for a particular set of data are not that tough and already exist (and are ‘better than people’ in the sense that they require far less computational effort and are far more precise at distinguishing between models).
The hypothesis-discovery methods are universal; you just need to feed them data. My view is that the hard part is picking what data to feed them, and what to do with the models they discover.
Edit: I should specify, the models discovered grow in complexity based on the data provided, and so it’s very difficult to go meta (i.e. run hypothesis discovery on the hypotheses you’ve discovered), because the amount of data you need grows very rapidly.
I don’t think any robot scientists would be eligible for Nobel prizes; Nobel’s will specifies persons. We’ve had robot scientists for almost a decade now, but they tend to excel in routine and easily automatized areas. I don’t think they will make Nobel-level contributions anytime soon, and by the time they do, the intelligence explosion will be underway.
I am not persuaded that the harder Bayesians have any more concrete answer. Solomonoff induction is uncomputable and seems to unnaturally favour short hypotheses involving Busy-Beaver-sized numbers. And any computable approximation to it looks to me like brute-forcing an NP-hard problem.
What if, as a computational approximation of the universal prior, we use genetic algorithms to generate a collection of agents, each using different heuristics to generate hypotheses? I mean, there are probably better approximations than that; but we have strong evidence that this one works and is computable.
What if, as a computational approximation of the universal prior, we use genetic algorithms to generate a collection of agents, each using different heuristics to generate hypotheses?
Whatever approach to AGI anyone has, let them go ahead and try it, and see if it works. Ok, that would be rash advice if I thought it would work (because of UFAI), but if it has any chance of working, the only way to find out is to try it.
I’m not saying I’m willing to code that up; I’m just saying that a genetic algorithm (such as Evolution) creating agents which use heuristics to generate hypotheses (such as humans) can work at least as well as anything we’ve got so far.
I’m just saying that a genetic algorithm (such as Evolution) creating agents which use heuristics to generate hypotheses (such as humans) can work at least as well as anything we’ve got so far.
I am not persuaded that the harder Bayesians have any more concrete answer. Solomonoff induction is uncomputable and seems to unnaturally favour short hypotheses involving Busy-Beaver-sized numbers. And any computable approximation to it looks to me like brute-forcing an NP-hard problem.
Eh. I like the approach of “begin with a simple system hypothesis, and when your residuals aren’t distributed the way you want them to be, construct a more complicated hypothesis based on where the simple hypothesis failed.” It’s tractable (this is the elevator-talk version of one of the techniques my lab uses for modeling manufacturing systems), and seems like a decent approximation of Solomonoff induction on the space of system models.
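A stripped-down version of that loop (a hypothetical sketch, not the lab’s actual code): fit the simple linear hypothesis, then test whether the residuals carry systematic structure, which is the signal that a richer model is needed.

```python
import random

random.seed(3)
xs = [i / 10 for i in range(100)]
# The true system is quadratic; the first-pass hypothesis will be linear.
ys = [2 * x * x + random.gauss(0, 0.1) for x in xs]

def linear_fit(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def correlation(us, vs):
    """Pearson correlation of two equal-length sequences."""
    n = len(us)
    mu, mv = sum(us) / n, sum(vs) / n
    cov = sum((u - mu) * (v - mv) for u, v in zip(us, vs))
    su = sum((u - mu) ** 2 for u in us) ** 0.5
    sv = sum((v - mv) ** 2 for v in vs) ** 0.5
    return cov / (su * sv)

# Step 1: fit the simple hypothesis and compute residuals.
a, b = linear_fit(xs, ys)
residuals = [y - (a + b * x) for x, y in zip(xs, ys)]

# Step 2: residuals from a quadratic truth still correlate with x^2 after the
# linear fit; a correlation far from zero is the cue to enrich the model.
structure = correlation(residuals, [x * x for x in xs])
print(round(structure, 2))  # clearly nonzero, so grow the hypothesis
```

Pure noise would give a correlation near zero here, so the check cleanly separates “the model is adequate” from “construct a more complicated hypothesis”.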
It’s basically different terminology. His point is valid.
A model isn’t something you assign probability to. It’s something you use to come up with a set of prior probabilities. The model he used assumed independence. It didn’t actually assign zero probability to any result. It doesn’t assign a probability, zero or otherwise, to the machine being broken, because that’s not something that’s considered. It also doesn’t assign a probability to whether or not it’s raining.
From the little Moldbug I’ve been able to slog through, my main impression of him is “reader-hostile”. If he were polite maybe he would get to the effing point already.
I think his point is that you are still entirely unable to even enumerate, let alone process, all the relevant hypotheses, nor does the formula inform you of those, nor does it inform you how to deal with cyclic updates (or even that those are a complicated case), etc.
It’s particularly bad when it comes to what rationalists describe as “expected utility calculations”. The ideal expected utility is a sum, over all hypotheses, of the differential effect of the actions being compared, weighted by the hypotheses’ probabilities. A single component of that sum provides little or no information about its value, especially when the component is picked by someone with a financial interest as strong as “if I don’t convince these people, I can’t pay my rent”. Then the actions themselves have an impact on future decision-making, which makes the expected-value sum grow and branch out like some crazy googol-headed fractal hydra. Mostly, when someone talks a lot about Bayes, they have some simple and invalid expected-value calculation that they want you to perform and act upon, so that you’ll be worse off in the end and they’ll be better off.
A man can dream, can’t he? Note he isn’t advocating nonsense as an organizing tool; much of his wackier thought is precisely an attempt to make an organizing tool work as well as nonsense does. Unfortunately, I don’t think he has succeeded, since in my opinion neocameralism is unlikely to be implemented and likely to blow up if someone did implement it.
I agree, except that some of my own wacky thought (well, it’s hardly original, of course) basically says that nonsense isn’t a “bad” at all—not for anyone whom we might reasonably call human. For example, as has been pointed out here, people have in-built hypocritical mechanisms to cope with various kinds of “faith”, but if you truly consider that you’re doing something “rational” and commonsensically correct, you’re left driving at an enormous speed without brakes, and the likely damage might be great enough that no-one should ever aspire to “rational” thinking.
Also:
On a wall in South London some Communist or Blackshirt had chalked “Cheese, not Churchill”. What a silly slogan. It sums up the psychological ignorance of these people who even now have not grasped that whereas some people would die for Churchill, nobody will die for cheese.
Even though his prescription may be lacking (here is some criticism of neocameralism: http://unruled.blogspot.com/2008/06/about-fnargocracy.html ), his description and diagnosis of everything wrong with the world is largely correct. Any possible political solution must begin from Moldbug’s diagnosis of all the bad things that come with having Universalism as the most dominant ideology/religion the world has ever experienced.
One example of a bad consequence of Universalism is the delay of the Singularity. If you, for example, want to find out why Jews are more intelligent on average than Blacks, the system will NOT support your work and will even ostracize you for being racist, even though that knowledge might one day prove invaluable to understanding intelligence and building an intelligent machine (and also helping the people who are less fortunate in the genetic lottery). The followers of a religion that holds the Equality of Man as a primary tenet will suppress any scientific inquiry into what makes us different from one another. Universalism is the reason why common-sense proposals like those of Greg Cochran ( http://westhunt.wordpress.com/2012/03/09/get-smart/ ) will never be official policy. While we don’t have the knowledge to create machines of higher intelligence than us, we do know how to create a smarter next generation of human beings. Scientific progress, economic growth and civilization in general are proportional to the number of intelligent people and inversely proportional to the number of not-so-smart people. We need more smart people (at least until we can build smarter machines), so that we all may benefit from the products of their minds.
Universalism is the reason why common-sense proposals like those of Greg Cochran will never be official policy.
From the Greg Cochran link:
A government with consistent and lasting policies could select for intelligence and achieve striking results in a few centuries, maybe less. But no state ever has, and no existing government seems interested.
It’s worth pointing out that at least part of the opposition to government-run eugenics programs is rational distrust that the government will not corrupt the process. If a country started a program of tax breaks for high-IQ people having children, and perhaps higher taxes for low-IQ people having children, a corrupt government would twist its policies and official IQ-testing agency to actually reward, for example, reproduction among people who vote for [whatever the majority party currently is]. It’s a similar rationale to the one against literacy tests for voting: sure, maybe illiterate people can’t be informed voters, but trusting the government to decide who’s too illiterate to vote leads to perverse incentives.
a corrupt government would twist its policies and official IQ-testing agency to actually reward, for example, reproduction among people who vote for [whatever the majority party currently is].
Absolutely. It would start with: “Everyone (accepted as an expert by our party) agrees that the classical IQ tests developed by psychometric methods are too simple and they don’t cover the whole spectrum of human intelligence. Luckily, here is a new improved test developed by our best experts that includes the less mathematical aspects of intelligence, such as having a correct attitude towards [insert political topic]. Recognizing the superiority of this test to the classical tests already gives you five points!”
Also, governments are notoriously bad at making broad and costly social policies that will only give a return on investment “in a few centuries, maybe less”. We’re not talking just beyond the next elections; the party, the politicians, even the whole state may not exist by then.
Scientific progress, economic growth and civilization in general are proportional to the number of intelligent people and inversely proportional to the number of not-so-smart people.
That seems a little bit simplistic. How many problems have been caused by smart people attempting to implement plans which seem theoretically sound, but fail catastrophically in practice? The not-so-smart people are not inclined to come up with such plans in the first place. In my view, the people inclined to cause the greatest problems are the smart ones who are certain that they are right, particularly when they have the ability to convince other smart people that they are right, even when the empirical evidence does not seem to support their claims.
While people may not agree with me on this, I find the theory of “rational addiction” within contemporary economics to carry many of the hallmarks of this way of thinking. It is mathematically justified using impressively complex models and selective post-hoc definitions of terms and makes a number of empirically unfalsifiable claims. You would have to be fairly intelligent to be persuaded by the mathematical models in the first place, but that doesn’t make it right.
basically, my point is: it is better to have to deal with not-so-smart irrational people than it is to deal with intelligent and persuasive people who are not very rational. The problems caused by the former are lesser in scale.
The theory of “rational addiction” seems like an example that for any (consistent) behavior you can find such utility function that this behavior maximizes it. But it does not mean that this is really a human utility function.
it is better to have to deal with not-so-smart irrational people than it is to deal with intelligent and persuasive people who are not very rational
For an intelligent and persuasive person it may be a rational (as in: maximizing their utility, such as status or money) choice to produce fashionable nonsense.
For an intelligent and persuasive person it may be a rational (as in: maximizing their utility, such as status or money) choice to produce fashionable nonsense.
True. I guess it’s just that the consequences of such actions can often lead to a large amount of negative utility according to my own utility function, which I like to think of as more universalist than egoist. But people who are selfish, rational and intelligent can, of course, cause severe problems (according to the utility functions of others at least). This, I gather, is fairly well understood. That’s probably why those characteristics describe the greater proportion of Hollywood villains.
Hollywood villains are gifted people who pathologically neglect self-deception. With enough self-deception, anyone can be the hero of their own story. I would guess most authors of fashionable nonsense kind of believe what they say. This is why opposing them would be too complicated for a Hollywood script.
In my view, the people inclined to cause the greatest problems are the smart ones who are certain that they are right, particularly when they have the ability to convince other smart people that they are right, even when the empirical evidence does not seem to support their claims.
basically, my point is: it is better to have to deal with not-so-smart irrational people than it is to deal with intelligent and persuasive people who are not very rational
And yet, it’s the “Universalist” system that allows Jews to not get exterminated. I think the cognitive and epistemological flaws of “Universalism” kinda makes some people ignore the fact that it’s the system that also allows the physical existence of heretics more than any other system in existence ever yet has.
Was (non-Universalist) Nazi Germany more open to accepting Jew-produced science than the “Universalist” West was? Or is the current non-Universalist Arab world more open to such? Were the previous feudal systems better at accepting atheists or Jewish people? Which non-universalist (and non-Jewish) system was actually better than “Universalism” at recognizing Jewish contributions or intelligence, that you would choose to criticize Universalism for being otherwise? Or better at not killing heretics?
Let’s keep it simple—which non-Universalist nation has ever been willing to allow as much relative influence to Jewish people as Universalist systems have?
As for Moldbug’s diagnosis, I’m unimpressed with his predictive abilities: he predicted Syria would be safe from revolt, right, because it was cozying up to Iran rather than to America? He has an interesting model of the world but, much like Marxism, I’m not sure Moldbuggery has much predictive capacity.
“Universalism” kinda makes some people ignore the fact that it’s the system that also allows the physical existence of heretics more than any other system in existence ever yet has.
I agree. In my mind this is its great redeeming feature and the main reason I think I still endorse universalism despite entertaining much of the criticism of it. At the end of the day I still want to live in a Western Social Democracy, just maybe one that has a libertarian (and I know this may sound odd coming from me) real multicultural bent with regards to some issues.
And yet, it’s the “Universalist” system that allows Jews to not get exterminated.
The same is true of the Roman and Byzantine empires. The Caliphate too. Also true of Communist regimes. Many absolute monarchies, now that I think about it. Also, I’m pretty sure the traditional Indian caste system could keep Jews safe as well.
If Amy Chua is right democracy (a holy word of universalism) may in the long run put market dominant minorities like the Jews more at risk than some alternatives. Introducing democracy and other universalist memes in the Middle East has likely doomed the Christian minorities there for example.
Let’s keep it simple—which non-Universalist nation has ever been willing to allow as much relative influence to Jewish people as Universalist systems have?
I’m not quite sure why particularly the Jewish people matter so very much to you in this example. I’m sure you aren’t searching for the trivial answer (which would be “in any ancient and medieval Jewish state or nation”).
If you are using Jews here as an emblem of invoking the horrors of Nazism, can’t we at least throw a bone to Gypsy and Polish victims? And since we did that can we now judge Communism by the same standard? Moldbug would say that Communism is just a country getting sick with a particularly bad case of universalism.
The thing is, Universalism as it exists now doesn’t seem to be stable. The reason one sees all this clever (and I mean clever in the bad, overly complicating, overly contrarian sense of the word) arguing against “universalism” online in the late 2000s is that the comfortable, heretic-tolerating universalism of the second half of the 20th century seems to be slowly changing into something else. The heretics have nowhere else to go but online. The economic benefits and comforts for most of its citizens are being dismantled; the space of acceptable opinion seems to be shrinking.

As technology that enables the surveillance of citizens and the enforcement of social norms by peers advances, there doesn’t seem to be any force really counteracting it. If you transgress, if you are a heretic in the 21st century, you will remain one for your entire life, as your name is one Google search away from your sin. As mobs organized via social media or apps become more and more a political reality, how long will such people remain physically safe? How do you explain to the people beating you that you recanted your heresy years ago? Recall that pogroms were usually the affair of angry low-class peasants. You don’t need the Stasi to eliminate people; the mob can work as well. You don’t need a concentration camp when you have the machete. And while modern tech makes the state more powerful, since surveillance is easier, it also makes the mob more powerful. Remaining under the protection of the state, not just legal but de facto, becomes more and more vital. The room for dissent thus shrinks even if stated ideals and norms remain as they were before.
And I don’t think they will remain such. While most people carrying universalist memes are wildly optimistic about its “information wants to be free,” liberty-enhancing aspect, the fact remains that this new technology seems to have also massively increased the viability and reach of Anarcho-Tyranny.
The personal psychological costs of living up to universalist ideals and internalizing them seem to be rising as well. To illustrate what I mean by this, consider the practical sexual ethics of, say, Elizabethan England and Victorian England. On the surface and in their stated norms they don’t differ much, yet the latter arguably uses up far more resources and places a greater cognitive burden of socialization on its members to enforce them.
Now consider the various universalist standards of personal behaviour that are normative in 2012 and in 1972. They aren’t that different in stated ideals, but the practical costs have arguably risen.
I’m not quite sure why particularly the Jewish people matter so very much to you in this example.
nykos was the one who used the example of superior Jewish intelligence not being acknowledged as such by Universalism. My point was that there have been hardly any non-Universalist systems that could even tolerate equal Jewish participation, let alone acknowledge Ashkenazi superiority.
The economic benefits and comforts for most of its citizens are being dismantled, the space of acceptable opinion seems to be shrinking.
I see no proof of that. What economic benefits and comforts? Sure, real wages in Western countries have stopped growing around the 1970s, but e.g. where welfare programs are being cut following the current crisis, it’s certainly not the liberals but economically conservative governments championing the cuts.
Now consider the various universalist standards of personal behaviour that are normative in 2012 and in 1972. They aren’t that different in stated ideals, but the practical costs have arguably risen.
I don’t understand. Do you mean prestigious norms like “never avoid poor neighbourhoods for your personal safety, because it’s supposedly un-egalitarian”, or what? What other norms like that exist that are harmful in daily life?
but e.g. where welfare programs are being cut following the current crisis, it’s certainly not the liberals but economically conservative governments championing the cuts.
What’s happening is, to paraphrase Thatcher, that governments are running out of other people’s money. Yes, conservative parties are more willing to acknowledge this fact, but liberal parties don’t have any viable alternatives, and it was their economic policies that led to this state of affairs.
The places that are being hardest hit have been ruled by left wing parties for most of the time since at least the 1970s. Also in these places the right wing parties aren’t all that right wing.
Let’s keep it simple—which non-Universalist nation has ever been willing to allow as much relative influence to Jewish people as Universalist systems have?
You’ve got to make it more general, that’s where it gets interesting! Speaking frankly, from the selfish viewpoint of a typical Western person, the Universalist system has been better than any other system at everything for more than a century, especially at the quality and complexity of life for the average citizen. Of course, Moldbug’s adherents would argue that there’s no dependency between these two unique, never-before-seen facts of civilization—universalist ideology and an explosive growth in human development for the bottom 90% of society. They’d say that both are symptoms of more rapid and thoroughly supported technological progress than elsewhere.
Let’s concede that (although there are reasons to challenge it—see e.g. Weber’s The Protestant Ethic and the Spirit of Capitalism, an early argument that religion morphing into a secular quasi-theocracy is what gave the West its edge). Okay, so if both things are the results of our civilization’s unique historical path… then, from a utilitarian POV, the cost of universalism is still easily worth paying! We know of no society that advanced to an industrial and then post-industrial state without universalism, so it would in practice be impossible to alter any feature of technical and social change to exclude the dominance of universalist ideology but keep the pace of “good” progress. Then, even assuming that universalist ideology is single-handedly responsible for the entirety of the 20th century’s wars and mass murder (and other evils), it is still preferable to the accumulated daily misery of the traditional pre-industrial civilization—especially so for everyone who voted “Torture” on “Torture vs Specks”! (I didn’t, but here I feel differently, because it’s “Horrible torture and murder” vs “Lots and lots of average torture”.)
Moldbug isn’t arguing we should get rid of some technology and its comforts in order to also get rid of universalism, and he certainly does recognize both as major aspects of modernity; no, he is saying that technological progress now enables us to get rid of the parasitic aspect of modernity, “universalism”. One can make a case that, since it inflames some biases, it is slowing down technological progress and the benefits it brings. Peter Thiel is arguably concerned precisely by this influence when he talks of a technological slowdown. Universalism not only carries opportunity costs; it has historically often broken out in Luddite strains. Consider, for example, something like the FDA. Recall what criticisms of that institution are often heard on LW—yet aren’t these same criticisms, when consistently applied, basically hostile to the Cathedral?
Whether MM is right or wrong, what you present seems like a bit of a false dilemma. You certainly are right that we haven’t seen societies advance to an industrial or post-industrial state without at least some influence of universalism, but it is hard to deny that we do observe varying degrees of such penetration. Moldbug’s idea is that even if we can’t use technology to get rid of the memeplex in question, by social manoeuvring we can still perhaps find a better trade-off by not taking “universalism” so seriously. The vast majority of people, the 90% you invoke, may be significantly better off in a world where every city is Singapore than in a world where every city is London.
It is no mystery which of these two is more in line with universalist ideals.
In the case of Singapore vs. London (implicitly including the governing structure of Britain since London isn’t a city state)? A few I can think of straight away:
Democratic decision making. Therapeutic rather than punitive law enforcement. Lenient punishment of crime. Absence of censorship.
Naturally, all of these aren’t fully realized in London either. Britain doesn’t have real free speech, yet it has much more of it than Singapore. Britain has (in my opinion) silly and draconian anti-drug laws, but it doesn’t execute people for smuggling drugs. London doesn’t have corporal or capital punishment. The parties in Britain are mostly the same class of people, yet at least Cerberus (Lib/Lab/Con) has three heads: you get to vote for the one that promises to gnaw at you the least. Singapore is democratic in form only, and it is a very transparent cover. Only one party has a chance of victory, and it has been that way and will remain that way for some time.
Yet despite all these infractions against stated Western ideals, life isn’t measurably massively worse in Singapore than in London. And Singapore seems to work better as a multi-ethnic society than London. The world is globalizing; de facto multiculturalism is the destined future of every city from Vladivostok to Santiago, so the Davos men tell us. No place like Norway or Japan in our future, but elections where we will see ethnic blocs and identity politics. I don’t know about you, but I prefer Lee Kuan Yew to that mess of tribal politics. Which city would deal better with a riot? Actually, which city is more likely to have a riot? Recall what Lee said in his autobiography and interviews about what he learned from the riots of the 1960s. Did it work? It sure looks like it did. Also recall from what Singapore started, and where surrounding Malaysia, from which it diverged, is today. Which is the better model to pull the global south out of poverty? Which is the better model for the world’s peoples to live side by side? Which place will likely be safer, more liveable and more prosperous in 20 years’ time?
It seems in my eyes that Singapore is clearly winning in such a comparison. Yet clearly it does so precisely by ignoring several universalist ideals. Strangely they didn’t seem to have needed to give up iPods and other marvels of modern technology to do it either.
Yet despite all these infractions against stated Western ideals, life isn’t measurably massively worse in Singapore than in London.
Taboo “worse”! If by life not being “worse” you mean the annual income or the quality of healthcare or the amount of street crime, maybe it’s so. If one values e.g. being able to contribute to a news website without fear of fines or imprisonment (see e.g. Gibson’s famous essay where he mentions that releasing information about Singapore’s GDP could be punished with death), or not fearing for the life of a friend whom you smoke marijuana with, or being able to think that the government is at least a little bit afraid of you (this not necessarily being real, just a pleasant delusion to entertain, like so many others we can’t live without)… in short, if one values the less concrete and material things that speak to our more complex instincts, it’s not nearly so one-sided.
That’s why I dislike utilitarianism; it says without qualification that a life always weighs the same, whatever psychological climate it is lived in (the differences are obvious as soon as you step off a plane, I think—see Gibson’s essay again), and a death always weighs the same, whether you’re killed randomly by criminals (as in the West) or unjustly and with malice by the government (as in Singapore), et cetera, et cetera… It’s, in the end, not very compatible with the things that liberals OR classical conservatives love and hate. Mere safety and prosperity are not the only things a society can strive for.
If by life not being “worse” you mean the annual income or the quality of healthcare or the amount of street crime, maybe it’s so.
Yes. But these are incredibly important things to hundreds of millions of people alive today drowning in violence, disease and famine. What do spoiled first world preferences count against such multitudes?
And you know what, I think 70% of people alive today in the West wouldn’t in practice much miss a single thing you mention, though they might currently say or think they would.
There’s a threshold where violence, disease and hunger stop being disastrous in our opinion (compare e.g. post-Soviet Eastern Europe to Africa), and that threshold, as we can see, doesn’t require brutal authoritarianism to maintain, or even to achieve. Poland transitioned to a liberal democracy directly after the USSR fell, although its economy was in shambles (and it had little experience of liberalism and democracy before WW2); Turkey’s leadership became softer after Ataturk achieved his primary goals of modernization; etc, etc. There’s a difference between a country being a horrible hellhole and merely lagging behind in material characteristics; the latter is an acceptable cost for attempting liberal policies to me. I accept that the former might require harsh measures to overcome, but I’d rather see those measures taken by an internally liberal colonial power (like the British Empire) than a local regime.
There’s a difference between a country being a horrible hellhole and merely lagging behind in material characteristics; the second is an acceptable cost for attempting liberal policies to me.
The actual real people living there—suppose you could ask them, which do you think they would choose? And don’t forget those are mere stated preferences, not revealed ones.
If you planted Singapore on their borders wouldn’t they try to move there?
Sure, Singapore is much better than Africa; I never said otherwise! However, if given the choice, the more intelligent Africans would probably be more attracted to a Western country, where their less tangible needs (like the need for warm fuzzies) would also be fulfilled. Not many Singaporeans probably would, but that’s because Singaporean society does at least as much brainwashing as Western society does!
I don’t understand why you think “warm fuzzies” are in greater supply in London than in Singapore. They are both nice places to live, or can be, even in their intangibles. London-brainwashing is one way to inoculate yourself against Singapore-brainwashing, but perhaps there is another way?
Have you been to Singapore for any amount of time? I haven’t (my dad had, for a day or so, when he worked on a Soviet science vessel), but I trust Gibson and can sympathize with his viewpoint. At the very least I observe that it does NOT export culture or spread memes. These are not the signs of a vibrant and sophisticated community!
At the very least I observe that it does NOT export culture or spread memes.
What could you mean by this that isn’t trivially false?
I haven’t read the Gibson article (but I will). I know that “disneyland” and “the death penalty” are both institutions that are despised by a certain cohort, but they are not universally despised and their admirers are not all warmfuzzophobic psychos. Artist-and-writer types don’t flock to Singapore, but they don’t flock to Peoria Illinois either do they?
Artist-and-writer types don’t flock to Singapore, but they don’t flock to Peoria Illinois either do they?
Downvoted without hesitation.
If you have the unvoiced belief that cultural products (especially high-quality ones) and memes are created by some specific breed of “artist-and-writer types” (wearing scarves and being smug all the time, no doubt!), then I’d recommend purging it, seeing as it suggests a really narrow view of the world. A country can have a thriving culture not because artistic people “flock” there, but because they are born there, given an appropriate education and allowed to interact with their own roots and community!
By your logic, “artist-and-writer types” shouldn’t just not flock to, but actively flee the USSR/post-Soviet Russia. And indeed many artists did, but enough remained that most people on LW who are into literature or film can probably name a Russian author or Russian movie released in the last half-century. Same goes for India, Japan, even China and many other poor and/or un-Westernized places!
And indeed many artists did, but enough remained that most people on LW who are into literature or film can probably name a Russian author or Russian movie released in the last half-century. Same goes for India, Japan, even China and many other poor and/or un-Westernized places!
Notice how this more or less refutes the argument you tried to make in the grandparent.
I’m not making the argument that liberal democracy directly correlates to increasing the cultural value produced. Why else would I defend Iran in that particular regard? No, no, the object of my scorn is technocracy (at least, human technocracy) and I’m even willing to tolerate some barbarism rather than have it spread over the world.
You seem to have read some hostility towards artists and writers into my comment, probably because of “types” and “flock”? These are just writing tics, I intended nothing pejorative.
I hold no such belief, and I’m glad you don’t either. I only want to emphasize my opinion that Singapore does have a thriving culture, even if it does not have a thriving literary or film industry. But since you admit you don’t know a lot about it I’m curious why you have so much scorn for the place? A city can have something to recommend itself even if it hasn’t produced a good author or a good movie.
In short, well, yeah, I hold more “formal” and “portable” culture such as literature, music or video to have relatively more value than the other types of “culture”, such as local customs and crafting practices and such—which I assume you meant by “thriving culture” here. All are necessary for harmonious development, but I’d say that e.g. a colorful basket-weaving tradition in one neighborhood which is experienced/participated in by the locals is not quite as important and good to have as an insightful and entertaining story, or a beautiful melody—the latter can still have relevance continents or centuries apart. Some African tribe can also have a thriving culture like that, but others can’t experience it without being born there, it can be unsustainable in the face of technical progress, it can interfere with other things important for overall quality of life (trusting a shaman about medicine can be bad for your health), etc. Overall, you probably get what I’m talking about.
Sure, that’s biased and un-PC in a way, but that’s the way that I see the world.
(I don’t have any scorn for Singapore as a nation and a culture, I just don’t care much for a model of society imposed upon it by the national elites in the 20th century that, unlike broadly similar societies in e.g. Japan or even China, doesn’t seem to produce those things I value. Even if its GDP per capita is now 50% or so higher than somewhere else. Heck, even Iran—a theocracy that’s not well-off and behaves rather irrationally—has been producing acclaimed literature and films, despite censorship.)
It seems to me that if you are talking about artistic achievements that have stood the test of centuries, then you are talking almost exclusively about the West, which I agree is utterly dominant in cultural exports. What I have in mind when I say “Singapore culture is thriving” is that it’s a city filled with lovely people going about their business. You could appreciate Singapore culture because you find Muslim businessmen or guest-worker IT types agreeable—maybe you like their jokes. You could hate Singapore culture if you instead found Muslim businessmen to be vacant and awful. But couldn’t we allow that the intelligent African who kicked the discussion off might have either taste? Then we should find out what his tastes are before recommending that he choose London over Singapore.
I read “Disneyland with the death penalty.” Gibson’s not a very good travel-writer, there’s hardly any indication in the article that he spoke to anyone while he was there.
broadly similar societies in e.g. Japan or even China, doesn’t seem to produce those things I value
You’re not being fair. Singaporeans would have surely produced something to your tastes, if there were a billion of them and their country were two thousand years old.
When Turkey was modernizing it sure as heck was looking towards Europe for examples, it just didn’t implement democratic mechanisms straight away and restricted religious freedom. And if you look at Taiwan, Japan, Ghana, etc… sure, they might be ruled by oligarchic clans in practice, but other than that [1] they have much more similarities than differences with today’s Western countries! Of course a straight-up copy-paste of institutions and such is bound to fail, but a transition with those institutions, etc in mind as the preferred end state seems to work.
[1] Of course, Western countries are ruled by what began as oligarchic clans too, but they got advanced enough that there’s a difference. And, for good or ill, they are meritocratic.
I don’t care all that much about political democracy; what I meant is that Japan, India or, looking at the relative national conditions, even Turkey did NOT require some particular ruthlessness to modernize.
even Turkey did NOT require some particular ruthlessness to modernize.
Could you explain the meaning of this sentence please. I’m not sure I have grasped it correctly. To me it sounds like that you are saying that there was no ruthlessness involved in Atatürk’s modernizing reforms. I assume that’s not the case, right?
Compared to China or Industrial Revolution-age Britain? Hell no, Ataturk pretty much had silk gloves on. At least, that’s what Wikipedia tells me. He didn’t purge political opponents except for one incident where they were about to assassinate him, he maintained a Western facade over his political maneuvering (taking pages from European liberal nationalism of the previous century), etc, etc.
To extent that this is a discussion of quality of life and attractiveness of a country, as opposed to what is strictly speaking necessary for development, it’s worth remembering the Armenian genocide.
There’s no evidence that Ataturk was more complicit in that than, say, many respected public servants in 50s-60s Germany were complicit in the Holocaust. Nations just go insane sometimes, and taboos break down, and all that. It takes a hero to resist.
I feel pretty confident that Niall Ferguson, in his The War of the World, claims that Ataturk directly oversaw at least one massacre; I don’t have my copy on hand, however. Also, the Armenian National Institute claims that Ataturk was “the consummator of the Armenian Genocide.”
Also, Israel Charney (the founder of the International Association of Genocide Scholars) says:
It is believed that in Turkey between 1913 and 1922, under the successive regimes of the Young Turks and of Mustafa Kemal (Ataturk), more than 3.5 million Armenian, Assyrian and Greek Christians were massacred in a state-organized and state-sponsored campaign of destruction and genocide, aiming at wiping out from the emerging Turkish Republic its native Christian populations.
Compared to China or Industrial Revolution-age Britain? Hell no, Ataturk pretty much had silk gloves on.
Really, Ataturk was less harsh than Industrial Revolution-age Britain? I find this highly unlikely (unless you’re talking about their colonial practices, in which case the Armenian genocide is relevant). I think the reason you’re overestimating the relative harshness of Britain is that Britain had more freedom of speech than other industrializing nations, and thus its harshness (such as it was) is better documented.
(That’s just after a fifteen-minute search. By the way, haven’t you read Dickens? He gives quite a vivid contemporary account of social relations, although dramatized.)
Are you claiming that similar and worse things didn’t happen in Turkey?
With the exception of the Armenian genocide (which is comparable in vileness to many things, including the actions of that wonder of private enterprise, the East India Company)—yes. Not during the late 19th and 20th century, I mean. Turkish landlords might’ve been feudal lords, but they didn’t outright steal the entirety of their tenants’ livelihood from under them.
Let me get this straight: you’re trying to argue that Britain was harsh because some people expressed opposition to a law you like?
The other way around! Many respected people hated and denounced it so much, it famously prompted Dickens to write Oliver Twist.
I knew perfectly well about all of those except the Great Famine before searching, thank you very much! (I used to think there was only one Irish famine.) That’s why I felt confident in saying that 20th century Turkey was not as bad! “Fifteen-minute search” referred to a search for articles to show in support of my argument, not an emergency acquisition of knowledge for myself.
It didn’t fully come into the “Universalist” sphere, ideologically and culturally, until its defeat in WW2, and the most aggressive and violent of its actions were committed in a struggle for expansion against Western dominance.
Konkvistador’s argument would be that it wouldn’t have been able to modernize nearly as effectively if it had come into the “Universalist” sphere before industrializing.
Maybe, I don’t know. On the other hand, maybe it would’ve avoided conquest and genocide if it had come into that sphere before industrializing.
Or maybe my premise above is wrong and its opening in the Meiji era did in fact count as contact with “Universalism”—note that America and Britain’s influence had been considerable there, and Moldbug certainly says that post-Civil War U.S. and post-Chartist Britain (well, he says post-1689, but the Chartist movement definitely was a victory for democracy[1]) were dominated by hardcore Protestant “Universalism”.
1- Although its effects were delayed by some 20 years.
whether you’re killed randomly by criminals (as in the West) or unjustly and with malice by the government (as in Singapore)
You seem to have an overly romantic view of criminals if you think they never kill with malice.
Heck, when the government doesn’t keep them in check, criminal gangs operate like mini-governments that are much worse in terms of warm fuzzies than even Singapore.
As for Moldbug’s diagnosis, I’m unimpressed with his predictive abilities: he predicted Syria would be safe from revolt, right, because it was cozying up to Iran rather than to America? He has an interesting model of the world but, much like Marxism, I’m not sure Moldbuggery has much predictive capacity.
Actually Moldbug’s diagnosis does provide decent predictive power: in the West, at least, Whig history shall continue. The left shall continue to win nearly all battles over what the stated values and norms of our society should be (at least outside the economic realm).
Naturally Whig history makes the same prediction of itself, but the model it uses to explain itself seems built more for a moral universe than the one we inhabit. Not only that, I find the stated narrative of Whig history has some rather glaring flaws. MM’s theories win in my mind simply because they seem an explanation of comparable or lower complexity in which I so far haven’t found comparably problematic flaws.
As for Moldbug’s diagnosis, I’m unimpressed with his predictive abilities: he predicted Syria would be safe from revolt, right, because it was cozying up to Iran rather than to America?
Yes, and notice that unlike Mubarak and Gaddafi, who both (at least partially) cozied up to America, Assad is still in charge of Syria.
Yes, and notice that unlike Mubarak and Gaddafi, who both (at least partially) cozied up to America, Assad is still in charge of Syria.
The prediction Moldbug made was “no civil war in Syria”; not that there would be a civil war but Assad would manage to endure it.
Indeed, in the post I link to, Mencius Moldbug seemed to be predicting that Qaddafi would endure the civil war too, as Moldbug made said post at a point in time when the war was turning in Qaddafi’s favour, and he wrongly predicted that the West would not intervene to perform airstrikes.
One example of a bad consequence of Universalism is the delay of the Singularity.
Not proven. It seems to me that people wildly overdo even the prejudices they have evidence for, so we don’t know how much is lost due to excessive prejudice compared to how much is lost due to insufficient prejudice.
My impression is that we aren’t terribly good yet at understanding how traits which involve many genes play out, whether political correctness is involved or not.
Very true. I think most HBD proponents are somewhat overconfident in their conclusions (though most of them seem more likely than not). But what I think he was getting at is that we would have great difficulty acknowledging it if it were so, and that any scientist who wanted to study this would be in a very rough spot.
Unlike, say, the promotion of the concept of human-caused climate change, which has the support of at least the educated classes, it may be impossible for our society to assimilate such information. It seems more likely that they would rather discredit genetics as a whole, or perhaps psychometrics, or claim the scientists are faking this information out of nefarious motives. This suggests there exists a set of scientific knowledge that our society is unwilling or incapable of assimilating and using in the manner one would expect from a sane civilization.
We don’t know what we don’t know, we do know we simply refuse to know some things. How strong might our refusal be for some elements of the set? What if we end up killing our civilization because of such a failure? Or just waste lives?
I don’t know if you could get away with studying the sort of thing you’re describing if you framed it as “people who are good at IQ tests” or “people who have notable achievements”, rather than aiming directly at ethnic/racial differences. After all, the genes and environment are expressed in individuals.
It’s conceivable but unlikely that the human race is at risk because that one question isn’t addressed.
It’s conceivable but unlikely that the human race is at risk because that one question isn’t addressed.
I think I didn’t do a good job of writing the previous post. I was trying to say that regardless of what the truth is on that one question (and I am uncertain on it, more so than a few months ago), it demonstrates there are questions we as a society can’t deal with.
I wasn’t saying that not understanding the genetic basis of intelligence is a civilization killer (I didn’t mention species extinction, though that is possible as well), which in itself is plausible if various people warning about dysgenics are correct, but that future such questions may be.
I argued that since reality is entangled and our ideology has no consistent relationship with reality we will keep hitting on more and more questions of this kind (ones that our society can’t assimilate) and that knowing the answer to some such questions may turn out to be important for future survival.
A good hypothetical example is a very good theory on the sociology of groups or ethics that makes usable testable predictions, perhaps providing a new perspective on politics, religion and ideology or challenging our interpretation of history. It would be directly relevant to FAI yet it would make some predictions that people will refuse to believe because of tribal affiliation or because it is emotionally too straining.
I argued that since reality is entangled and our ideology has no consistent relationship with reality...
I think this statement is too strong. Our ideology doesn’t have a 100% consistent relationship with reality, true, but that’s not the same as 0%.
A good hypothetical example is a very good theory on the sociology of groups or ethics that makes usable testable predictions, perhaps providing a new perspective on politics, religion and ideology...
What, sort of like Hari Seldon’s psychohistory? Regardless of whether our society can absorb it or not, is such a thing even possible? It may well be that group behavior is ultimately so chaotic that predicting it with that level of fidelity will always be computationally prohibitive (unless someone builds an Oracle AI, that is). I’m not claiming that this is the case (since I’m not a sociologist), but I do think you’re setting the bar rather high.
That hasn’t stopped us from doing incredible feats of artificial selection using phenotype alone. You can work faster and better the more you understand a system on the genetic level, but it’s hardly necessary.
I agree, and have for some time; I didn’t mean to imply otherwise. This especially is, I think, terribly important:
Even though his prescription may be lacking (here is some criticism of neocameralism []), his description and diagnosis of everything wrong with the world is largely correct. Any possible political solution must begin from Moldbug’s diagnosis of all the bad things that come with having Universalism as the most dominant ideology/religion the world has ever experienced.
But currently there is nothing remotely approaching an actionable political plan, so I advocated doing what little good one can despite Cryptocalvinism’s iron grasp on the minds of a large fraction of mankind. As Moldbug says, Universalism has no consistent relation to reality. That is a truly horrifying description of reality if it is accurate, since existential risk reduction will eventually become entangled with some ideologically charged issue or taboo.
I wish I could be hopeful but my best estimate is that humanity is facing a no win scenario here.
all the bad things that come with having Universalism as the most dominant ideology/religion the world has ever experienced
Another thing I’d like to ask you! What are those bad things in your estimate? Or, rather, what areas are we talking about? Are you mainly concerned with censorship, academic dishonesty, bad prediction-making, other theory-related flaws? Or do you find some concrete policy really awful for those epistemic reasons, like state welfare programs, ideological pressure on illiberal regimes or immigration from poor countries? (I chose those examples because I’m in favor of all three, with caveats.)
I know you’re against universal suffrage, but that’s more or less meta-level; is there something you really loathe that directly concerns daily life, its quality, comfort and freedoms? Of course, I know about the policy preferences Mencius himself draws from his doctrine, but his beliefs are… idiosyncratic: e.g. I don’t think you’d agree with him that selling oneself and one’s future children into slavery should be at all acceptable or tolerated.
Of course, I know about the policy preferences Mencius himself draws from his doctrine
That’s more than I’ve managed to get from my reading of him. I get no picture from his writings about what he wants life to be like—“daily life, its quality, comfort and freedoms ”—under his preferred regime, only about what he doesn’t want life to be like under the current regimes.
True, it’s in bits and pieces; but see e.g. the Patchwork series and try some other posts at random. Basically, a good example of his preferences is the “total power, no influence or propaganda” model of Patchwork; in his own words, the Sovereign’s government wouldn’t censor dissenters because it has nothing to fear from them. Sure, I strongly doubt it would work that way, even with a perfectly rational sovereign (the blog post linked to above provides some decent criticism of that from an anarchist POV).

But we nonetheless can conclude that MM would like a comfortable, rich society with liberal mores (although he does all the conservative elderly grumbling about the supposed irresponsibility and flighty behavior of Westerners today [1]) where he wouldn’t ever have to worry about tribal power games or such—enforced with an iron fist, for selfish reasons of productivity and public image, and totally un-hypocritical about that. He’s okay with some redistribution of wealth (the sovereign giving money to private charities it finds worthy, which, being driven mainly by altruism, automatically care for everyone better than a disinterested bureaucracy—again, I’m a little skeptical). Another thing he likes to say is that the capacity for violence within society should be supremely concentrated and overwhelming, and then the rational government supposedly wouldn’t have to actually use it.

And then there are the totally contrarian things like his tolerance for indentured servitude on ideological grounds (look up his posts on “pronomianism”), which, along with his less disagreeable opinions, could well stem from his non-neurotypical (I take Konkvistador’s word, and my impressions) wiring.
[1] When he repeats some trite age-old bullshit about “declining personal morality”—while cheering for no-holds-barred ruthless utilitarianism—that’s when I tolerate him least.
The followers of a religion that holds the Equality of Man as primary tenet will be suppressing any scientific inquiry into what makes us different from one another.
There’s an important question here: WHY do you think people dislike that so much that they’re willing to subvert entire fields of knowledge to censor those inquiries? Please ponder that carefully and answer without any mind-killed screeds, ok?
(I’m not accusing you in advance, it’s just that I’ve read about enough such hostile denunciations from the “Internet right” who literally say that “Universalists/The Left/whoever” simply Hate Truth and like to screw with decent society. Oh, and the “Men’s Rights” crowd often suggests that those who fear inequality like that just exhibit pathetic weak woman-like thinking that mirrors their despicable lack of masculinity in other areas. And Cthulhu help you if you are actually a woman who thinks like that! Damn, I can’t stand those dickheads.)
Of course, I’d like others here to also provide their perspective on probable reasons for such behavior! Don’t pull any punches; if it just overwhelmingly looks like people with my beliefs are underdeveloped mentally and somewhat insane, I’ll swallow that—but avoid pettiness, please.
After reading that sentence, I expected some rather radical eugenics advocacy. Then I followed that link and saw that all those suggestions (except maybe for cloning, but we can hardly know about that in advance) are really “nice” and inoffensive. Seriously, I think that if even I, who’s pretty damn orthodox and brainwashed—a dyed-in-the-wool leftist, as it is—haven’t felt a twinge, then you must be overestimating how superstitious and barbaric an educated Universalist is in regards to that problem.
--Mencius Moldbug, on belief as attire and conspicuous wrongness.
Source.
This reminds me of the following passage from We Need to Talk About Kevin by Lionel Shriver:
Possible additional factor: The truth is frequently boring—it helps to add some absurdity just to get people’s attention. Once you’ve got people’s attention, proof of loyalty can come into play.
Also relevant.
This reminds me of Baudrillard, I might come back in a few days with a Baudrillard rationality quote.
More quotes by Mencius Moldbug:
They are all from the article A Reservationist Epistemology
Surely the actual Bayesian rational mind’s conclusion is that the attacker will (probably) always show a blue ball, nothing to do with the urn at all.
Solomonoff prior gives nonzero probability to the attacker deceiving us. But humans are not very good at operating with such probabilities precisely.
I just facepalmed the hardest I’ve ever done while reading Unqualified Reservations. That is, not very hard—Mencius is nothing if not a charming and polite author—but still. Maybe he really ought to read at least one Sequence!
Could we start that reading with the classic Bayes’ Theorem example? Suppose 1% of women have breast cancer, 80% of mammograms on a cancerous woman will detect it, 9.6% on an uncancerous woman will be false positives. Suppose woman A gets a mammogram which indicates cancer. What are the odds she has cancer?
p(A|X) = p(X|A)p(A) / (p(X|A)p(A) + p(X|~A)p(~A)) ≈ 7.8%. Hooray?
Now suppose women B, C, D, E, F… Z, AA, AB, AC, AD, etc., the entire patient list getting screened today, all test positive for cancer. Is the probability that woman A has cancer still 7.8%? Bayes’ rule, with the priors above, still says “yes”! You need more complicated prior probabilities (e.g. what are the odds that the test equipment is malfunctioning?) before your evidence can tell you what’s actually likely to be happening. But those more complicated, more accurate priors would have (very slightly) changed our original p(A|X) as well!
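The arithmetic here is easy to check; a minimal sketch in Python (the numbers are the hypothetical ones from the worked example, not real epidemiology):

```python
# Hypothetical numbers from the worked example above.
p_cancer = 0.01              # prior: 1% of women have breast cancer
p_pos_given_cancer = 0.80    # sensitivity: 80% of cancers are detected
p_pos_given_healthy = 0.096  # false-positive rate

# Bayes' theorem: P(cancer | positive test)
numerator = p_pos_given_cancer * p_cancer
denominator = numerator + p_pos_given_healthy * (1 - p_cancer)
posterior = numerator / denominator
print(f"P(cancer | positive) = {posterior:.3f}")  # prints 0.078

# Because this model treats tests as independent given each woman's
# cancer status, woman A's posterior stays 7.8% no matter how many
# other women test positive today. Only a richer prior, one that gives
# "the machine is broken" nonzero probability, can change that.
```

Note that the 7.8% figure is entirely an artifact of the model handed to the formula; the formula itself has no way to notice that the model has gone wrong.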
It’s not that Bayesian updating is wrong. It’s just that Bayes’ theorem never allows you to have a non-zero posterior probability coming from a zero prior, and to make any practical problem tractable everybody ends up implicitly assuming huge swaths of zero prior probability.
It’s not assuming zero probability. It’s assuming independence. Under the original model, it’s possible for all the women to get positives, but only 1% to actually have breast cancer. It’s just that a better prior would give a much higher probability.
Is there any practical difference between “assuming independent results” and “assuming zero probability for all models which do not generate independent results”? If not then I think we’ve just been exposed to people using different terminology.
No.
I think it’s more than terminology. And if Mencius can be dismissed as someone who does not really get Bayesian inference, one can surely not say the same of Cosma Shalizi, who has made the same argument somewhere on his blog. (It was a few years ago and I can’t easily find a link. It might have been in a technical report or a published paper instead.) Suppose a Bayesian is trying to estimate the mean of a normal distribution from incoming data. He has a prior distribution of the mean, and each new observation updates that prior. But what if the data are not drawn from a normal distribution, but from the sum of two such distributions with well separated peaks? The Bayesian (he says) can never discover that. Instead, his estimate of the position of the single peak that he is committed to will wander up and down between the two real peaks, like the Flying Dutchman cursed never to find a port, while the posterior probability of seeing the data that he has seen plummets (on the log-odds scale) towards minus infinity. But he cannot avoid this: no evidence can let him update towards anything his prior gives zero probability to.
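Shalizi's scenario is easy to simulate. Here is my own toy sketch (not his code): a conjugate-normal Bayesian estimates the mean of a single Gaussian, while the data secretly come from a two-peaked mixture.

```python
import random

random.seed(0)

# Conjugate update for the mean of a N(mu, 1) model:
# prior N(m, 1/k)  ->  posterior N((k*m + x)/(k + 1), 1/(k + 1)).
m, k = 0.0, 1e-6   # an almost-flat prior over the mean
log_lik = 0.0
for _ in range(10_000):
    # The data actually come from a 50/50 mixture of N(-5, 1) and N(+5, 1).
    x = random.gauss(-5, 1) if random.random() < 0.5 else random.gauss(5, 1)
    m = (k * m + x) / (k + 1)
    k += 1
    log_lik += -0.5 * (x - m) ** 2   # log-likelihood under the single-peak estimate

# The estimate settles near 0, in the empty valley between the peaks,
# and the log-likelihood plummets; but no evidence can push the model
# outside the single-Gaussian family the prior is committed to.
print(f"posterior mean: {m:.2f}, log-likelihood: {log_lik:.0f}")
```

The posterior over the mean becomes ever more confident even as the model's fit to the data gets catastrophically worse, which is exactly the Flying Dutchman behaviour described above.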
What (he says) can save the Bayesian from this fate? Model-checking. Look at the data and see if they are actually consistent with any model in the class you are trying to fit. If not, think of a better model and fit that.
Andrew Gelman says the same; there’s a chapter of his book devoted to model checking. And here’s a paper by both of them on Bayesian inference and philosophy of science, in which they explicitly describe model-checking as “non-Bayesian checking of Bayesian models”. My impression (not being a statistician) is that their view is currently the standard one.
I believe the hard-line Bayesian response to that would be that model checking should itself be a Bayesian process. (I’m distancing myself from this claim, because as a non-statistician, I don’t need to have any position on this. I just want to see the position stated here.) The single-peaked prior in Shalizi’s story was merely a conditional one: supposing the true distribution to be in that family, the Bayesian estimate does indeed behave in that way. But all we have to do to save the Bayesian from a fate worse than frequentism is to widen the picture. That prior was merely a subset, worked with for computational convenience, but in the true prior, that prior only accounted for some fraction p<1 of the probability mass, the remaining 1-p being assigned to “something else”. Then when the data fail to conform to any single Gaussian, the “something else” alternative will eventually overshadow the Gaussian model, and will need to be expanded into more detail.
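The "widened prior" move can be sketched as a toy model comparison. All the specifics here (the fixed-parameter Gaussian, the uniform catch-all on [-10, 10], the 99% starting confidence) are my own illustrative choices, not anything from Shalizi or Gelman:

```python
import math
import random

random.seed(1)

def gauss_pdf(x, mu=0.0, sigma=3.0):
    # density of the single-Gaussian model (fixed parameters, for simplicity)
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

UNIFORM_DENSITY = 1 / 20          # the vague "something else": uniform on [-10, 10]
log_odds = math.log(0.99 / 0.01)  # start 99% confident in the Gaussian model

for _ in range(200):
    # two-peaked data, as in Shalizi's story
    x = random.gauss(-5, 1) if random.random() < 0.5 else random.gauss(5, 1)
    # per-observation Bayes factor: Gaussian model vs. catch-all
    log_odds += math.log(gauss_pdf(x)) - math.log(UNIFORM_DENSITY)

# The catch-all overtakes the Gaussian, signalling "expand me into a real model".
print("Gaussian model still favoured?", log_odds > 0)  # prints: ... False
```

Even a very vague alternative hypothesis eventually wins against a family of models that systematically misses the data, which is all the hard-line Bayesian needs to trigger the expansion step.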
“But,” the soft Bayesians might say, “how do you expand that ‘something else’ into new models by Bayesian means? You would need a universal prior, a prior whose support includes every possible hypothesis. Where do you get one of those? Solomonoff? Ha! And if what you actually do when your model doesn’t fit looks the same as what we do, why pretend it’s Bayesian inference?”
I suppose this would be Eliezer’s answer to that last question.
I am not persuaded that the harder Bayesians have any more concrete answer. Solomonoff induction is uncomputable and seems to unnaturally favour short hypotheses involving Busy-Beaver-sized numbers. And any computable approximation to it looks to me like brute-forcing an NP-hard problem.
In response to:
and
I think a hard line needs to be drawn between statistics and epistemology. Statistics is merely a method of approximating epistemology—though a very useful one. The best statistical method in a given situation is the one that best approximates correct epistemology. (I’m not saying this is the only use for statistics, but I can’t seem to make sense of it otherwise)
Now suppose Bayesian epistemology is correct—i.e. let’s say Cox’s theorem + Solomonoff prior. The correct answer to any induction problem is to do the true Bayesian update implied by this epistemology, but that’s not computable. Statistics gives us some common ways to get around this problem. Here are a couple:
1) Bayesian statistics approach: restrict the class of possible models and put a reasonable prior over that class, then do the Bayesian update. This has exactly the same problem that Mencius and Cosma pointed out.
2) Frequentist statistics approach: restrict the class of possible models and come up with a consistent estimate of which model in that class is correct. This has all the problems that Bayesians constantly criticize frequentists for, but it typically allows for a much wider class of possible models in some sense (crucially, you often don’t have to assume distributional forms)
3) Something hybrid: e.g., Bayesian statistics with model checking. Empirical Bayes (where the prior is estimated from the data). Etc.
Now superficially, 1) looks the most like the true Bayesian update—you don’t look at the data twice, and you’re actually performing a Bayesian update. But you don’t get points for looking like the true Bayesian update, you get points for giving the same answer as the true Bayesian update. If you do 1), there’s always some chance that the class of models you’ve chosen is too restrictive for some reason. Theoretically you could continue to do 1) by just expanding the class of possible models and putting a prior over that class, but at some point that becomes computationally infeasible. Model checking is a computationally feasible way of approximating this process. And, a priori, I see no reason to think that some frequentist method won’t give the best computationally feasible approximation in some situation.
So, basically, a “hardline Bayesian” should do model checking and sometimes even frequentist statistics. (Similarly, a “hardline frequentist” in the epistemological sense should sometimes do Bayesian statistics. And, in fact, they do this all the time in econometrics.)
See my similar comments here and here.
I find this a curious thing to say. Isn’t this an argument against every possible remotely optimal computable form of induction or decision-making? Of course a good computable approximation may wind up spending lots of resources solving a problem if that problem is important enough, this is not a blackmark against it. Problems in the real world can be hard, so dealing with them may not be easy!
“Omega flies up to you and hands you a box containing the Secrets of Immortality; the box is opened by the solution to an NP problem inscribed on it.” Is the optimal solution really to not even try the problem—because then you’re trying “brute-forcing an NP-hard problem”! - even if it turns out to be one of the majority of easily-solved problems? “You start a business and discover one of your problems is NP-hard. You immediately declare bankruptcy because your optimal induction optimally infers that the problem cannot be solved and this most optimally limits your losses.”
And why NP-hard, exactly? You know there are a ton of harder complexity classes in the complexity zoo, right?
The right answer is simply to point out that the worst case of the optimal algorithm is going to be the worst case of all possible problems presented, and this is exactly what we would expect since there is no magic fairy dust which will collapse all problems to constant-time solutions.
There might well be a theorem formalising that statement. There might also be one formalising the statement that every remotely optimal form of induction or decision-making is uncomputable. If that’s the way it is, well, that’s the way it is.
This is an argument of the form “Suppose X were true—then X would be true! So couldn’t X be true?”
You try to find a method that solves enough examples of the NP-hard problem well enough to sell the solutions, such that your more bounded ambition puts you back in the realm of P. This is done all the time—freight scheduling software, for example. Or airline ticket price searching. Part of designing optimising compilers is not attempting analyses that take insanely long.
Harder classes are subsets of NP-hard, and everything in NP-hard is hard enough to make the point. Of course, there is the whole uncomputability zoo above all that, but computing the uncomputable is even more of a wild goose chase. “Omega flies up to you and hands you a box containing the Secrets of Immortality; for every digit of Chaitin’s Omega you correctly type in, you get an extra year, and it stops working after the first wrong answer”.
No, this is pointing out that if you provide an optimal outcome barricaded by a particular obstacle, then that optimal outcome will trivially be at least as hard as that obstacle.
This is exactly the point made for computable approximations to AIXI. Thank you for agreeing.
Are you sure you want to make that claim? That all harder classes are subsets of NP-hard?
Fantastic! I claim my extra 43 years of life.
No, carelessness on my part. Doesn’t affect my original point, that schemes for approximating Solomonoff or AIXI look like at least exponential brute force search.
Since AIXI is, by construction, the best possible intelligent agent, all work on AGI can, in a rather useless sense, be described as an approximation to AIXI. To the extent that such an attempt works (i.e. gets substantially further than past attempts at AGI), it will be because of new ideas not discovered by brute force search, not because it approximates AIXI.
43 years is a poor sort of immortality.
Well, yeah. Again—why would you expect anything else? Given that there exist problems which require that or worse for solution? How can a universal problem solver do any better?
Yes.
No. Given how strange and different AIXI is, it can easily stimulate new ideas.
It’s more than I had before.
The spin-off argument. Here’s a huge compendium of spinoffs of previous approaches to AGI. All very useful, but not AGI. I’m not expecting better from AIXI.
Hm, so let’s see; you started off mocking the impossibility and infeasibility of AIXI and any computable version:
Then you admitted that actually every working solution can be seen as a form of SI/AIXI:
And now you’re down to arguing that it’ll be “very useful, but not AGI”.
Well, I guess I can settle for that.
I stand by the first quote. Every working solution can in a useless sense be seen as a form of SI/AIXI. The sense that a hot-air balloon can be seen as an approach to landing on the Moon.
At the very most. Whether AIXI-like algorithms get into the next edition of Russell and Norvig, having proved of practical value, well, history will decide that, and I’m not interested in predicting it. I will predict that it won’t prove to be a viable approach to AGI.
How can a hot air balloon even in theory be seen as that? Hot air has a specific limit, does it not—where its density equals the outside density?
Isn’t there a very wide middle ground between (1) assigning 100% of your mental probability to a single model, like a normal curve and (2) assigning your mental probability proportionately across every conceivable model ala Solomonoff?
I mean the whole approach here sounds more philosophical than practical. If you have any kind of constraint on your computing power, and you are trying to identify a model that most fully and simply explains a set of observed data, then it seems like the obvious way to use your computing power is to put about a quarter of your computing cycles on testing your preferred model, another quarter on testing mild variations on that model, another quarter on all different common distribution curves out of the back of your freshman statistics textbook, and the final quarter on brute-force fitting the data as best you can given that your priors about what kind of model to use for this data seem to be inaccurate.
I can’t imagine any human being who is smart enough to run a statistical modeling exercise yet foolish enough to cycle between two peaks forever without ever questioning the assumption of a single peak, nor any human being foolish enough to test every imaginable hypothesis, even including hypotheses that are infinitely more complicated than the data they seek to explain. Why would we program computers (or design algorithms) to be stupider than we are? If you actually want to solve a problem, you try to get the computer to at least model your best cognitive features, if not improve on them. Am I missing something here?
Yes, the question is what that middle ground looks like—how you actually come up with new models. Gelman and Shalizi say it’s a non-Bayesian process depending on human judgement. The behaviour that you rightly say is absurd, of the Bayesian Flying Dutchman, is indeed Shalizi’s reductio ad absurdum of universal Bayesianism. I’m not sure what gwern has just been arguing, but it looks like doing whatever gets results through the week while going to the church of Solomonoff on Sundays.
An algorithmic method of finding new hypotheses that works better than people is equivalent to AGI, so this is not an issue I expect to see solved any time soon.
Eh. What seems AGI-ish to me is making models interact fruitfully across domains; algorithmic models to find new hypotheses for a particular set of data are not that tough and already exist (and are ‘better than people’ in the sense that they require far less computational effort and are far more precise at distinguishing between models).
Yes, I had in mind a universal algorithmic method, rather than a niche application.
The hypothesis-discovery methods are universal; you just need to feed them data. My view is that the hard part is picking what data to feed them, and what to do with the models they discover.
Edit: I should specify, the models discovered grow in complexity based on the data provided, and so it’s very difficult to go meta (i.e. run hypothesis discovery on the hypotheses you’ve discovered), because the amount of data you need grows very rapidly.
Hmmm. Are we going to see a Nobel awarded to an AI any time soon?
I don’t think any robot scientists would be eligible for Nobel prizes; Nobel’s will specifies persons. We’ve had robot scientists for almost a decade now, but they tend to excel in routine and easily automatized areas. I don’t think they will make Nobel-level contributions anytime soon, and by the time they do, the intelligence explosion will be underway.
What if, as a computational approximation of the universal prior, we use genetic algorithms to generate a collection of agents, each using different heuristics to generate hypotheses? I mean, there’s probably better approximations than that; but we have strong evidence that this one works and is computable.
Whatever approach to AGI anyone has, let them go ahead and try it, and see if it works. Ok, that would be rash advice if I thought it would work (because of UFAI), but if it has any chance of working, the only way to find out is to try it.
I’m not saying I’m willing to code that up; I’m just saying that a genetic algorithm (such as Evolution) creating agents which use heuristics to generate hypotheses (such as humans) can work at least as well as anything we’ve got so far.
If you have a few billion years to wait.
No reason that can’t be sped up.
Eh. I like the approach of “begin with a simple system hypothesis, and when your residuals aren’t distributed the way you want them to be, construct a more complicated hypothesis based on where the simple hypothesis failed.” It’s tractable (this is the elevator-talk version of one of the techniques my lab uses for modeling manufacturing systems), and seems like a decent approximation of Solomonoff induction on the space of system models.
It’s basically different terminology. His point is valid.
A model isn’t something you assign probability to. It’s something you use to come up with a set of prior probabilities. The model he used assumed independence. It didn’t actually assign zero probability to any result. It doesn’t assign a probability, zero or otherwise, to the machine being broken, because that’s not something that’s considered. It also doesn’t assign a probability to whether or not it’s raining.
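To make this concrete, here is a toy sketch (my own construction) of a Bayesian updater whose model assumes independent draws, as in the urn example: "the machine is broken" or "someone is filtering the stream" is simply absent from its hypothesis space, so no probability, zero or otherwise, is ever assigned to it.

```python
# Toy illustration (my own construction): a Bayesian updater whose
# model assumes draws are independent. Hypotheses outside the model,
# like "an adversary is filtering the stream", never get a probability
# at all; they simply aren't considered.

# Hypothesis space: the fraction of blue balls in the urn.
hypotheses = [0.0, 0.25, 0.5, 0.75, 1.0]
prior = {h: 1 / len(hypotheses) for h in hypotheses}

def update(posterior, observed_blue):
    """Standard Bayes update, assuming each draw is independent."""
    likelihood = {h: (h if observed_blue else 1 - h) for h in posterior}
    unnorm = {h: posterior[h] * likelihood[h] for h in posterior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# An adversary discards every red draw and reports only blue ones,
# even though the urn is actually half red.
posterior = dict(prior)
for _ in range(50):
    posterior = update(posterior, observed_blue=True)

best = max(posterior, key=posterior.get)
print(best)  # the "all blue" hypothesis dominates
```

The updater is doing Bayes correctly within its model; the failure is that the model's hypothesis space was fixed before the adversary showed up, which is the point being made here.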
From the little Moldbug I’ve been able to slog through, my main impression of him is “reader-hostile”. If he were polite maybe he would get to the effing point already.
I think his point is that you are still entirely unable to even enumerate, let alone process, all the relevant hypotheses, nor does the formula inform you of those, nor does it inform you how to deal with cyclic updates (or even that those are a complicated case), etc.
It’s particularly bad when it comes to what rationalists describe as “expected utility calculations”. The ideal expected utility is a sum, over all hypotheses, of the differential effect of the actions being compared, multiplied by the probability of each hypothesis. A single component of that sum provides little or no information about the value of the whole, especially when the component is picked by someone with a financial interest as strong as “if I don’t convince these people I can’t pay my rent”. Then the actions themselves have an impact on future decision making, which makes the expected-value sum grow and branch out like some crazy googol-headed fractal hydra. Mostly, when someone is talking a lot about Bayes, they have some simple and invalid expected-value calculation that they want you to perform and act upon, so that you’ll be worse off in the end and they’ll be better off.
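A deliberately contrived illustration (all numbers invented) of how a single cherry-picked term of that sum can mislead:

```python
# Purely hypothetical numbers, to illustrate the point: the expected
# differential utility of an action is a sum over ALL hypotheses, and
# a single cherry-picked term can have the opposite sign of the total.

# (probability, differential utility of action A over action B)
hypotheses = [
    (0.125, 200.0),   # the dramatic scenario a persuader highlights
    (0.500, -40.0),   # the mundane, likely outcomes...
    (0.375, -40.0),   # ...where the action quietly costs you
]

highlighted_term = hypotheses[0][0] * hypotheses[0][1]
full_sum = sum(p * du for p, du in hypotheses)

print(highlighted_term)  # 25.0: the action looks clearly worth taking
print(full_sum)          # -10.0: over all hypotheses it is a net loss
```

Someone showing you only the first row is doing arithmetic correctly on a sum they have quietly truncated, which is why the composition of the hypothesis list matters far more than the multiplication.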
And yet he wants a pragmatically motivated society.
A man can dream, can’t he? Note he isn’t advocating nonsense as an organizing tool; much of his wackier thought is precisely about trying to make an organizing tool work as well as nonsense does. Unfortunately I don’t think he has succeeded, since in my opinion neocameralism is unlikely to be implemented and likely to blow up if someone did implement it.
I agree, except that some of my own wacky thought (well, it’s hardly original, of course) basically says that nonsense isn’t a “bad” at all—not for anyone whom we might reasonably call human. For example, as has been pointed out here, people have in-built hypocrisy mechanisms to cope with various kinds of “faith”; but if you truly believe that what you’re doing is “rational” and commonsensically correct, you’re left driving at enormous speed without brakes, and the likely damage might be great enough that no one should ever aspire to “rational” thinking.
Also:
Orwell’s diary, 20th March, 1941
Even though his prescription may be lacking (here is some criticism of neocameralism: http://unruled.blogspot.com/2008/06/about-fnargocracy.html ), his description and diagnosis of everything wrong with the world is largely correct. Any possible political solution must begin from Moldbug’s diagnosis of all the bad things that come with having Universalism as the most dominant ideology/religion the world has ever experienced.
One example of a bad consequence of Universalism is the delay of the Singularity. If you, for example, want to find out why Jews are more intelligent on average than Blacks, the system will NOT support your work and will even ostracize you for being racist, even though that knowledge might one day prove invaluable to understanding intelligence and building an intelligent machine (and also to helping the people who are less fortunate in the genetic lottery). The followers of a religion that holds the Equality of Man as a primary tenet will suppress any scientific inquiry into what makes us different from one another. Universalism is the reason why common-sense proposals like those of Greg Cochran ( http://westhunt.wordpress.com/2012/03/09/get-smart/ ) will never be official policy. While we don’t have the knowledge to create machines of higher intelligence than us, we do know how to create a smarter next generation of human beings. Scientific progress, economic growth and civilization in general are proportional to the number of intelligent people and inversely proportional to the number of not-so-smart people. We need more smart people (at least until we can build smarter machines), so that we all may benefit from the products of their minds.
From the Greg Cochran link:
It’s worth pointing out that at least part of the opposition to government-run eugenics programs is rational fear that the government will corrupt the process. If a country started a program of tax breaks for high-IQ people having children, and perhaps higher taxes for low-IQ people having children, a corrupt government would twist its policies and official IQ-testing agency to actually reward, for example, reproduction among people who vote for [whatever the majority party currently is]. It’s a similar rationale to the one against literacy tests for voting: sure, maybe illiterate people can’t be informed voters, but trusting the government to decide who’s too illiterate to vote leads to perverse incentives.
Absolutely. It would start with: “Everyone (accepted as an expert by our party) agrees that the classical IQ tests developed by psychometric methods are too simple and they don’t cover the whole spectrum of human intelligence. Luckily, here is a new improved test developed by our best experts that includes the less mathematical aspects of intelligence, such as having a correct attitude towards [insert political topic]. Recognizing the superiority of this test to the classical tests already gives you five points!”
Also, governments are notoriously bad at making broad and costly social policies that will only give a return on investment “in a few centuries or less”. We’re not talking just beyond the next elections, the party, the politicians, even the whole state may not even exist by then.
“Will”.
That seems a little bit simplistic. How many problems have been caused by smart people attempting to implement plans which seem theoretically sound, but fail catastrophically in practice? The not-so-smart people are not inclined to come up with such plans in the first place. In my view, the people inclined to cause the greatest problems are the smart ones who are certain that they are right, particularly when they have the ability to convince other smart people that they are right, even when the empirical evidence does not seem to support their claims.
While people may not agree with me on this, I find the theory of “rational addiction” within contemporary economics to carry many of the hallmarks of this way of thinking. It is mathematically justified using impressively complex models and selective post-hoc definitions of terms and makes a number of empirically unfalsifiable claims. You would have to be fairly intelligent to be persuaded by the mathematical models in the first place, but that doesn’t make it right.
Basically, my point is: it is better to have to deal with not-so-smart irrational people than with intelligent and persuasive people who are not very rational. The problems caused by the former are lesser in scale.
The theory of “rational addiction” seems like an example of the fact that for any (consistent) behavior you can find a utility function that the behavior maximizes. But that does not mean it is really a human utility function.
For an intelligent and persuasive person it may be a rational (as in: maximizing their utility, such as status or money) choice to produce fashionable nonsense.
True. I guess it’s just that the consequences of such actions can often lead to a large amount of negative utility according to my own utility function, which I like to think of as more universalist than egoist. But people who are selfish, rational and intelligent can, of course, cause severe problems (according to the utility functions of others at least). This, I gather, is fairly well understood. That’s probably why those characteristics describe the greater proportion of Hollywood villains.
Hollywood villains are gifted people who pathologically neglect their self-deception. With enough self-deception, everyone can be a hero of their own story. I would guess most authors of fashionable nonsense kind of believe what they say. This is why opposing them would be too complicated for a Hollywood script.
Yes! I’m glad that someone is with me on this.
And yet, it’s the “Universalist” system that allows Jews to not get exterminated. I think the cognitive and epistemological flaws of “Universalism” kinda make some people ignore the fact that it’s also the system that allows the physical existence of heretics more than any other system in existence ever has.
Was (non-Universalist) Nazi Germany more open to accepting Jew-produced science than the “Universalist” West was? Or is the current non-Universalist Arab world more open to such? Were the previous feudal systems better at accepting atheists or Jewish people? Which non-universalist (and non-Jewish) system was actually better than “Universalism” at recognizing Jewish contributions or intelligence, that you would choose to criticize Universalism for being otherwise? Or better at not killing heretics?
Let’s keep it simple—which non-Universalist nation has ever been willing to allow as much relative influence to Jewish people as Universalist systems have?
As for Moldbug’s diagnosis, I’m unimpressed with his predictive abilities: he predicted Syria would be safe from revolt, right, because it was cozying up to Iran rather than to America? He has an interesting model of the world but, much like Marxism, I’m not sure Moldbuggery has much predictive capacity.
I agree. In my mind this is its great redeeming feature and the main reason I think I still endorse universalism despite entertaining much of the criticism of it. At the end of the day I still want to live in a Western Social Democracy, just maybe one that has a libertarian (and I know this may sound odd coming from me) real multicultural bent with regards to some issues.
The same is true of the Roman and Byzantine empires. The Caliphate too. Also true of Communist regimes. Many absolute monarchies, now that I think about it. Also, I’m pretty sure the traditional Indian caste system could keep Jews safe as well.
If Amy Chua is right democracy (a holy word of universalism) may in the long run put market dominant minorities like the Jews more at risk than some alternatives. Introducing democracy and other universalist memes in the Middle East has likely doomed the Christian minorities there for example.
I’m not quite sure why particularly the Jewish people matter so very much to you in this example. I’m sure you aren’t searching for the trivial answer (which would be “in any ancient and medieval Jewish state or nation”).
If you are using Jews here as an emblem of invoking the horrors of Nazism, can’t we at least throw a bone to Gypsy and Polish victims? And since we did that can we now judge Communism by the same standard? Moldbug would say that Communism is just a country getting sick with a particularly bad case of universalism.
The thing is, Universalism as it exists now doesn’t seem to be stable. The reason one sees all this clever (and I mean clever in the bad, overly complicating, overly contrarian sense of the word) arguing against “universalism” online in the late 2000s is that the comfortable, heretic-tolerating universalism of the second half of the 20th century seems to be slowly changing into something else. The heretics have nowhere else to go but online.

The economic benefits and comforts for most of its citizens are being dismantled, and the space of acceptable opinion seems to be shrinking. As technology that enables the surveillance of citizens and the enforcement of social norms by peers advances, there doesn’t seem to be any force really counteracting it. If you transgress, if you are a heretic in the 21st century, you will remain one for your entire life, as your name is one Google search away from your sin. As mobs organized via social media or apps become more and more a reality, a political reality, how long will such people remain physically safe? How do you explain to the people beating you that you recanted your heresy years ago?

Recall how pogroms were usually the affair of angry low-class peasants. You don’t need the Stasi to eliminate people. The mob can work as well. You don’t need a concentration camp when you have the machete. And while modern tech makes the state more powerful, since surveillance is easier, it also makes the mob more powerful. Remaining under the protection of the state, not just legal but de facto, becomes more and more vital. The room for dissent thus shrinks even if stated ideals and norms remain as they were before.
And I don’t think they will remain such. While most people carrying universalist memes are wildly optimistic about its liberty-enhancing, “information wants to be free” aspect, the fact remains that this new technology seems to have also massively increased the viability and reach of Anarcho-Tyranny.
The personal psychological costs of living up to universalist ideals and internalizing them seem to be rising as well. To illustrate what I mean by this, consider the practical sexual ethics of, say, Elizabethan England and Victorian England. On the surface and in their stated norms they don’t differ much, yet the latter arguably uses up far more resources and places a greater cognitive burden of socialization on its members to enforce them.
Now consider the various universalist standards of personal behaviour that are normative in 2012 and in 1972. They aren’t that different in stated ideals, but the practical costs have arguably risen.
nykos was the one who used the example of Jewish superior intelligence not being acknowledged as such by Universalism. My point was that there have been hardly any non-Universalist systems that could even tolerate equal Jewish participation, let alone acknowledge Ashkenazi superiority.
Thank you, I missed that context. Sorry.
I see no proof of that. What economic benefits and comforts? Sure, real wages in Western countries have stopped growing around the 1970s, but e.g. where welfare programs are being cut following the current crisis, it’s certainly not the liberals but economically conservative governments championing the cuts.
I don’t understand. Do you mean prestigious norms like “never avoid poor neighbourhoods for your personal safety, because it’s supposedly un-egalitarian”, or what? What other norms like that exist that are harmful in daily life?
What’s happening is, to paraphrase Thatcher, that governments are running out of other people’s money. Yes, conservative parties are more willing to acknowledge this fact, but liberal parties don’t have any viable alternatives, and it was their economic policies that led to this state of affairs.
Hmm? And in places where fiscally conservative parties were at the helm before the crisis? What about them?
The places that are being hardest hit have been ruled by left wing parties for most of the time since at least the 1970s. Also in these places the right wing parties aren’t all that right wing.
Are the Scandinavian nations among the ones hit hardest? Or, say, Poland?
You’ve got to make it more general, that’s where it gets interesting! Speaking frankly, from the selfish viewpoint of a typical Western person, the Universalist system has been better than any other system at everything for more than a century, especially at the quality and complexity of life for the average citizen. Of course, Moldbug’s adherents would argue that there’s no dependency between these two unique, never-before-seen facts of civilization—universalist ideology and an explosive growth in human development for the bottom 90% of society. They’d say that both are symptoms of more rapid and thoroughly supported technological progress than elsewhere.
Let’s concede that (although there are reasons to challenge it—see e.g. Weber’s The Protestant Ethic and the Spirit of Capitalism, an early argument that religion morphing into a secular quasi-theocracy is what gave the West its edge). Okay, so if both things are the results of our civilization’s unique historical path… then, from a utilitarian POV, the cost of universalism is still easily worth paying! We know of no society that advanced to an industrial and then post-industrial state without universalism, so it would in practice be impossible to alter any feature of technical and social change to exclude the dominance of universalist ideology but keep the pace of “good” progress. Then, even assuming that universalist ideology is single-handedly responsible for the entirety of the 20th century’s wars and mass murder (and other evils), it is still preferable to the accumulated daily misery of the traditional pre-industrial civilization—especially so for everyone who voted “Torture” on “Torture vs Specks”! (I didn’t, but here I feel differently, because it’s “Horrible torture and murder” vs “Lots and lots of average torture”.)
Moldbug isn’t arguing that we should get rid of some technology and its comforts in order to also get rid of universalism, and he certainly does recognize both as major aspects of modernity; no, he is saying that precisely technological progress now enables us to get rid of the parasitic aspect of modernity, “universalism”. One can make a case that, since it inflames some biases, it is slowing down technological progress and the benefits it brings. Peter Thiel is arguably concerned precisely by this influence when he talks of a technological slowdown. Universalism not only carries opportunity costs, it has historically often broken out in Luddite strains. Consider, for example, something like the FDA. Recall what criticisms of that institution are often heard on LW; aren’t these same criticisms, when consistently applied, basically hostile to the Cathedral?
Whether MM is right or wrong, what you present seems like a bit of a false dilemma. You certainly are right that we haven’t seen societies advance to an industrial or post-industrial state without at least some influence of universalism, but it is hard to deny that we do observe varying degrees of such penetration. Moldbug’s idea is that even if we can’t use technology to get rid of the memeplex in question by social manoeuvring, we can still perhaps find a better trade-off by not taking “universalism” so seriously. The vast majority of people, the 90% you invoke, may be significantly better off in a world where every city is Singapore than in a world where every city is London.
It is no mystery which of these two is more in line with universalist ideals.
And could you please name those ideals once again? Because it’s very confusing.
In the case of Singapore vs. London (implicitly including the governing structure of Britain since London isn’t a city state)? A few I can think of straight away:
Democratic decision making. Therapeutic rather than punitive law enforcement. Lenient punishment of crime. Absence of censorship.
Naturally, all of these aren’t fully realized in London either. Britain doesn’t have real free speech, yet it has much more of it than Singapore. Britain has (in my opinion) silly and draconian anti-drug laws, but it doesn’t execute people for smuggling drugs. London doesn’t have corporal or capital punishment. The parties in Britain are mostly the same class of people, yet at least Cerberus (Lib/Lab/Con) has three heads, and you get to vote for the one that promises to gnaw at you the least. Singapore is democratic in form only, and the form is a very transparent cover: only one party has a chance of victory, and it has been that way and will remain that way for some time.
Yet despite all these infractions against stated Western ideals, life isn’t measurably massively worse in Singapore than in London. And Singapore seems to work better as a multi-ethnic society than London. The world is globalizing; de facto multiculturalism is the destined future of every city from Vladivostok to Santiago, so the Davos men tell us. No place like Norway or Japan in our future, but elections where we will see ethnic blocs and identity politics. I don’t know about you, but I prefer Lee Kuan Yew to that mess of tribal politics. Which city would deal better with a riot? Actually, which city is more likely to have a riot? Recall what Lee said in his autobiography and interviews about what he learned from the 1960s riots. Did it work? It sure looks like it did. Also recall from what Singapore started, and where surrounding Malaysia, from which it diverged, is today. Which is the better model to pull the global south out of poverty? Which is the better model for the world’s peoples to live side by side? Which place will likely be safer, more liveable and more prosperous in 20 years’ time?
It seems in my eyes that Singapore is clearly winning in such a comparison. Yet clearly it does so precisely by ignoring several universalist ideals. Strangely they didn’t seem to have needed to give up iPods and other marvels of modern technology to do it either.
Taboo “worse”!
If by life not being “worse” you mean the annual income or the quality of healthcare or the amount of street crime, maybe it’s so. If one values e.g. being able to contribute to a news website without fear of fines or imprisonment (see e.g. Gibson’s famous essay where he mentions that releasing information about Singapore’s GDP could be punished with death), or not fearing for the life of a friend whom you smoke marijuana with, or being able to think that the government is at least a little bit afraid of you (this not necessarily being real, just a pleasant delusion to entertain, like so many others we can’t live without)… in short, if one values the less concrete and material things that speak to our more complex instincts, it’s not nearly so one-sided.
That’s why I dislike utilitarianism; it says without qualification that a life always weighs the same, whatever psychological climate it is lived in (the differences are obvious as soon as you step off a plane, I think—see Gibson’s essay again), and a death always weighs the same, whether you’re killed randomly by criminals (as in the West) or unjustly and with malice by the government (as in Singapore), et cetera, et cetera… It’s, in the end, not very compatible with the things that liberals OR classical conservatives love and hate. Mere safety and prosperity are not the only things a society can strive for.
Yes. But these are incredibly important things to hundreds of millions of people alive today drowning in violence, disease and famine. What do spoiled first world preferences count against such multitudes?
And you know what, I think 70% of people alive today in the West wouldn’t in practice much miss a single thing you mention, though they might currently say or think they would.
There’s a threshold where violence, disease and hunger stop being disastrous in our opinion (compare e.g. post-Soviet Eastern Europe to Africa), and that threshold, as we can see, doesn’t require brutal authoritarianism to maintain, or even to achieve. Poland transitioned to a liberal democracy directly after the USSR fell, although its economy was in shambles (and it had little experience of liberalism and democracy before WW2); Turkey’s leadership became softer after Ataturk achieved his primary goals of modernization; and so on. There’s a difference between a country being a horrible hellhole and merely lagging behind in material characteristics; the latter is, to me, an acceptable cost for attempting liberal policies. I accept that the former might require harsh measures to overcome, but I’d rather see those measures taken by an internally liberal colonial power (like the British Empire) than by a local regime.
The actual real people living there, suppose you could ask them: which do you think they would choose? And don’t forget those are mere stated preferences, not revealed ones.
If you planted Singapore on their borders wouldn’t they try to move there?
Sure, Singapore is much better than Africa; I never said otherwise! However, if given choice, the more intelligent Africans would probably be more attracted to a Western country, where their less tangible needs (like the need for warm fuzzies) would also be fulfilled. Not many Singaporeans probably would, but that’s because the Singaporean society does at least as much brainwashing as the Western one!
I don’t understand why you think “warm fuzzies” are in greater supply in London than in Singapore. They are both nice places to live, or can be, even in their intangibles. London-brainwashing is one way to inoculate yourself against Singapore-brainwashing, but perhaps there is another way?
Have you been to Singapore for any amount of time? I haven’t (my dad had, for a day or so, when he worked on a Soviet science vessel), but I trust Gibson and can sympathize with his viewpoint. At the very least I observe that it does NOT export culture or spread memes. These are not the signs of a vibrant and sophisticated community!
What could you mean by this that isn’t trivially false?
I haven’t read the Gibson article (but I will). I know that “disneyland” and “the death penalty” are both institutions despised by a certain cohort, but they are not universally despised, and their admirers are not all warmfuzzophobic psychos. Artist-and-writer types don’t flock to Singapore, but they don’t flock to Peoria, Illinois either, do they?
Downvoted without hesitation.
If you have the unvoiced belief that cultural products (especially high-quality ones) and memes are created by some specific breed of “artist-and-writer types” (wearing scarves and being smug all the time, no doubt!), then I’d recommend purging it, seeing as it suggests a really narrow view of the world. A country can have a thriving culture not because artistic people “flock” there, but because they are born there, given an appropriate education and allowed to interact with their own roots and community!
By your logic, “artist-and-writer types” shouldn’t just not flock to, but actively flee the USSR/post-Soviet Russia. And indeed many artists did, but enough remained that most people on LW who are into literature or film can probably name a Russian author or Russian movie released in the last half-century. Same goes for India, Japan, even China and many other poor and/or un-Westernized places!
Notice how this more or less refutes the argument you tried to make in the grandparent.
I’m not making the argument that liberal democracy directly correlates to increasing the cultural value produced. Why else would I defend Iran in that particular regard? No, no, the object of my scorn is technocracy (at least, human technocracy) and I’m even willing to tolerate some barbarism rather than have it spread over the world.
What definition of technocracy are you using that excludes the USSR and India before its economic liberalization?
You seem to have read some hostility towards artists and writers into my comment, probably because of “types” and “flock”? These are just writing tics, I intended nothing pejorative.
I hold no such belief, and I’m glad you don’t either. I only want to emphasize my opinion that Singapore does have a thriving culture, even if it does not have a thriving literary or film industry. But since you admit you don’t know a lot about it I’m curious why you have so much scorn for the place? A city can have something to recommend itself even if it hasn’t produced a good author or a good movie.
In short, well, yeah, I hold more “formal” and “portable” culture such as literature, music or video to have relatively more value than the other types of “culture”, such as local customs and crafting practices and such—which I assume you meant by “thriving culture” here. All are necessary for harmonious development, but I’d say that e.g. a colorful basket-weaving tradition in one neighborhood which is experienced/participated in by the locals is not quite as important and good to have as an insightful and entertaining story, or a beautiful melody—the latter can still have relevance continents or centuries apart.
Some African tribe can also have a thriving culture like that, but others can’t experience it without being born there, it can be unsustainable in the face of technical progress, it can interfere with other things important for overall quality of life (trusting a shaman about medicine can be bad for your health), etc. Overall, you probably get what I’m talking about.
Sure, that’s biased and un-PC in a way, but that’s the way that I see the world.
(I don’t have any scorn for Singapore as a nation and a culture, I just don’t care much for a model of society imposed upon it by the national elites in the 20th century that, unlike broadly similar societies in e.g. Japan or even China, doesn’t seem to produce those things I value. Even if its GDP per capita is now 50% or so higher than somewhere else. Heck, even Iran—a theocracy that’s not well-off and behaves rather irrationally—has been producing acclaimed literature and films, despite censorship.)
It seems to me that if you are talking about artistic achievements that have stood the test of centuries, then you are talking almost exclusively about the west, which I agree is utterly dominant in cultural exports. What I have in mind when I say “Singapore culture is thriving” is that it’s a city filled with lovely people going about their business. You could appreciate Singapore culture because you find muslim businessmen or guest worker IT types agreeable—maybe you like their jokes. You could hate Singapore culture if you instead found muslim businessmen to be vacant and awful. But couldn’t we allow that the intelligent african that kicked the discussion off might have either taste? Then we should find out what his tastes are before recommending that he choose London over Singapore.
I read “Disneyland with the death penalty.” Gibson’s not a very good travel-writer, there’s hardly any indication in the article that he spoke to anyone while he was there.
You’re not being fair. Singaporeans would have surely produced something to your tastes, if there were a billion of them and their country were two thousand years old.
I would like seeing comments on Gibson’s article from Singaporeans, including ex-pat Singaporeans.
Konkvistador’s point is that third world countries attempting to imitate western countries haven’t had much success.
When Turkey was modernizing it sure as heck was looking towards Europe for examples, it just didn’t implement democratic mechanisms straight away and restricted religious freedom. And if you look at Taiwan, Japan, Ghana, etc… sure, they might be ruled by oligarchic clans in practice, but other than that [1] they have much more similarities than differences with today’s Western countries! Of course a straight-up copy-paste of institutions and such is bound to fail, but a transition with those institutions, etc in mind as the preferred end state seems to work.
[1] Of course, Western countries are ruled by what began as oligarchic clans too, but they got advanced enough that there’s a difference. And, for good or ill, they are meritocratic.
I’m not familiar with Ghana, but both Japan and Taiwan had effectively one-party systems while modernizing.
I don’t care all that much about political democracy; what I meant is that Japan, India or, looking at the relative national conditions, even Turkey did NOT require some particular ruthlessness to modernize.
edit: derp
Could you explain the meaning of this sentence, please? I’m not sure I have grasped it correctly. To me it sounds like you are saying that there was no ruthlessness involved in Atatürk’s modernizing reforms. I assume that’s not the case, right?
Compared to China or Industrial Revolution-age Britain? Hell no, Ataturk pretty much had silk gloves on. At least, that’s what Wikipedia tells me. He didn’t purge political opponents except for one incident where they were about to assassinate him; he maintained a Western facade over his political maneuvering (taking pages from European liberal nationalism of the previous century); etc.
To the extent that this is a discussion of quality of life and the attractiveness of a country, as opposed to what is strictly speaking necessary for development, it’s worth remembering the Armenian genocide.
There’s no evidence that Ataturk was more complicit in that than, say, many respected public servants in 50s-60s Germany were complicit in the Holocaust. Nations just go insane sometimes, and taboos break down, and all that. It takes a hero to resist.
I feel pretty confident that Niall Ferguson, in his The War of the World, claims that Ataturk directly oversaw at least one massacre; I don’t have my copy on hand, however. Also, the Armenian National Institute claims that Ataturk was “the consummator of the Armenian Genocide.”
Also, Israel Charny (the founder of the International Association of Genocide Scholars) says:
Really, Ataturk was less harsh than Industrial Revolution-age Britain? I find this highly unlikely (unless you’re talking about their colonial practices, in which case the Armenian genocide is relevant). I think the reason you’re overestimating the relative harshness of Britain is that Britain had more freedom of speech than other industrializing nations, and thus its harshness (such as it was) is better documented.
http://en.wikipedia.org/wiki/Enclosure
http://en.wikipedia.org/wiki/Riot_Act
http://en.wikipedia.org/wiki/Peterloo_Massacre
http://en.wikipedia.org/wiki/Great_Famine_%28Ireland%29
http://en.wikipedia.org/wiki/Industrial_Revolution#Child_labour
http://en.wikipedia.org/wiki/Opposition_to_the_Poor_Law
http://www.victorianweb.org/history/workers1.html
http://www.victorianweb.org/history/workers2.html
(That’s just after a fifteen-minute search. By the way, haven’t you read Dickens? He gives quite a vivid contemporary account of social relations, although dramatized.)
Are you claiming that similar and worse things didn’t happen in Turkey?
Let me get this straight: you’re trying to argue that Britain was harsh because some people expressed opposition to a law you like?
Yes, that’s what I meant by Britain’s harshness (such as it was) being better documented thanks to its freedom of speech.
With the exception of the Armenian genocide (which is comparable in vileness to many things, including the actions of that wonder of private enterprise, the East India Company) - yes. Not during the late 19th and 20th centuries, I mean. Turkish landlords might’ve been feudal lords, but they didn’t outright steal the entirety of their tenants’ livelihood from under them.
The other way around! Many respected people hated and denounced it so much, it famously prompted Dickens to write Oliver Twist.
“The blogosphere overflows with Google Pundits; those who pooh-pooh, with a few search queries, an argument that runs counter to their own ideological assumptions, usually regarding a subject with which they possess only a passing familiarity.” It always gets my goat when the other guy does it.
I knew perfectly well about all of those except the Great Famine before searching, thank you very much! (I used to think there was only one Irish famine.) That’s why I felt confident in saying that 20th century Turkey was not as bad! “Fifteen-minute search” referred to a search for articles to show in support of my argument, not an emergency acquisition of knowledge for myself.
Taboo ‘ruthlessness’. For example Japan was certainly ruthless while modernizing by any reasonable definition.
It didn’t fully come into the “Universalist” sphere, ideologically and culturally, until its defeat in WW2, and the most aggressive and violent of its actions were committed in a struggle for expansion against Western dominance.
Konkvistador’s argument would be that it wouldn’t have been able to modernize nearly as effectively if it had come into the “Universalist” sphere before industrializing.
Maybe, I don’t know. On the other hand, maybe it would’ve avoided conquest and genocide if it had come into that sphere before industrializing.
Or maybe my premise above is wrong and its opening in the Meiji era did in fact count as contact with “Universalism”—note that America and Britain’s influence had been considerable there, and Moldbug certainly says that post-Civil War U.S. and post-Chartist Britain (well, he says post-1689, but the Chartist movement definitely was a victory for democracy[1]) were dominated by hardcore Protestant “Universalism”.
[1] Although its effects were delayed by some 20 years.
You seem to have an overly romantic view of criminals if you think they never kill with malice.
Heck, when the government doesn’t keep them in check, criminal gangs operate like mini-governments that are much worse in terms of warm fuzzies than even Singapore.
In the West they operate more or less like wild animals.
Um, no.
Actually, Moldbug’s diagnosis does provide decent predictive power: in the West, at least, Whig history shall continue. The left shall continue to win nearly all battles over what the stated values and norms of our society should be (at least outside the economic realm).
Naturally, Whig history makes the same prediction of itself, but the model it uses to explain itself seems built more for a moral universe than the one we inhabit. Not only that, I find the stated narrative of Whig history has some rather glaring flaws. MM’s theories win in my mind simply because they seem an explanation of comparable or lower complexity in which I so far haven’t found comparably problematic flaws.
Yes, and notice that unlike Mubarak and Gaddafi, who both (at least partially) cozied up to America, Assad is still in charge of Syria.
The prediction Moldbug made was “no civil war in Syria,” not that there would be a civil war but that Assad would manage to endure it.
Indeed, in the post I link to, Mencius Moldbug seemed to be predicting that Qaddafi would endure the civil war too, as Moldbug made that post at a point when the war was turning in Qaddafi’s favour, and he wrongly predicted that the West would not intervene to perform airstrikes.
So what exactly did he predict correctly?
Not proven. It seems to me that people wildly overdo even the prejudices they have evidence for, so we don’t know how much is lost due to excessive prejudice compared to how much is lost due to insufficient prejudice.
My impression is that we aren’t terribly good yet at understanding how traits which involve many genes play out, whether political correctness is involved or not.
Very true. I think most HBD proponents are somewhat overconfident of their conclusions (though most of them seem more likely than not). But what I think he was getting at is that we would have great difficulty acknowledging if it was so and that any scientist that wanted to study this is in a very rough spot.
Unlike, say, the promotion of the concept of human-caused climate change, which has the support of at least the educated classes, it may be impossible for our society to assimilate such information. It seems more likely that they would rather discredit genetics as a whole, or perhaps psychometrics, or claim the scientists are faking this information because of nefarious motives. This suggests there exists a set of scientific knowledge that our society is unwilling or incapable of assimilating and using in the manner one would expect of a sane civilization.
We don’t know what we don’t know; we do know that we simply refuse to know some things. How strong might our refusal be for some elements of the set? What if we end up killing our civilization because of such a failure? Or just waste lives?
I don’t know if you could get away with studying the sort of thing you’re describing if you framed it as “people who are good at IQ tests” or “people who have notable achievements”, rather than aiming directly at ethnic/racial differences. After all, the genes and environment are expressed in individuals.
It’s conceivable but unlikely that the human race is at risk because that one question isn’t addressed.
I think I didn’t do a good job of writing the previous post. I was trying to say that regardless of what the truth is on that one question (and I am uncertain on it, more so than a few months ago), it demonstrates that there are questions we as a society can’t deal with.
I wasn’t saying that not understanding the genetic basis of intelligence is a civilization killer (I didn’t mention species extinction, though that is possible as well), although that in itself is plausible if the various people warning about dysgenics are correct; rather, I was saying that future such questions may be.
I argued that since reality is entangled and our ideology has no consistent relationship with reality we will keep hitting on more and more questions of this kind (ones that our society can’t assimilate) and that knowing the answer to some such questions may turn out to be important for future survival.
A good hypothetical example is a very good theory on the sociology of groups or ethics that makes usable testable predictions, perhaps providing a new perspective on politics, religion and ideology or challenging our interpretation of history. It would be directly relevant to FAI yet it would make some predictions that people will refuse to believe because of tribal affiliation or because it is emotionally too straining.
Sorry—species extinction was my hallucination.
Dysgenics is an interesting question—what do we need to be adapting to?
I think this statement is too strong. Our ideology doesn’t have a 100% consistent relationship with reality, true, but that’s not the same as 0%.
What, sort of like Hari Seldon’s psychohistory? Regardless of whether our society can absorb it or not, is such a thing even possible? It may well be that group behavior is ultimately so chaotic that predicting it with that level of fidelity will always be computationally prohibitive (unless someone builds an Oracle AI, that is). I’m not claiming that this is the case (since I’m not a sociologist), but I do think you’re setting the bar rather high.
That hasn’t stopped us from doing incredible feats of artificial selection using phenotype alone. You can work faster and better the more you understand a system on the genetic level, but it’s hardly necessary.
I agree, and have for some time; I didn’t mean to imply otherwise. This especially is, I think, terribly important:
But currently there is nothing remotely approaching an actionable political plan, so I advocated doing what little good one can despite Cryptocalvinism’s iron grasp on the minds of a large fraction of mankind. As Moldbug says, Universalism has no consistent relation to reality. A truly horrifying description of reality, if it is accurate, since existential risk reduction will eventually become entangled with some ideologically charged issue or taboo.
I wish I could be hopeful, but my best estimate is that humanity is facing a no-win scenario here.
Another thing I’d like to ask you! What are those bad things in your estimate? Or, rather, what areas are we talking about? Are you mainly concerned with censorship, academic dishonesty, bad prediction-making, other theory-related flaws? Or do you find some concrete policy really awful for those epistemic reasons, like state welfare programs, ideological pressure on illiberal regimes or immigration from poor countries? (I chose those examples because I’m in favor of all three, with caveats.)
I know you’re against universal suffrage, but that’s more or less meta-level; is there something you really loathe that directly concerns daily life, its quality, comfort and freedoms? Of course, I know about the policy preferences Mencius himself draws from his doctrine, but his beliefs are… idiosyncratic: e.g. I don’t think you’d agree with him that selling oneself and one’s future children into slavery should be at all acceptable or tolerated.
That’s more than I’ve managed to get from my reading of him. I get no picture from his writings about what he wants life to be like—“daily life, its quality, comfort and freedoms”—under his preferred regime, only about what he doesn’t want life to be like under the current regimes.
True, it’s in bits and pieces; but see e.g. the Patchwork series and try some other posts at random.
Basically, a good example of his preferences is the “total power, no influence or propaganda” model of Patchwork; in his own words, the Sovereign’s government wouldn’t censor dissenters because it has nothing to fear from them. Sure, I strongly doubt it would work that way, even with a perfectly rational sovereign (the blog post linked to above provides some decent criticism of that from an anarchist POV). But we nonetheless can conclude that MM would like a comfortable, rich society with liberal mores (although he does all the conservative elderly grumbling about the supposed irresponsibility and flighty behavior of Westerners today [1]) where he wouldn’t ever have to worry about tribal power games or such—enforced with an iron fist, for selfish reasons of productivity and public image, and totally un-hypocritical about that.
He’s okay with some redistribution of wealth (the sovereign giving money to private charities it finds worthy, which, being driven mainly by altruism, automatically care for everyone better than a disinterested bureaucracy—again, I’m a little skeptical).
Another thing he likes to say is that the capacity for violence within society should be supremely concentrated and overwhelming, and then the rational government supposedly wouldn’t have to actually use it.
And then there are the totally contrarian things like his tolerance for indentured servitude on ideological grounds (look up his posts on “pronomianism”), which, along with his less disagreeable opinions, could well stem from his non-neurotypical (I take Konkvistador’s word, and my impressions) wiring.
[1] When he repeats some trite age-old bullshit about “declining personal morality”—while cheering for no-holds-barred ruthless utilitarianism—that’s when I tolerate him least.
There’s an important question here: WHY do you think people dislike that so much that they’re willing to subvert entire fields of knowledge to censor those inquiries? Please ponder that carefully and answer without any mind-killed screeds, ok?
(I’m not accusing you in advance, it’s just that I’ve read about enough such hostile denunciations from the “Internet right” who literally say that “Universalists/The Left/whoever” simply Hate Truth and like to screw with decent society. Oh, and the “Men’s Rights” crowd often suggests that those who fear inequality like that just exhibit pathetic weak woman-like thinking that mirrors their despicable lack of masculinity in other areas. And Cthulhu help you if you are actually a woman who thinks like that! Damn, I can’t stand those dickheads.)
Of course, I’d like others here to also provide their perspective on probable reasons for such behavior! Don’t pull any punches; if it just overwhelmingly looks like people with my beliefs are underdeveloped mentally and somewhat insane, I’ll swallow that—but avoid pettiness, please.
After reading that sentence, I expected some rather radical eugenics advocacy. Then I followed that link and saw that all those suggestions (except maybe for cloning, but we can hardly know about that in advance) are really “nice” and inoffensive. Seriously, I think that if even I, who’s pretty damn orthodox and brainwashed—a dyed-in-the-wool leftist, as it is—haven’t felt a twinge, then you must be overestimating how superstitious and barbaric an educated Universalist is in regards to that problem.