Hmm, on second thought, I added a [/parody] tag at the end of my post—just in case...
ZoltanBerrigomo
There is no such thing as strength: a parody
I think the lumping of various disciplines into “science” is unhelpful in this context. It is reasonable to trust the results of the last round of experiments at the LHC far more than the occasional psychology paper that makes the news.
I’ve not seen this distinction made as starkly as I think it really needs to be made—there is a lot of difference between physics and chemistry, where one can usually design experiments to test hypotheses; geology and atmospheric science, where one mostly fits models to whatever data happens to be available; and psychology, where the results of experiments seem to be very inconsistent and publication bias is a major source of false research results.
I’m not sure I understand your criticism. I don’t mean this in a passive aggressive sense, I really do not understand it. It seems to me that “the stupid,” so to speak, perfectly carries over between the parody and the “original.”
A. Imagine I visit country X, where everyone seems to be very buff. Gyms are everywhere, the parks are full of people practicing weight-lifting, and I notice people carrying heavy objects with little visible effort. When I return home, I remark to a friend that people in X seem to be very strong.
My friend gives me a glare. “What is strength, anyway? How would you define it? By the way, don’t you know the concept has an ugly history? Also, have you seen this article about the impossibility of a culture-free measure of strength? Furthermore, don’t you know that there is more variation among strong people and among weak people than between the two groups?”
I listen to this and think to myself that I need to find some new friends.
B. Imagine I visit country X, where almost everyone seems to be of race Y. Being somewhat uneducated, I was unaware of this. When I return home, I ask a friend whether he knew that people from X tend to be of race Y.
My friend gives me a glare. “How do you define race anyway? Don’t you know the concept has an ugly history? You know, it is a fact that there is more variation within races than between them.”
I listen to this and think to myself that I need to find some new friends.
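The “variation within vs. between” line the friend cites is, at bottom, a statistical claim about variance decomposition. A toy sketch (all numbers made up purely for illustration) shows what the underlying Lewontin-style decomposition actually computes: total variance splits exactly into a within-group part and a between-group part, and the claim is just a comparison of those two parts.

```python
# Toy variance decomposition: two made-up groups of measurements.
groups = {
    "A": [10.0, 14.0, 18.0, 22.0],
    "B": [12.0, 16.0, 20.0, 24.0],
}

all_values = [v for vals in groups.values() for v in vals]
grand_mean = sum(all_values) / len(all_values)

# Between-group variance: spread of the group means around the grand mean.
between = sum(
    len(vals) * (sum(vals) / len(vals) - grand_mean) ** 2
    for vals in groups.values()
) / len(all_values)

# Within-group variance: squared deviations from each group's own mean.
within = sum(
    (v - sum(vals) / len(vals)) ** 2
    for vals in groups.values()
    for v in vals
) / len(all_values)

total = sum((v - grand_mean) ** 2 for v in all_values) / len(all_values)

# Law of total variance: total = within + between, exactly.
assert abs(total - (within + between)) < 1e-9

print(within > between)  # with these numbers, within-group variation dominates
```

Note that the comparison, whichever way it comes out, says nothing about whether the group labels (or the measured quantity) are coherent concepts—which is the point of the parody.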
C. Imagine I visit country X, where intellectual pursuits seem highly valued. People play chess on the sidewalks and the coffee shops seem full of people reading the classics. The front pages of newspapers are full of announcements of the latest mathematical breakthroughs. Nobel/Abel prize announcements draw the same audience on television as the Oscars do in my own country. Everyone I converse with is extremely well-informed and offers interesting opinions that I had not thought of before.
When I return home, I remark to a friend that people in X seem to be very smart.
My friend gives me a glare. “How would you define intelligence anyway? Don’t you know the concept has an ugly history? Have you seen this article about the impossibility of a universal, culture-free intelligence test?”
I listen to this and...
It seems to me the three situations are exactly analogous. Am I wrong?
Side-stepping the issue of whether rationalists actually “win” or “do not win” in the real world, I think a priori there are some reasons to suspect that people who exhibit a high degree of rationality will not be among the most successful.
For example: people respond positively to confidence. When you make a sales pitch for your company/research project/whatever, people like to see that you really believe in the idea. Often, you will win brownie points if you believe in whatever you are trying to sell with nearly evangelical fervor.
One might reply: surely a rational person would understand the value of confidence and fake it as necessary? Answer: yes to the former, no to the latter. Confidence is not so easy to fake; people with genuine beliefs either in their own grandeur or in the greatness of their ideas have a much easier time of it.
Robert Kurzban’s book Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind is essentially about this. The book may be thought of as a long-winded answer to the question “Why aren’t we all more rational?” Rationality skills seem kinda useful for bands of hunter-gatherers to possess, and yet evolution gave them to us only in part. Kurzban argues, among other things, that those who are able to genuinely believe certain fictions have an easier time persuading others, and are therefore likely to be more successful.
I would bet the opposite on #4, but that is beside the point. On #4 and #6, the point is that even if everything I wrote was completely correct—e.g., if the scientific journals were actually full of papers to the effect that there is no such thing as a universal test of strength because people from different cultures lift things differently—it would not imply there is no such thing as strength.
On #5, the statement that race is a social construct is implicit. Anyway, as I said in the comment above, there are a million similar statements that are being made in the media all the time, and I could have easily chosen to cite one that would have explicitly said race is a social construct. For example:
The writer is a law professor, writing in the NY times; she tells us that “race is a social construct” as “there is no gene or cluster of genes common to all blacks or all whites” and explicitly draws the conclusion that race “is not real in a genetic sense.”
...which is a synthesis of arguments 1 & 3 & 5 in my post. I suppose I could read the author’s statement as true but trivial (she is, of course, right—race, strength, height, and all other concepts in our vocabulary are social constructs), but that does not seem to be the intended reading. I could also explicate her position beginning with the words “But what she really meant by that is...” but that also strikes me as the wrong response to a fundamentally confused argument.
A very interesting and thought-provoking post—I especially like the Q & A format.
I want to quibble with one bit:
How can I tell there aren’t enough people out there, instead of supposing that we haven’t yet figured out how to find and recruit them?
Basically, because it seems to me that if people had really huge amounts of epistemic rationality + competence + caring, they would already be impacting these problems. Their huge amounts of epistemic rationality and competence would allow them to find a path to high impact; and their caring would compel them to do it.
There is an empirical claim about the world that is implicit in that statement, and it is this claim I want to disagree with. Namely: I think having a high impact on the world is really, really hard. I would suggest it requires more than just rationality + competence + caring; for one thing, it requires a little bit of luck.
It also requires a good ability to persuade others who are not thinking rationally. Many such people respond to unreasonable confidence, emotional appeals, salesmanship, and other rhetorical tricks which may be more difficult to produce the more you are used to thinking things through rationally.
First, only some of the attacks I cited were brief and sketchy; others were lengthier. Second, I have cited a few such attacks due to time and space constraints, but in fact they exist in great profusion. My personal impression is that the popular discourse on intelligence and race is drowning in confused rhetoric along the lines of what I parodied.
Finally, I think the last possibility you cite is on point—there are many, many people who are not thinking very clearly here. As I said, I think these people also have come to dominate the debate on this subject (at least in terms of what one is likely to read about in the newspaper rather than a scientific venue). Instead of ignoring them and focusing on people who make more thoughtful and defensible variations of these points, I think some kind of attempt at refutation is called for.
1+1=2 is true by definition of what 2 means
Russell and Whitehead would beg to differ.
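To make the point concrete (a sketch, using the Lean proof assistant; the exact library names are those of Lean 4’s core `Nat` type): in a formal development, 1 + 1 = 2 is a theorem that follows from the definitions of the numerals and of addition—it has to be derived, even if the derivation is short. Famously, in Russell and Whitehead’s Principia Mathematica the corresponding fact arrives only after hundreds of pages of groundwork.

```lean
-- In Lean 4, numerals desugar to Peano constructors, and addition is
-- defined by recursion; the equality then follows by unfolding:
example : 1 + 1 = 2 := rfl

-- The same fact spelled out with the constructors themselves:
example : Nat.succ Nat.zero + Nat.succ Nat.zero
        = Nat.succ (Nat.succ Nat.zero) := rfl
```

So “true by definition of what 2 means” elides a real (if small) deductive step: the definitions of 2 and of + must be combined before the equation falls out.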
Not sure...I think confidence, sales skills, and ability to believe and get passionate about BS can be very helpful in much of the business world.
is very, very difficult not to give a superintelligence any hints of how the physics of our world work.
I wrote a short update to the post which tries to answer this point.
Maybe they notice minor fluctuations in the speed of the simulation based on environmental changes to the hardware
I believe they should have no ability whatsoever to detect fluctuations in the speed of the simulation.
Consider how the world of World of Warcraft appears to an orc inside the game. Can it tell the speed at which the hardware is running the game?
It can’t. What it can do is compare the speed of different things: how fast does an apple fall from a tree vs how fast a bird flies across the sky.
The orc’s inner perception of the flow of time is based on comparing these things (e.g., how fast an apple falls) to how fast its simulated brain processes information.
If everything is slowed down by a factor of 2 (so you, as a player, see everything twice as slow), nothing appears any different to a simulated being within the simulation.
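The argument above can be sketched in code. In this toy simulation (all quantities and names are made up for illustration), the only measurement available to an in-world observer is a ratio of simulated quantities; the hardware’s wall-clock speed per tick never enters it:

```python
def run_simulation(n_ticks, host_seconds_per_tick):
    """Run a toy world for n_ticks. host_seconds_per_tick is how long
    the hardware takes per tick -- invisible to in-world observers."""
    apple_distance = 0.0   # a falling apple, accelerating each tick
    apple_speed = 0.0
    bird_distance = 0.0    # a bird flying at constant in-world speed
    for _ in range(n_ticks):
        apple_speed += 9.8        # in-world units per tick
        apple_distance += apple_speed
        bird_distance += 12.0
    wall_clock = n_ticks * host_seconds_per_tick
    # The only thing an in-world observer can measure: one simulated
    # quantity relative to another.
    observed_ratio = apple_distance / bird_distance
    return observed_ratio, wall_clock

fast = run_simulation(100, host_seconds_per_tick=0.01)
slow = run_simulation(100, host_seconds_per_tick=0.02)  # hardware 2x slower

assert fast[0] == slow[0]       # in-world observation is identical
assert slow[1] == 2 * fast[1]   # only the wall-clock duration differs
```

Slowing the hardware changes only the wall-clock column, which no in-world measurement depends on—which is the orc’s predicament.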
Sometimes you’re dealing with a domain where explicit reasoning provides the best evidence, sometimes with a domain where emotions provide the best evidence.
And how should you (rationally) decide which kind of domain you are in?
Answer: using reason, not emotions.
Example: if you notice that your emotions have been a good guide in understanding what other people are thinking in the past, you should trust them in the future. The decision to do this, however, is an application of inductive reasoning.
Sure, you can work towards feeling more strongly about something, but I don’t believe you’ll ever be able to match the emotional fervor the partisans feel -- I mean here the people who stew in their anger and embrace their emotions without reservations.
As a (rather extreme) example, consider Hitler. He was able to sway a great many people with what were appeals to anger and emotion (though I acknowledge there is much more to the phenomenon of Hitler than this). Hypothetically, if you were a politician from the same era, say a rational one, and you understood that the way to persuade people is to tap into the public’s sense of anger, I’m not sure you’d be able to match him.
To the extent that it does require luck, that simply means it’s important to have more people with rationality + competence + caring. If you have many people, some will get lucky.
The “little bit of luck” in my post above was something of an understatement; actually, I’d suggest it requires a lot of luck (among many other things) to successfully change the world.
I think you might be pattern matching to straw-vulcan rationality, which is distinct from what CFAR wants to teach.
Not sure if I am, but I believe I am making a correct claim about human psychology here.
Being rational means many things, but surely one of them is making decisions based on some kind of reasoning process as opposed to recourse to emotions.
This does not mean you don’t have emotions.
You might, for example, have very strong emotions about matters pertaining to fights between your perceived in-group and out-group, but you try to put those aside and make judgments based on some sort of fundamental principles.
Now if, in the real world, the way you persuade people is by emotional appeals (and this is at least partially true), this will be more difficult the more you get in the habit of rational thinking, even if you have an accurate model about what it takes to persuade someone -- emotions are not easy to fake and humans have strong intuitions about whether someone’s expressed feelings are genuine.
For those people who insist, however, that the only thing that is important is that the theory agrees with experiment, I would like to make an imaginary discussion between a Mayan astronomer and his student...
These are the opening words of a ~1.5 minute monologue in one of Feynman’s lectures; I won’t transcribe the remainder but it can be viewed here.
I’m very fond of this bit by Robin Hanson:
A wide range of topics come up when talking informally with others, and people tend to like you to express opinions on at least some substantial subset of those topics. They typically aren’t very happy if you explain that you just adopted the opinion of some standard expert source without reflection, and so we are encouraged to “think for ourselves” to generate such opinions.
I confess that I have not read much of what has been written on the subject, so what I am about to say may be dreadfully naive.
A. One should separate the concept of effective altruism from the mode-of-operation of the various organizations which currently take it as their motto.
A.i. Can anyone seriously oppose effective altruism in principle? I find it difficult to imagine someone supporting ineffective altruism. Surely, we should let our charity be guided by evidence, randomized experiments, hard thinking about tradeoffs, etc etc.
A.ii. On the other hand, one can certainly quibble with what various organization are now doing. Such quibbling can even be quite productive.
B. What comes next should be understood as quibbles.
B.i. As many others have pointed out, effective altruism implicitly assumes a set of values. As Daron Acemoglu asks (http://bostonreview.net/forum/logic-effective-altruism/daron-acemoglu-response-effective-altruism), “How much more valuable is it to save the life of a one-year-old than to send a six-year-old to school?”
B.ii. I think GiveWell may be insufficiently transparent about such things. For example, its explanation of criteria at http://www.givewell.org/criteria does not give a clear-cut explanation of how it makes such determinations.
Caveat: this is only based on browsing the GiveWell webpage for 10 minutes. I’m open to being corrected on this point.
B.iii. Along the same lines I wonder: had GiveWell, or other effective altruists, existed in the 1920s, what would they say about funding a bunch of physicists who noticed some weird things were happening with the hydrogen atom? How does “develop quantum mechanics” rate in terms of benefit to humanity, compared to, say, keeping thirty children in school for an extra year?
B.iv. Peter Singer’s endorsement of effective altruism in the Boston Review (http://bostonreview.net/forum/peter-singer-logic-effective-altruism ) includes some criticism of donations to opera houses; indeed, in a world with poverty and starvation, surely there are better things to do with one’s money? This seems endorsed by GiveWell who list “serving the global poor” as their priority, and in context I doubt this means serving them via the production of poetry for their enjoyment.
I do not agree with this. Life is not merely about surviving; one must have something to live for. Poetry, music, novels—for many people, these are a big part of what makes existence worthwhile.
C. Ideally, I’d love to see the recommendations of multiple effective altruist organizations with different values, all completely transparent about the assumptions that go into their recommendations. Could anyone disagree that this would make the world a better place?
A. I think at least some people do mean that concepts of intelligence and race are, in some sense, inherently meaningless.
When people say
“race does not exist because it is a social construct”
or that race does not exist because
“amount of variation within races is much larger than the amount of variation between races,”
I think it is being overly charitable to read that as saying
“race is not a scientifically precise concept that denotes intrinsic, context-independent characteristics.”
B. Along the same lines, I believe I am justified in taking people at their word. If people want to say “race is not a scientifically precise concept” then they should just say that. They should not say that race does not exist, and if they do say the latter, I think that opens them up to justifiable criticism.
See the reply I just wrote to gjm for an explanation of my motivations.
When I was writing this, I thought the intent to parody would be clear; surely no one could seriously suggest we have to strike strength from our dictionaries? I seem to have been way off on that. Perhaps that is a reflection on the internet culture at large, where these kinds of arguments are common enough not to raise any eyebrows.
Anyway, I went one step further and put “parody” in the title.
I was not trying to suggest that intelligence and strength are as alike as race and strength. Rather, I was motivated by the observation that there are a number of arguments floating around to the effect that,
A. Race doesn’t exist
B. Intelligence doesn’t exist.
and, actually, to a lesser extent,
C. Rationality doesn’t exist (as a coherent notion).
The arguments for A,B,C are often dubious and tend to overlap heavily; I wanted to write something which would show how flawed those arguments are through a reductio ad absurdum.
To put it another way, even if strength (or intelligence or race) really were an incoherent notion, none of the arguments 1-7 in my post establish that it is so. It isn’t that these arguments are wholly wrong—in fact, there is a measure of truth to each of them—but that they don’t suffice to establish the conclusion.