I hardly think many here would object to love, joy, and laughter being not outlived by the stars themselves: as you say, the critics are not dishonest. As steven points out, any disagreement would seem to stem from differing assessments of the probabilities of stagnation risk and existential risk. If the future is going to be dominated by a hard takeoff Singularity, then it is incredibly important to make sure to get that first AGI exactly, perfectly right at the expense of all else. If the future is to be one of “traditional” space colonization and catastrophic risk from AI, MNT, &c. is negligible, then it’s incredibly important to develop techs as quickly as possible. While the future does depend on what “we” decide to do now (bearing in mind that there is no unitary we), this is largely an empirical issue: how does the tech tree actually look? What does it take to colonize the stars? Is hard takeoff possible, and what would that take? &c. I think that these are the sorts of questions we need to be asking and trying to answer, rather than pledging ourselves to the “pro-safety” or “pro-technology” side. Since we all want more-or-less the same thing, it’s in all of our best interests to try to reach the most accurate conclusions possible.
Z_M_Davis
I sometimes suspect that mass institutionalized schooling is net harmful because it kills off personal curiosity and fosters the mindset that education necessarily consists of being enrolled in a school and obeying commands issued by an authority (as opposed to learners directly seeking out knowledge and insight from self-chosen books and activities). I say sometimes suspect rather than believe because my intense emotional involvement with this issue causes me to doubt my rationality: therefore I heavily discount my personal impressions on majoritarian grounds.
I don’t actually believe it as such, but I think J. Michael Bailey et al. are onto something.
I lean toward the politically correct side because it’s the side that [...]
Taboo side. Complex empirical issues do not have sides. Humans, for their own non-truth-tracking reasons, group into sides, but it’s not Bayesian, and it has never been Bayesian.
Or we think we group up into sides, but I’m not even sure that’s true. You write that the egalitarians are nuanced and present evidence, whereas the human biodiversity crowd (or whatever words you want to use) are just apologists for their favorite narrative, but there are a lot of people who have the exact opposite perspective: that the hbd-ers are honest and nuanced and the egalitarians are blinded by ideology. But in fact, there are no sides physically out there: rather, there are only various people who have studied various facets of the topic to various degrees and who believe and profess various things for various reasons. And this question of what various people believe is distinct from the question of what’s actually true.
I realize that this kind of aggressive reductionism isn’t very predictively useful—that indeed, I’m probably just a few steps above saying, “Well it’s all just quarks and leptons anyway.” But sometimes it is worth saying just that, if only to wrench ourselves free of this adversarial framing so that we can actually look at the data.
It’s [...] humane to assume
Humaneness is central to policy, but it should have nothing to do with our beliefs.
they have weak personalities or fall into the “beta male” category of weak, nerdy men who [...] don’t have the requisite greedy, self-interested [...] most people here don’t value social status enough and (especially the men) don’t value having sex with extremely attractive women that money and status would get them. [...] Essentially, too much Linux forums, not enough playboy is screwing you all over.
The utility function is not up for grabs. Why should we care about “success” if the price of “success” is being a greedy, self-interested asshole? You know, maybe some of us care about deep insights and meaningful, genuine relationships, which we value for their own sake. Maybe we don’t want to spend our days plotting how to grind the other guy’s face into the dust. Maybe we want the other guy to be happy and successful, because life is not a zero-sum game and our happiness does not have to come at the expense of anyone else. Tell us how to optimize for that. Don’t tell us that we’re nerds; we already knew that!
Rationalists should win, full stop and in full generality. Not “triumph over others in some zero-sum primate pissing contest,” win.
ADDENDUM: See my clarification below.
The most frequently useful thing I’ve gotten out of Overcoming Bias is not a technique or lesson so much as it is an attitude. It’s the most ridiculously simple thing of all: to be in the habit of actually, seriously asking: is (this idea) really actually true? You can ask anyone if they think their beliefs are true, and they’ll say yes, but it’s another thing to know on a gut level that you could just be wrong, and for this to scare you, not in the sense of “O terror!--if my cherished belief were false, then I could not live!” but rather the sense of “O terror!--my cherished belief could be false, and if I’m not absurdly careful, I could live my whole life and not even know!”
As soon as you start talking about your “honor as an aspiring rationalist”, you’re moving from the realm of rationality to ideology.
Well, sure, but the ideological stance is “You should care about rationality.” I should think that that’s one of the most general and least objectionable ideologies there is.
Like I said, I don’t think this question matters and I’m mostly indifferent to what the answer actually is. I’m just trying to protect the people who do care.
But I do care, and I no longer want to be protected from the actual answer. When I say that I speak from experience, it’s really true. There’s a reason that this issue has me banging out dramatic, gushy, italics-laden paragraphs on the terrible but necessary and righteous burden of relinquishing your cherished beliefs—unlike in the case of, say, theism, in which I’m more inclined to just say, “Yeah, so there’s no God; get over it”—although I should probably be more sympathetic.
So, why does it matter? Why can’t we just treat the issue with benign neglect, think of ourselves as strictly as individuals, and treat other people strictly as individuals? It is such a beautiful ideal—that my works and words should be taken to reflect only on myself alone, and that the words and works of other people born to a similar form should not be taken to reflect on me. It’s a beautiful ideal, and it seems like it should be possible to swear our loyalty to the general spirit of this ideal, while still recognizing that---
In this world, it’s not that simple. In a state of incomplete information (and it is not at all clear to me what it would even mean to have complete information), you have to make probabilistic inferences based on what evidence you do have, and to the extent that there are systematic patterns of cognitive sex and race differences, people are going to update their opinions of others based on sex and race. You can profess that you’re not interested in these questions, that you don’t know—but just the same, when you see someone acting against type, you’re probably going to notice this as unusual, even if you don’t explicitly mention it to yourself.
There are those who argue—as I used to argue—that this business about incomplete information, while technically true, is irrelevant for practical purposes, that it’s easy to acquire specific information about an individual, which screens off any prior information based on sex and race. And of course it’s true, and a good point, and an important point to bear in mind, especially for someone who comes to this issue with antiegalitarian biases, rather than the egalitarian biases that I did. But for someone with initial egalitarian biases, it’s important not to use it—as I think I used to use it—as some kind of point scored for the individualist/egalitarian side. Complex empirical questions do not have sides. And to the extent that this is not an empirical issue; to the extent that it’s about morality—then there are no points to score.
It gets worse—you don’t even have anywhere near complete information about yourself. People form egregiously false beliefs about themselves all the time. If you’re not ridiculously careful, it’s easy to spend your entire life believing that you have an immortal soul, or free will, or that the fate of the light cone depends solely on you and your genius AI project. So information about human nature in general can be useful even on a personal level: it can give you information about yourself that you would never have gotten from mere introspection and naive observation. I know from my readings that if I’m male, I’m more likely to have a heart attack and less likely to get breast cancer than would be the case if I were female, whereas this would not at all be obvious if I didn’t read. Why should this be true of physiology, but not psychology? If it turns out that women and men have different brain designs, and I don’t have particularly strong evidence that I’m an extreme genetic or developmental anomaly, then I should update my beliefs about myself based on this information, even if it isn’t at all obvious from the inside, and even though the fact may offend me and make me want to cry. For someone with a lot of scientific literacy but not as much rationality skill, the inside view is seductive. It’s tempting to cry out, “Sure, maybe ordinary men are such-and-this, and normal women are such-and-that, but not me; I’m different, I’m special, I’m an exception; I’m a gloriously androgynous creature of pure information!” But if you actually want to achieve your ideal (like becoming a gloriously androgynous creature of pure information), rather than just having a human’s delusion of it, you need to form accurate beliefs about just how far this world is from the ideal, because only true knowledge can help you actively shape reality.
It could very well be that information about human differences could have all sorts of terrible effects if widely or selectively disseminated. Who knows what the masses will do? I must confess that I am often tempted to say that I have no interest in such political questions—that I don’t know, that it doesn’t matter to me. This attitude probably is not satisfactory for the same sorts of reasons I’ve listed above. (How does the line go? “You might not care about politics, but politics cares about you”?) But for now, on a collective or political or institutional level, I really don’t know: maybe ignorance is bliss. But for the individual aspiring rationalist, the correct course of action is unambiguous: it’s better to know than to not know; it’s better to make decisions explicitly and with reason than to let your subconscious decide for you and for things to take their natural course.
Do please try to understand that for many men, lack of sex is sort of like missing your heroin dosage—at least that’s the metaphor Spider Robinson used. Anyone in this condition is probably going to go on about it, and if you’re not starving at the moment you should try to have a little sympathy.
Of course it is well known that men on average have a higher sex drive than women on average, but I think the analogy to drug addiction or starving is ridiculous hyperbole. For just one thing, starving people and heroin addicts do not have the option of simply learning to masturbate.
[notice how I objectified her there, leaving behind the language of a unified self or person in favour of a collection of mechanical motivations and processes whose dynamics are partially determined by evolutionary pressures, and what a useful exercise this can be for making sense of reality]
I still don’t think you understand what feminists mean by objectification. It’s not the same thing as cognitive reductionism, which I think hardly anyone here would object to. I mean, yes, minds are causal systems made of parts embedded in the universal laws of physics and can be understood as such. Everyone knows that!---and given that everyone knows that, you should be able to deduce that whatever it is people really mean when they criticize this objectification-thing, it has to be something other than cognitive reductionism.
Let me explain what I understand by objectification. So, even though (as everyone here already knows) everything that exists, exists within physics, we still find it useful and necessary to distinguish structures within physics which we think are conscious and intelligent (whatever it is we refer to with those words), which we call minds or people, and structures that are not, which we call objects. So when we express the proposition that objectification is unethical, we mean that we have special ethical standards for dealing with physical-structures-deemed-people that do not apply when dealing with physical-structures-deemed-objects. For example, in matters of sexual relations, you shouldn’t deceive people into doing things that they wouldn’t on reflection want to do if they were better informed; rather, when dealing with a person, you should take into account the desires, beliefs, and autonomy of that person, even though (as everyone already knows) none of these things are ontologically fundamental.
Now, perhaps you don’t hold this ethical standard yourself. In light of the terrible, horrible, no good, very bad truth, there’s probably not a whole lot feminists can do to talk you into it. But in order to have a sane discussion, you should at least understand what it is your fellow discussants actually believe. And I really don’t think you do.
What has been said on LW about seduction is the aggregate state of the evidence. The discussions about seduction on OB and LW are the most unbiased summary on the topic I know. Take an intersection of Robin’s signaling theory, Eliezer’s essays on gender, and the skeptical-empirical knowledge of pickup artists. That is the truth insofar approximable.
For one thing, no blog is large enough to contain the aggregate state of the evidence about anything. For another, don’t you suppose some women might know something about this topic that you and your sources have missed? It may help to meditate on “Reversed Stupidity is Not Intelligence”—even if some critics irrationally discount the domain knowledge of PUAs, this is no excuse for irrationally discounting the critics’ domain knowledge.
Now AFAICT you refuse to accept OB and LW as ‘extraordinary institutions’.
Argument screens off authority. I agree that this is a wonderful blog, but it doesn’t mean that you should expect people to just accept the majority opinion here simply on the grounds that it’s such a wonderful blog. Especially on a mind-killing topic like gender, about which I fear no one’s rationality can simply be trusted. The authority of biologists derives from massive amounts of empirical evidence and many years of intense study, and even then, I do not think you should automatically trust everything a biologist says about anything to do with biology; you may have domain knowledge of your own that bears on some particular question. A comment thread full of smart people who profess truthseeking has still less authority.
You can afford to do this because inaccurate beliefs may cost you little in this area.
Isn’t this a fully general counterargument? It might similarly be said that you can afford to hold the opinions you do because inaccurate beliefs may cost you little in this area. And it gets us nowhere, either way.
Why?
The current wording implicitly suggests that the normative human is sexually attracted to women, whereas in fact this is only true of approximately half the population. I understand that this interpretation is not what was explicitly intended, but clear language is important, especially if one is going to hold forth on “unconscious map computation”.
I sometimes wish that certain men would appreciate that not all men are like them—or at least, that not all men want to be like them—that the fact of masculinity is not necessarily something to integrate.
“Cinderella, dressed in yella / Had a theory she would tellya / How much evidence could she ignore / Before her listeners start to bore? / One, two, three—”
If it really, truly didn’t matter to us, then it wouldn’t be a sensitive subject in the first place. When someone makes a wildly inaccurate estimate of the price of tea in China, no one gets outraged or asserts in boldface that the price of tea in China really doesn’t matter.
This is probably a standard tactic, but maybe I can phrase it in helpful words.
My akrasia problems have gotten significantly better since I stopped thinking so much in terms of discipline and more in terms of not-being-stupid. One imagines that a race of expected utility maximizers would use the same word for I should and I want. If I think that I ought to do X, then I can just—do X, because I’ve decided that X is the right thing to do. It’s not a matter of forcing myself to do things that I don’t want to do (that would just be stupid; the entire point of instrumental rationality is to get us more of what we want); it’s a matter of wanting to do good things. Don’t raise the pressure; lower the resistance! Cf. “Inner Goodness.”
Of course I’m a human and it doesn’t really work that way, but I am doing ever so much better than I was this time last year. Because of this community, I’ve just been continually obsessing about rationality for the last year and a half, and I think I’ve finally just passed the threshold where it starts to yield practical benefits. However, I’m an unusual person along several dimensions and I’ve faced very strange personal circumstances in the past year and a half, so I don’t expect my experiences to generalize too much, in this domain or others.
Anyone got any other examples of things just about everyone here has seen the folly of, even though they’re widespread among otherwise-smart people?
Naïve free will, and moral realism. Related to religion, but, I think, distinct.
Shouldn’t this be “I’m unique [...]”?
One problem with trusting the experts rather than trying to think things through for yourself is that you need a certain amount of expertise just to understand what the experts are saying. The experts might be able to tell you that “all symmetric matrices are orthonormally diagonalizable,” and you might have perfect trust in them, but without a lot of personal study and inquiry, the mere words don’t help you very much.
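To unpack the example theorem for readers who do have some of the requisite background: the claim is the (real) spectral theorem—any real symmetric matrix can be written as Q D Qᵀ, where D is diagonal and Q has orthonormal columns. A quick numerical illustration (my own sketch, using NumPy; the library choice is incidental to the point about expertise):

```python
import numpy as np

# A randomly chosen real symmetric matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
A = (A + A.T) / 2  # symmetrize: A is now equal to its own transpose

# np.linalg.eigh is specialized for symmetric/Hermitian matrices:
# it returns real eigenvalues and an orthonormal eigenvector matrix Q.
eigenvalues, Q = np.linalg.eigh(A)

# Orthonormality of the eigenvectors: Q^T Q = I.
assert np.allclose(Q.T @ Q, np.eye(4))

# Diagonalization: A = Q diag(eigenvalues) Q^T.
assert np.allclose(Q @ np.diag(eigenvalues) @ Q.T, A)
```

Of course, this only illustrates the irony: checking one example in code is itself the kind of thing you can only do, and interpret, with a certain amount of prior study.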
Following Nominull and Furcas, I bite the third bullet without qualms for the perfectly ordinary obvious reasons. Once we know how much of what kinds of experiences will occur at different times, there’s nothing left to be confused about. Subjective selfishness is still coherent because you’re not just an arbitrary observer with no distinguishing characteristics at all; you’re a very specific bundle of personality traits, memories, tendencies of thought, and so forth. Subjective selfishness corresponds to only caring about this one highly specific bundle: only caring about whether someone falls off a cliff if this person identifies as such-and-such and has such-and-these specific memories and such-and-those personality traits: however close a correspondence you need to match whatever you define as personal identity.
The popular concepts of altruism and selfishness weren’t designed for people who understand materialism. Once you realize this, you can just recast whatever it was you were already trying to do in terms of preferences over histories of the universe. It all adds up to, &c., &c.
I count 6+ comments from others on meta-talk, 8+ down-mods, and 0 [sic] explanations for the errors in my solution. Nice work, guys.
If it is in fact the case that your complaints are legitimately judged a negative contribution, then you should expect to be downvoted and criticized on those particular comments, regardless of whether or not your solution is correct. There’s nothing contradictory about simultaneously believing both that your proposed solution is correct, and that your subsequent complaints are a negative contribution.
I don’t feel like taking the time to look over your solution. Maybe it’s perfect. Wonderful! Spectacular! This world becomes a little brighter every time someone solves a math problem. But could you please, please consider toning down the hostility just a bit? These swipes at other commenters’ competence and integrity are really unpleasant to read.
ADDENDUM: Re tone, consider the difference between “I wonder why this was downvoted, could someone please explain?” (which is polite) and “What a crock,” followed by shaming a counterfactual Wei Dai (which is rude).
(I trust I will be forgiven for the overwrought and repetitive prose that follows. In my defense, on this issue, I really do try to think in such terms, and arguably all this drama is a large part of why the method works as well as it does.)
My improvement program, which has been working fairly well so far, although I am still continually refining things as I will detail below, is based on the opposite principle. Rather than setting explicit measurable goals, I try to continually remind myself that every minute and every dime is precious, and every minute and every dime that you don’t spend doing the best thing you can possibly be doing is a mark of sin upon your soul, and furthermore that this is not some extremist dictate, but rather a tautology—that’s what the word “best” means: that which you should be doing. Rather than goals to satisfice, I want to have a utility function to maximize. I do not place myself under some dreaded burden to fulfill some oath: I’m just trying to not be stupid. There is no such thing as “leisure”—everything is booked under “Dayjob” or “Lifework” or “Education” or “Maintenance,” for every book that you read makes you stronger, every problem that you solve increases your beauty, every line that you write is another stitch in your ball gown. It is not: “Once I finish my homework, I can watch the teevee or play flash games on the internet.” As an autodidactic generalist, I either have no homework, or an infinite amount of homework, depending on how you want to phrase things. I don’t want to watch the goddam teevee! Mathematics is more fun than those moronic flash games! Slacking off is not a guilty indulgence; it’s just stupid, and the entirety of my powers are now devoted to the monumental task of not-being-stupid. I recognize no other intertemporal selves to bargain with—I have but one Self, a timeless abstract optimization process to which this ape is but a horribly disfigured approximation. There have been times when I was tempted to go buy an ice cream (“frozen yogurt”) and even took a few steps towards the shop before thinking—is this really what I want? 
Living as I am on short time, wouldn’t I rather have those four dollars, equivalent to twenty-four minutes at my crappy dayjob? I preferred the money, so I turned and walked back to my car.
All this is not to say I am in no need of more structure—it would be helpful to keep some sort of schedule or timelog, not in the form of an oath to another self from another time, but simply as a guideline to give direction to my full autodidactic fury. I’ve experimented with this and that, to no notable success so far—but I’m going to keep hacking away at this; sunk costs can’t play into your decision theory, so no number of failures can discourage an expected utility maximizer, though such a thing might happen to a goddam ape.
Am I kidding myself?---in some sense, maybe a little. How much writing have I done?--when allegedly my lifework was supposed to be a work of fiction. Does it only seem like I’ve been being more efficient, because I’ve been doing so much math and programming which leaves a paper trail, as compared to reading which doesn’t? But for once in my life, induction is on my side now: I’ve gotten better before, so I can do so again. I don’t watch teevee any more, and I don’t play flash games—I’m not even tempted. I don’t know what my limits are. So help me.