You probably don’t want to get rationality mixed up in tribal identification, which is the ostensible purpose of such a symbol.
The vast majority of the time you’re playing World of Warcraft, you probably aren’t actually going to be enjoying it. If you experience similar numbness during sex, you probably shouldn’t engage in that, either. (This is probably the simplest of several correct answers to the question, but it applies even if you don’t get addicted.)
People who get dumped want to know their partners’ reasons for breaking up, not the biological etiology of those reasons. They are very likely to take lengthy discourses into the latter as insensitive, obfuscatory deflections (and probably correctly so).
You may want to look at Brandon Fitelson’s short paper “Evidence of evidence is not (necessarily) evidence.” You seem to be arguing that, since we have strong evidence that the book has strong evidence for Zoroastrianism before we read it, it follows that we already have (the most important part of) our evidence for Zoroastrianism. But it turns out that it’s extremely tricky to make this sort of reasoning work. To use the most primitive example from the paper, discovering that a playing card C is black is evidence that C is the ace of spades. Furthermore, that C is the ace of spades is excellent evidence that it’s an ace. But discovering that C is black does not give you any evidence whatsoever that C is an ace.
The problem here—at least one of them—is that discovering C is black is just as much evidence for C being the x of spades for any other card-value x. Similarly, before opening the book on Zoroastrianism, we have just as much evidence for the existence of strong evidence for Christianity/atheism/etc., so our credences shouldn’t suddenly start favoring any one of these. But once we learn the evidence for Zoroastrianism, we’ve acquired new information, in just the same way that learning that the card is the ace of spades provides us with new information if we previously just knew it was black.
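For concreteness, here is the card arithmetic spelled out. A minimal sketch in Python; the fractions are just the standard 52-card deck probabilities:

```python
from fractions import Fraction

# Standard 52-card deck: 26 black cards, 4 aces, 2 of which are black.
p_ace             = Fraction(4, 52)   # P(ace) = 1/13
p_ace_given_black = Fraction(2, 26)   # P(ace | black) = 1/13

p_aos             = Fraction(1, 52)   # P(ace of spades)
p_aos_given_black = Fraction(1, 26)   # P(ace of spades | black)

assert p_aos_given_black > p_aos    # "black" IS evidence for "ace of spades"
assert p_ace_given_black == p_ace   # but it is NOT evidence for "ace"
```

Conditioning on “black” doubles the probability of every black card and zeroes out every red one, so the boost to the two black aces is exactly cancelled by the loss of the two red aces.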
I do suspect that there are relevant disanalogies here, but don’t have a very detailed understanding of them.
I believe he’s trying to draw a distinction between two potential sources of evidence:
The factual claim that people believe zombies are conceivable, and
The actual private act of conceiving of zombies.
Richard is saying that his justification for his belief that p-zombies are conceivable lies in his successful conception of p-zombies. So what licenses him to believe that he’s successfully conceived of zombies after all? His answer is that he has direct access to the contents of his conception, in the same way that he has access to the contents of his perception. You don’t need to ask, “How do I know I’m really seeing blue right now, and not red?” Your justification for your belief that you’re seeing blue just is your phenomenal act of noticing a real, bluish sensation. This justification is “direct” insofar as it comes directly from the sensation, and not via some intermediate process of reasoning which involves inferences (which can be valid or invalid) or premises (which can be true or false). Similarly, he thinks his justification for his belief that p-zombies are conceivable just is his p-zombie-ish conception.
A couple of things to note. One is that this evidence is wholly private. You don’t have direct access to his conceptions, just as you don’t have direct access to his perceptions. The only evidence Richard can give you is testimony. Moreover, he agrees that testimony of this sort is extremely weak evidence. But it’s not the evidence he claims that his belief rests on. The evidence that Richard appeals to can be evidence-for-Richard only.
Another thing is that the direct evidence he appeals to is not “neutral.” If p-zombies really are inconceivable, then he’s in fact not conceiving of p-zombies at all, and so his conception, whatever it was, was never evidence for the conceivability of p-zombies in the first place (in just the same way that seeing red isn’t evidence that you’re seeing blue). So there’s no easy way to set aside the question of whether Richard’s conception is evidence-for-him from the question of whether p-zombies are in general conceivable. The worthiness of Richard’s source of evidence is inextricable from the actual truth or falsehood of the claim in contention, viz., that p-zombies are conceivable. But he thinks this isn’t a problem.
If you want to move ahead in the discussion, then the following are your options:
You simply deny that Richard is in fact conceiving of p-zombies. This isn’t illegitimate, but it’s going to be a conversation-stopper, since he’ll insist that he really does have such conceptions but that they’re private.
You accept that Richard can successfully conceive of p-zombies but deny that this is good evidence for their possibility (or hold that the very notion of “possibility” in this context is far too problematic to be useful).
You deny that we have direct access to anything, or that access to conceptions in particular is direct, or that one can ever have private knowledge. If you go this route, you have to be careful not to set yourself up for easy reductio. Specifically, you’d better not be led to deny the rationality of believing that you’re seeing blue when, e.g., you highlight this text.
I hope this helps clear things up. It pains me when people interpret their own confusion as evidence of some deep flaw in academic philosophy.
Distinguish positive and negative criticisms: Those aimed at demonstrating the unlikelihood of an intelligence explosion and those aimed at merely undermining the arguments/evidence for the likelihood of an intelligence explosion (thus moving the posterior probability of the explosion closer to its prior probability).
Here is the most important negative criticism of the intelligence explosion: possible harsh diminishing returns on intelligence amplification. Let f(x, y) measure the difficulty (perhaps in expected amount of time to complete development) for an intelligence of IQ x to engineer an intelligence of IQ y. The claim that intelligence explodes is roughly equivalent to the thesis that f(x, x+1) decreases relatively quickly as x grows. What is the evidence for this claim? I haven’t seen a huge amount. Chalmers briefly discusses the issue in his article on the singularity and points to how amplifying a human being’s intelligence from average to Alan Turing’s level has the effect of amplifying his intelligence-engineering ability from more or less nil to being able to design a basic computer. But “nil” and “basic computer” are strictly stupider than “average human” and “Alan Turing,” respectively. So this is evidence that a curve like f(x, x-1), the difficulty of creating a being slightly stupider than yourself given your intelligence level, decreases relatively quickly. But the shapes of f(x, x+1) and f(x, x-1) are unrelated: the one can increase exponentially while the other decays exponentially. (Proof: set f(x, y) = e^(y^2 - x^2).)
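The counterexample is easy to check numerically. A minimal sketch, using the function named in the proof with small inputs to keep the exponentials tame:

```python
import math

def f(x, y):
    # Counterexample from above: f(x, y) = e^(y^2 - x^2).
    return math.exp(y**2 - x**2)

# f(x, x+1) = e^(2x+1): building something slightly SMARTER than yourself
# gets exponentially harder as x grows...
assert f(1, 2) < f(5, 6) < f(10, 11)

# ...while f(x, x-1) = e^(1-2x): building something slightly STUPIDER
# than yourself gets exponentially easier.
assert f(1, 0) > f(5, 4) > f(10, 9)
```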
See also JoshuaZ’s insightful comment here on how some of the concrete problems involved in intelligence amplification are linked to some (very likely) computationally intractable problems from CS.
Philosophy courses did, seminar-style analytic philosophy classes in particular. (I wouldn’t say that history of philosophy classes altered the way I thought, though I can totally see how Hume might be shocking to someone very new to the subject.) Aside from the actual content I learned, I got the following out of them:
The mental habit of condensing complicated lines of reasoning into minimal, fairly linear syllogisms, so that all of the logical dependencies and likely points of failure among the premises/inferences become much more obvious.
Relatedly, an eagerness to search for ambiguities in arguments and to enumerate all their possible disambiguations, with an eye for the most charitable/defensible contenders.
An appreciation for fine distinctions underlying seemingly straightforward concepts. (E.g., there are several related but distinct concepts that map onto the notion of a word or sentence’s meaning.) These often have unexpected implications and/or vitiate seemingly plausible inferences.
Not being allowed, on pain of embarrassment or a bad grade, to get away with BSing or relying on unacknowledged, controversial assumptions. You have to be up-front about precisely what you mean and what’s at stake.
Realization of the extreme rarity of knock-down arguments for any view, and the subsequent adjustment to the fact that assessing pretty much every philosophical question involves a robust trade-off of good and bad consequences. Sometimes every view on the table seems to imply something crazy, and you have to learn to accept that. And to accept that sometimes reality is crazy. (Yes, I know that the map is not the territory, etc.)
If arguments are soldiers, then at least learning to let some of your soldiers die—and sometimes even putting them out of their misery yourself! It’s very common in philosophical writing to go through all of the failed arguments for your view before moving on to the ones you find more promising. Even then, it’s expected that you highlight their most vulnerable spots.
Epistemic humility. I’m much less prone to drawing hasty, high-certainty conclusions on a given topic before I find out the best of what all sides have to offer. I definitely still form fast and intuitive judgments before investigating disputed subjects deeply, but I don’t pretend that they’re likely to be the last word, or even novel contributions that haven’t already faced high-level criticism.
A much richer sense of the space of philosophical views. But then again, probably something analogous holds for most other disciplines (biologists are presumably better-tuned to the space of biological hypotheses). Still, though, philosophical-view-space intersects an unusually large number of things.
I don’t know if you’re in need of any of these things, or if you’re likely to acquire them through a small handful of philosophy classes. Even if you are, whether or not you’d succeed greatly depends on the quality of your teachers and classmates.
Causation, Probability and Objectivity
And like winning the lottery, it doesn’t provide you with a tremendous amount of status. :\
I would call the ‘real reasons’ typically given obfuscatory deflections. People seldom know the actual reasons why they want to break up. More often they are explicitly aware of one of the downstream effects of the actual reason.
I’m sure that’s the case. But my point was that if the real reason for the break-up was “I want to be with someone who possesses quality X that you lack,” then tacking on “...because evolution made me that way” does not render the reason more real or add an additional, separate reason; it just explains the one reason further, in a mostly irrelevant way.
When I told people about the plan in #1, though, it was because I wanted them to listen to me. I was back off the brink for some reason, and I wanted to talk about where I’d been. Somebody who tells you they’re suicidal isn’t asking you to talk him out of it; he’s asking you to listen.
Just wanted to say that I relate very strongly to this. When I was heavily mentally ill and suicidal, I was afraid of reaching out to other people precisely because that might mean I only wanted emotional support rather than being serious about killing myself. People who really wanted to end their lives, I reasoned, would avoid deliberately setting off alarm bells in others that might lead to interference. That I eventually chose to open up about my psychological condition at all (and thereby deviate from the “paradigmatic” rational suicidal person) gave me evidence that I didn’t want to kill myself and helped me come to terms with recovering. Sorry if this is rambling.
You mentioned recently that SIAI is pushing toward publishing an “Open Problems in FAI” document. How much impact do you expect this document to have? Do you intend to keep track? If so, and if it’s less impactful than expected, what lesson(s) might you draw from this?
I guess I can’t really imagine how you came to that conclusion. You seem to be going preposterously overboard with your enthusiasm for LW here. Don’t mean to offend, but that’s the only way I know how to express the extent of my incredulity. Can you imagine a message board of dabblers in molecular biology congratulating each other over the advantages their board’s upvoting system has over peer review?
Digression into a bunch of theory and science impersonalizes things as well as focussing on ‘me’ instead of ‘you’
Not really. Any evolutionary explanation of why I am repulsed by your physical appearance is going to spend a lot of time dwelling on your physical appearance. And I think the impersonalization bit is the key—it is a ridiculously impersonal digression at a moment of extreme emotional vulnerability on the other person’s part. Most people will interpret impersonal explanations of this sort of emotionally impactful decision as an extremely cold-hearted way of excusing oneself. “I’m sorry I’ve just hurt your feelings. But allow me to explain how this is all just the work of the forces of sexual selection in our ancestral environment...”
The word “cult” never makes discussions like these easier. When people call LW cultish, they are mostly just expressing that they’re creeped out by various aspects of the community—some perceived groupthink, say. Rather than trying to decide whether LW satisfies some normative definition of the word “cult,” it may be more productive to simply inquire as to why these people are getting creeped out. (As other commenters have already been doing.)
Another thing: We need to distinguish between getting better at designing intelligences vs. getting better at designing intelligences which are in turn better than one’s own. The claim that “the smarter you are, the better you are at designing intelligences” can be interpreted as stating that the function f(x, y) outlined above is decreasing in x for any fixed y. But the claim that the smarter you are, the easier it is to create an intelligence even smarter is totally different, and equivalent to the aforementioned thesis about the shape of f(x, x+1).
I see the two claims conflated shockingly often, e.g., in Bostrom’s article, where he simply states:
Once artificial intelligence reaches human level, there will be a positive feedback loop that will give the development a further boost. AIs would help constructing better AIs, which in turn would help building better AIs, and so forth.
and concludes that superintelligence inevitably follows, with no intermediate reasoning at the software level. (Actually, he doesn’t state that outright, but the sentence is at the beginning of the section entitled “Once there is human-level AI there will soon be superintelligence.”) That an IQ 180 AI is (much) better at developing an IQ 190 AI than a human is doesn’t imply that it can develop the IQ 190 AI faster than the human can develop the IQ 180 AI.
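The counterexample function from the earlier comment separates the two claims. A sketch, comparing log-difficulties so the numbers stay manageable, and assuming an illustrative human baseline of IQ 100 (the 180/190 figures are from the text; the baseline is my addition):

```python
def log_f(x, y):
    # Log of the counterexample f(x, y) = e^(y^2 - x^2). Since exp is
    # monotone, comparing log-difficulties is the same as comparing f.
    return y**2 - x**2

HUMAN, AI, SMARTER_AI = 100, 180, 190  # illustrative IQ levels only

# Bostrom's premise holds here: for the FIXED target of an IQ 190 AI, the
# IQ 180 AI is a far better designer than the human (f is decreasing in x).
assert log_f(AI, SMARTER_AI) < log_f(HUMAN, SMARTER_AI)

# Yet the feedback loop still decelerates: each fixed-size step up in
# intelligence is harder than the one before (f(x, x+10) grows with x).
assert log_f(AI, AI + 10) > log_f(HUMAN, HUMAN + 10)
```

So “AIs would help building better AIs” can be true at every step while the time between steps stretches out instead of shrinking.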
I spent a week looking for counterarguments, to check whether I was missing something
What did you find? Had you missed anything?
I anticipate preference for the current gender system to be approximately the same across the sexes (and also fairly widespread).
I’d imagine it’s virtually universal. Transhumanists are a tiny population, and I can’t think of anyone outside that population who would even consider revising such a basic facet of human life. Those few who’ve been posed the question of “Should we add or remove a gender?” in earnest would assuredly respond with an incredulous stare. Maybe some feminist academics have discussed it, though.
If your point is that going on about evolutionary psychology adds to the obfuscation but not to the insensitivity, I disagree. There are often ways of more or less sensitively coming clean about (what one takes to be) one’s true reasons for breaking up. Maybe you wouldn’t get as specific as “you’re too fat,” but you could talk about lack of physical chemistry or whatever without uttering a falsehood or being too badly misunderstood. But there is no way of sensitively taking your devastated ex aside and handing him/her a Tooby and Cosmides paper to read for homework.
Something has gone horribly wrong here.