Why vote down this simple question? Is it a point of sensitivity—sufficient to drive Nesov to the passive voice? Don’t other readers want to know who decides forum policies?
why discussion of potential LW features didn’t in-advance-to-me seem very publicky
Do we really need another layer of verbal obfuscation, particularly of the cutesy variety?
I like to make things explicit sometimes.
Although rarely.
For example, I wish you’d be more explicit about your disillusionment with LW culture. A few months ago, you were a loyal oppositionist, advising LW on how it can get bigger, while cheering Yudkowsky when he autocratically manipulated the placement of discussion items.
Remember? You advocated for contrarianism; I (or an alter) advocated for dissent. And now you’ve become the dissent I advocated, if not in content then at least in form.
Reading troll comments has negative utility. Replying to a troll means causing that loss of utility to each reader who wants to read the reply (times the probability that they read the troll when reading the reply)
That’s exactly the kind of consideration that should lead people to downvote responses to “trolls.” If you think someone is stupidly “feeding trolls,” you should downvote them.
It seems that E.Y. is miffed that readers aren’t punishing troll feeders enough and that he’s personally limited to a single downvote. As an end-run around this sad limitation, he seeks to multiply his downvote sixfold by instituting an automatic penalty for this class of downvotable comment.
Nothing is so outrageously bad about troll feeding that it can’t be controlled by the normal means of karma allocation. The bottom line is that readers simply don’t mind troll feeding as much as E.Y. minds it; otherwise they’d penalize it more by downvotes. E.Y. is trying to become more of an autocrat.
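To make the quoted rationale concrete, here’s a minimal sketch of the expected-utility arithmetic it appeals to. The reader count, probability, and per-read cost are invented placeholders, not measurements of anything on LW.

```python
# Hypothetical illustration of the quoted expected-utility rationale.
# All numbers are invented placeholders.

def expected_utility_loss(n_readers, p_reads_troll, cost_per_troll_read):
    """Expected total disutility of one reply to a troll: each reader of
    the reply may be led to read the troll comment itself."""
    return n_readers * p_reads_troll * cost_per_troll_read

# Example: 200 readers of the reply, a 30% chance each follows the thread
# back to the troll comment, and a cost of 1 "utilon" per troll-read.
loss = expected_utility_loss(n_readers=200, p_reads_troll=0.3,
                             cost_per_troll_read=1.0)
print(loss)  # 60.0 -- the loss scales with readership, not with one downvote
```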
This rule is asinine.
Indeed. But why is our Rational Leader overreacting to what would seem a minor issue? The question bears analysis. Have some of the public exposures of SIAI left him feeling particularly vulnerable to criticism?
Your checklist omits the main purpose actually served by upvoting and downvoting: determining the LW consensus. The main factor (but not the only factor) determining a vote is whether the rater agrees with the poster. Perhaps that’s as it should be, since without it, LW could never evolve a party line (so to speak).
You could add agree/disagree to the list, but that would undercut your purpose, focusing on quality rather than agreement. But it isn’t abstract quality, if there is such a thing, that LW is after: it wants to advance its brand of rationalism, which requires deciding what that brand is. This is the main function of voting and karma, but it is subject to much denial.
I experiment with these things. Based on a period of years, during which I occupied different personae, it’s clear to me that I can get upvoted reliably if I clearly articulate and intelligently apply the LW-consensus analyses. I will get downvoted severely if I—stupidly or intelligently—articulate an original position that contradicts the LW consensus. As I say, I don’t see how it can be otherwise, if LW is to function as a community with consensus views on eclectic matters.
I wonder what you mean by “in theory.” If that means “according to the standard LW rhetoric,” I agree.
it’s clear to me that I can get upvoted reliably if I clearly articulate and intelligently apply the LW-consensus analyses. I will get downvoted severely if I—stupidly or intelligently—articulate an original position that contradicts the LW consensus. --common_law.
Let me substantiate. Here’s a post where I recently articulated and applied the LW “line.”
On the other hand, here’s a post in this thread where I advance an idea that should be important to people who build a rationalist community:
The main factor (but not the only factor) determining a vote is whether the rater agrees with the poster. Perhaps that’s as it should be, since without it, LW could never evolve a party line (so to speak).
Not to be immodest, but this insight is the product of years of watching and experimenting with LW, and I only reached it recently. If it’s true, it’s important because LW is obstructed in constructing a rational community when it ignores the primary function of its “institutions” and substitutes idealistic thinking (‘an upvote means you want more of the same’) for a functional analysis.
Once you see the actual role of karma, you might realize that it couldn’t be otherwise. A massive intellectual community must find a way to evolve a dynamic consensus. It requires objective incentives to coordinate on a single outlook (or on a narrow spectrum of outlooks).
In point of fact, this is how leading LWers sometimes speak. For example, LukeProg argued that the community isn’t a cult around E.Y. because, early on, LukeProg’s own posts were in some cases upvoted while E.Y.’s were downvoted. LukeProg’s comment implies that LW took an ideological or practical direction through the karma mechanism.
In this comment I’m not evaluating the karma mechanism but pointing out that it is LW’s soul.
Will Newsome earned his karma, and he is now entitled to spend it as he pleases. Any interference with that right would be dishonorable, a moral breach of contractual obligation. Libeling him as a SuperTroll is scarcely better; posting provocative comments does not make one a troll simply because they’re mildly annoying. A malicious or disruptive intent is required, and that’s patently absent.
[A few months ago, Will Newsome corrected E.Y.’s definition of “troll”; E.Y. had called one Loosemore a troll on account of the latter’s being a liar (which he was even less than a troll). Correcting E.Y. turned Will Newsome into something of an overnight authority on the definition of “troll.” This is unfortunate, since Will’s understanding shows itself a bit defective when it faces sterner tests than Loosemore. Newsome is more trollish than Loosemore, but Newsome is no troll.]
The article is obviously embarrassing to E.Y. If he didn’t want to see this essay’s Google ranking improve, it wasn’t because of some general principle regarding “trolling.” That’s a pretty pathetic attempt at an excuse. It was something about this article. But what? Everyone thinks it’s the “moral” aspect. That may be part of his worry; if so, it suggests that the SIAI/Less Wrong complex has a structure of levels—like, say, Scientology—where the behavior of the more “conscious” is hidden from the less-conscious followers.
But let me point out a specific revelation, not so prominent in the article but really more important for assessing SIAI and LW.
The messianic Mr. Yudkowsky also helped attract funding from his friend Peter Thiel, an early Facebook investor and noted libertarian billionaire whom Forbes pegs as the 303rd richest person in America. The Thiel Foundation, Mr. Thiel’s philanthropic group, has donated at least $1.1 million to SIAI, more than four times its next largest donor. (The nonprofit’s Form 990 from 2010 shows assets of $462,470.)
How do the more left-wing members of the SIAI establishment feel about building an organization funded by (to realists, read “controlled by”) an ultra-right-wing billionaire? (It raises such questions as whether the “politics is the mind-killer” trope is in place to avoid alienating Mr. Thiel, who would be unimpressed by the anti-libertarianism of a considerable minority on LW.)
E.Y. has built a mystique about himself. Here’s this self-schooled prodigy who has somehow managed to build a massive rationalist community and to preside over a half-million-dollar nonprofit, living the good life of working only 4 hours per day (per LukeProg) and, in that time, performing only tasks he likes to do, while being paid handsomely. It’s an impressive success story. Even if you don’t think E.Y. is a great philosopher, you have to admire him (at least the way Arnold Schwarzenegger once said he admired Hitler). It does the Yudkowsky myth no service to learn that he had the help of a billionaire who almost singlehandedly funded his operations. If I’ve puzzled for years about the secret of E.Y.’s success, now I know it: he has a billionaire friend.
Caveat: Unlike many others here, I don’t like that there are billionaires. They’ve made a mockery of American politics, and their whimsical “charitable” support of intellectual factions will make a mockery of American intellectual life.
If IQ tests are ‘culturally biased’, then we would expect the highest scoring group to share the same culture as the test writers.
This assumes that if a test is culturally biased, it must be biased in favor of the test-writers’ culture as a whole. A test can be culturally biased by hyper-valuing a set of skills prominent in one culture, even if that skill set is stronger in some other culture. If IQ tests are biased, say, toward “academic culture,” that culture may be a feature of “white U.S. culture” yet even more a part of East Asian culture.
What I think your argument shows is that the tests aren’t intentionally biased in favor of one specific culture. In fact, studies of early IQ testing show there was intentional bias (not so much today), but rather than favoring the dominant culture, it was directed against the cultures of particular immigrants. (I’m speaking of the Army Alpha tests.)
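To make the skill-weighting point above concrete, here’s a toy numeric sketch; the groups, skill levels, and test weights are all invented for illustration.

```python
# Toy model: a test that over-weights "academic" skills (a feature of the
# test-writers' culture) can still be topped by a different group that is
# even stronger in that skill set. All profiles and weights are invented.

skills = {
    "test_writers_culture": {"academic": 0.7, "practical": 0.6},
    "other_culture":        {"academic": 0.9, "practical": 0.5},
}
test_weights = {"academic": 0.9, "practical": 0.1}  # biased toward academic skills

def test_score(profile):
    return sum(test_weights[s] * level for s, level in profile.items())

for group, profile in skills.items():
    print(group, round(test_score(profile), 2))
# other_culture scores 0.86 vs the test writers' 0.69: the highest-scoring
# group need not share the test writers' culture, even on a biased test.
```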
Reification seems at work in studies of the placebo effect for antidepressants. It’s found that, except for severe depressions, antidepressants may have “little or no greater benefit than placebo.” The conclusion drawn is either that antidepressants aren’t effective or that placebos are effective, when the truth is that most depressions have a short-term course, and the placebo group’s apparent improvement includes the spontaneous remissions.
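A minimal simulation of the confound just described; the remission and drug-effect rates are hypothetical, chosen only to show how spontaneous remission can masquerade as a placebo response.

```python
# If many depressions remit on their own, the placebo arm improves even
# with zero placebo effect. All rates here are hypothetical.
import random

random.seed(0)
N = 10_000
spontaneous_remission = 0.40  # assumed short-term remission rate
drug_effect = 0.10            # assumed additional remissions from the drug

placebo_arm = sum(random.random() < spontaneous_remission for _ in range(N))
drug_arm = sum(random.random() < spontaneous_remission + drug_effect
               for _ in range(N))

print(f"placebo arm improved: {placebo_arm / N:.1%}")  # ~40%, no placebo effect at all
print(f"drug arm improved:    {drug_arm / N:.1%}")     # ~50%
# Reading the ~40% as a "placebo effect" reifies spontaneous remission.
```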
One can accept materialism while remaining agnostic about whether it can explain qualia, just like one can accept economics without necessarily requiring it to explain physics.
Materialism is a philosophy which claims the primacy of physics. A materialist can be either a reductionist or an eliminativist about qualia.
The analogy to economics is bad because economics doesn’t contend that economics is primary over physics, but materialism does contend that the physical is primary over the mental.
The private-language problem ought to tell us that even if raw experiences exist, then we should not expect to have words to describe raw experience.
Wittgenstein’s private-language argument, if sound, would obviate 2c. But 3b is based on Wittgenstein’s account not succeeding in explaining the absence of private language: 3b claims to be a solution to the private-language problem, one that recognizes that Wittgenstein was unsuccessful in solving it.
The simplest explanation for the universe is that it doesn’t exist. It’s not popular, because the universe seems to exist. Explanations need to be adequate to the facts, not just simple… Since the inexpressibility of qualia can be accounted for given facts about the limited bandwidth of speech, it does not need to be accounted for all over again on the hypothesis that qualia don’t exist.
But can the inexpressibility of qualia be accounted for by such facts as mentioned? That’s the question, since the claim here is that the only supposed fact you have to support your belief that you experience qualia is your inability to doubt that you do. It’s hard to see how that’s a good reason.
Your claim to account for the ineffability of qualia based on expressive limitations is no different. No facts can tell you whether articulating qualia would exceed our expressive limitations because we have no measure of the expressive demands of a quale. The most you can say is that potential explanations might be available based on expressive limitations, despite our currently having no idea how to apply this concept to “experience.”
Whereas the argument for matter is...?
Science. Human practice. Surely not “I just can’t help believing that matter exists.”
we do all see roughly the same thing: we’ve got pretty much the same sensory organs & brains to process what is roughly the same data. It seems reasonable to expect that most members of a given species should experience roughly the same picture of the world.
To my disappointment, David Papineau concluded the same, but we can’t compare differences in pictures of the world to differences in brain structure or function, because each of us has access to only a single example of a “picture of the world.” “Pretty much the same sensory organs & brains” is useless because of its vagueness.
So much for the first problem, at least in brief & from a pragmatic point of view. The skeptical philosopher must admit that this is a silly problem to demand a decisive answer to.
To the contrary, the qualia problem is exactly the sort of problem to which philosophy can provide a decisive answer. For example, that we can’t frame the qualitative differences between persons conceptually should lead philosophers to doubt the coherence of the qualia concept.
Perhaps the notion that innate concepts might be incoherent is what creates the confusion?
The goal we like to aim for here in “dissolving” problems is not just to show that the question was wrongheaded, but thoroughly explain why we were motivated to ask the question in the first place.

If qualia don’t exist for anyone, what causes so many people to believe they exist and to describe them in such similar ways? Why does virtually everyone with a philosophical bent rediscover the “hard problem”?
I think this objection applies to Dennett or Churchland’s account but not to mine. The reason the qualia problem is compelling, on my account, is that we have an innate intuition of direct experience. There is indeed some mystery about why we have such an intuition when, on the analysis I provide, the intuition seems to serve no useful purpose, but the answer to that question lies in evolution.
The only answer to “why we were motivated to ask the question?” is the answer to “why did evolution equip us with this nonfunctional intuition?” What other question might you have in mind?
A suggested answer to the evolutionary question is contained in another essay, “The supposedly hard problem of consciousness and the nonexistence of sense data: Is your dog a conscious being?”.
But I don’t follow that “merely showing a problem is wrongheaded” would be tantamount to “just [rationalizing] it away.” You would be justified in declining to count a showing of wrongheadedness as a complete dissolution, but that doesn’t make a demonstration of wrongheadedness unsound. The reasonable response to such a showing is to conclude that there are no qualia and then to look for the answers to why they seem compelling.
Why would charities behave any differently than profit-making assets? Do you think that charities have less uncertainties?
The confusion concerns whose risk is relevant. When you invest in stocks, you want to minimize the risk to your assets. So, you will diversify your holdings.
When you contribute to charities, if rational, you should (with the caveats others have mentioned) minimize the risk that a failing charity will prove crucial, not the risk that your individual contribution will be wasted. If you take a broad, utilitarian overview, you incorporate the need for diversified charities into your utility judgment. If charities a and b are equally likely to pay off, but charity a is much smaller and should receive more contributions to protect the cause against risk, then you take that into account when deciding between a and b, and it leads you to contribute everything to a for the sake of diversification. (It’s this dialectical twist that confuses people.)
If your contribution is large enough relative to the distinctions between charities, then diversification makes sense but only because your contribution is sufficient to tip the objective balance concerning the desirable total contributions to the charities.
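A sketch of the allocation logic above, assuming a concave (logarithmic) benefit in each charity’s total funding; the charity names, prior funding levels, and budget figures are invented.

```python
# With diminishing returns to *total* funding, a small donor's optimum is a
# corner solution (everything to one charity); only a donor large enough to
# move the margins should split. All figures are invented for illustration.
import math

EXISTING = {"a": 50_000.0, "b": 500_000.0}  # prior funding; a is the smaller charity

def cause_utility(totals):
    # concave benefit of each charity's total funding
    return sum(math.log(t) for t in totals.values())

def best_gift_to_a(budget, steps=1000):
    best = max(
        range(steps + 1),
        key=lambda i: cause_utility({
            "a": EXISTING["a"] + budget * i / steps,
            "b": EXISTING["b"] + budget * (1 - i / steps),
        }),
    )
    return budget * best / steps  # amount given to charity a

print(best_gift_to_a(1_000))      # 1000.0: the small donor gives everything to a
print(best_gift_to_a(1_000_000))  # 725000.0: only a huge donor splits a and b
```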
Newsome a SuperTroll? Do you really think Newsome contributes less, substantively, than, say, you?