Selective prosecution is a problem that can be laid at the prosecutor’s doorstep, and if proven, it’s a basis for dismissal. The problem is that it’s hard to prove, not that the prosecutor and cops aren’t responsible. (See my essay “Threat to advocacy from overdeterrence.”)
Do you agree that there is something in reality that is actually green, namely, certain parts of experiences?
No. Why do you believe there is? Because you seem to experience green? Since greenness is ontologically anomalous, what reason is there to think the experience isn’t an illusion?
I asked my professor, “But don’t we want to know the probability of the hypothesis we’re testing given the data, not the other way around?” The reply was something about how this was the best we could do.
One senses that the author (the one in the student role) has neither understood the relative-frequency theory of probability nor performed any empirical research using statistics—lending the essay the tone of an arrogant neophyte. The same perhaps goes for the professor. (Which institution is on report here?) Frequentists reject the very concept of “the probability of the theory given the data.” They take probabilities to be objective, so they think it a category error to speak of the probability of a theory: the theory is either true or false, and probability has nothing to do with it.
You can reject relative-frequentism (I do), but you can’t successfully understand it in Bayesian terms. As a first approximation, it may be better understood in falsificationist terms. (Falsificationism keeps getting trotted out by Bayesians, but that construct has no place in a Bayesian account. These confusions are embarrassingly amateurish.) The Fisher paradigm is that you want to show that a variable made a real difference—that what you discovered wasn’t due to chance. However, there’s always the possibility that chance intervened, so the experimenter settles for a low probability that chance alone was responsible for the result. If that probability (the p value) is low enough, you treat it as sufficiently unlikely not to be worth worrying about, and you can reject the hypothesis that the variable made no difference.
If, like me, you think it makes sense to speak of subjective probabilities (whether exclusively or along with objective probabilities), you will usually find an estimate of the probability of the hypothesis given the data, as generated by Bayesian analysis, more useful. That doesn’t mean it’s easy or even possible to do a Bayesian analysis that will be acceptable to other scientists. To get subjective probabilities out, you must put subjective probabilities in. Often the worry is said to be the infamous problem of estimating priors, but in practice the likelihood ratios are more troublesome.
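To make explicit where the subjective inputs enter, here is the standard form of Bayes’ theorem for a hypothesis H and data D (nothing beyond the textbook identity is assumed):

```latex
P(H \mid D) = \frac{P(D \mid H)\,P(H)}
                   {P(D \mid H)\,P(H) + P(D \mid \neg H)\,P(\neg H)}
```

The analyst must supply both the prior P(H) and the likelihood under the alternative, P(D | ¬H); the latter is where the likelihood-ratio trouble just mentioned lives.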
Let’s say I’m doing a study of the effect of arrogance on a neophyte’s confidence that he knows how to fix science. I develop and norm a test of Arrogance/Narcissism and also an inventory of how strongly held a subject’s views are in the philosophy of science and the theory of evidence. I divide the subjects into two groups according to whether they fall above or below the A/N median. I then use Fisherian methods to determine whether there’s an above-chance level of unwarranted smugness among the high A/N group. Easy enough, but limited. It doesn’t tell me what I most want to know: how much credence I should put in the results. I’ve shown there’s evidence for an effect, but there’s always evidence for some effect: the null hypothesis, strictly speaking, is always false. No two entities outside of fundamental physics are exactly the same.
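For concreteness, here is a minimal sketch of that Fisherian analysis in Python; the data are simulated, and scipy’s two-sample t-test stands in for whatever the real study would use:

```python
# Minimal sketch of the median-split significance test described above.
# The data are simulated; nothing here comes from a real study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
arrogance = rng.normal(50, 10, 200)                   # hypothetical A/N test scores
smugness = 0.2 * arrogance + rng.normal(0, 10, 200)   # hypothetical Smugness inventory

# Median split on Arrogance/Narcissism
high = smugness[arrogance > np.median(arrogance)]
low = smugness[arrogance <= np.median(arrogance)]

# p is the probability of a group difference at least this large
# if chance alone were responsible (the null hypothesis)
t, p = stats.ttest_ind(high, low)
print(f"t = {t:.2f}, p = {p:.4f}")  # reject the null if p falls below, say, .05
```

Note what the procedure delivers: a verdict on whether chance alone plausibly produced the difference, and nothing about how much credence the effect deserves.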
Bayesian analysis promises more, but whereas other scientists will respect my crude frequentist analysis as such—although many will denigrate its real significance—many will reject my Bayesian analysis out of hand due to what must go into it. Let’s consider just one of the factors that must enter the Bayesian analysis. I must estimate the probability that the high-Arrogance subjects will score higher on Smugness if my theory is wrong, that is, if arrogance really has no effect on Smugness. Certainly my Arrogance/Narcissism test doesn’t measure the intended construct without impurities. I must estimate the probability that all the impurities combined, or any of them, confound the results. Maybe high-Arrogance scorers are dumber in addition to being more arrogant, and that is what’s responsible for some of the correlation. Somehow, I must come up with a responsible way to estimate the probability of getting my results if Arrogance had nothing to do with Smugness. Perhaps I can make an informed approximation, but it will be unlikely to dovetail with the estimates of other scientists. Soon we’ll be arguing about my assumptions—and what we’ll be doing will be more like philosophy than empirical science.
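A toy calculation shows how much turns on that contested quantity. All the numbers below are invented solely to display the sensitivity:

```python
# Toy illustration: the posterior swings on the contested likelihood
# P(data | no effect). Every number here is hypothetical.
def posterior(prior, p_data_if_effect, p_data_if_no_effect):
    """Bayes' theorem for a binary hypothesis."""
    num = p_data_if_effect * prior
    return num / (num + p_data_if_no_effect * (1 - prior))

prior = 0.5               # indifferent prior on "arrogance affects smugness"
p_data_if_effect = 0.8    # assumed likelihood of my results if the theory is right

# Two scientists who disagree only about the confounds (the test's
# impurities) reach very different posteriors from the same data:
for p_null in (0.1, 0.4):
    print(f"P(data|no effect) = {p_null}: "
          f"posterior = {posterior(prior, p_data_if_effect, p_null):.2f}")
# prints 0.89 for 0.1 and 0.67 for 0.4
```

The argument over which value of P(data | no effect) to plug in is exactly the quasi-philosophical argument about assumptions predicted above.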
The lead essay provides a biased picture of the advantages of Bayesian methods by completely ignoring their problems. A poor diet for budding rationalists.
Winning here means accomplishing your goals...
What justifies this unconventional definition of the word? Random House Dictionary offers three senses of “win”:
1. to finish first in a race, contest, or the like.
2. to succeed by striving or effort: He applied for a scholarship and won.
3. to gain the victory; overcome an adversary: The home team won.
Notice that two of the three involve a contest against another; definition 2 is closer to what’s wanted, but the connection between winning and competition is so strong that, when offering an example, the dictionary editors chose a competitive one.
This unconventional usage encourages equivocation, and it appeals to the hyper-competitive, while repelling those who shun excessive competition. Why LW’s attachment to this usage? It says little for the virtue of precision; it makes LWers seem like shallow go-getters who want to get ahead at any cost.
Not LessWronger Bayesians, in my experience.
What about:
Do you believe that elan vital explains the mysterious aliveness of living beings? Then what does this belief not allow to happen—what would definitely falsify this belief? (emphasis added) — Making Beliefs Pay Rent (in Anticipated Experiences)
[i]n your theory A, you can predict X with probability 1...
This seems the key step for incorporating falsification as a limiting case; I contest it. The rules of Bayesian rationality preclude assigning an a priori probability of 1 to a synthetic proposition: nothing empirical is so certain that refuting evidence is impossible. (Is that assertion self-undermining? I hope that worry can be bracketed.) As long as you avoid assigning probabilities of 1 or 0 to priors, you will never get an outcome at those extremes.
But since P(X|A) is always “intermediate,” observing X will never strictly falsify A—which is a good thing, because the falsification prong of Popperianism has proven at least as scientifically problematic as the nonverification prong.
I don’t think falsification can be squared with Bayes, even as a limiting case. In Bayesian theory, verification and falsification are symmetric (as the slider metaphor really indicates). In principle, you can’t strictly falsify a theory empirically any more (or less) than you can verify one. Verification, as the quoted essay confirms, is blocked by the greater-than-zero probability mandatorily assigned to unpredicted outcomes; falsification is blocked by the less-than-one probability mandatorily assigned to the expected results. It is no less irrational to be certain that X holds given A than to be certain that X fails given not-A. You are no more justified in assuming absolutely that your abstractions don’t leak than in assuming you can range over all explanations.
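The symmetry is easiest to see in the odds form of Bayes’ theorem (a standard identity, included only to make the claim explicit):

```latex
\underbrace{\frac{P(A \mid X)}{P(\neg A \mid X)}}_{\text{posterior odds}}
= \underbrace{\frac{P(X \mid A)}{P(X \mid \neg A)}}_{\text{likelihood ratio}}
\times \underbrace{\frac{P(A)}{P(\neg A)}}_{\text{prior odds}}
```

So long as no probability on the right is pinned at 0 or 1, the likelihood ratio and the prior odds are both finite and nonzero, and therefore so are the posterior odds: no observation can drive P(A|X) to exactly 1 (strict verification) or exactly 0 (strict falsification). The two failures are mirror images.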
This throws the baby out with the bathwater; we can falsify and verify to degrees. Refusing the terms verify and falsify because we are not able to assign infinite credence seems like a mistake.
I agree; that’s why “strictly.” But you seem to miss the point, which is that falsification and verification are perfectly symmetric: whether you call the glass half empty or half full on either side of the equation wasn’t my concern.
Two basic criticisms apply to Popperian falsificationism: 1) it ignores verification (although the “verisimilitude” doctrine tries to overcome this limitation); and 2) it does assign infinite credence to falsification.
No. 2 doesn’t comport with the principles of Bayesian inference, but seems part of LW Bayesianism (your term):
This allowance of a unitary probability assignment to evidence conditional on a theory is a distortion of Bayesian inference. The distortion introduces an artificial asymmetry into the Bayesian handling of verification versus falsification. It is irrational to pretend—even conditionally—to absolute certainty about an empirical prediction.
I assume that we’re talking about opinions on factual matters, not personal values. Yes, one’s fundamental (terminal) values I would expect to be pretty stable.
To my thinking, this stance forfeits rational reflection where it really counts most. You’re saying, if I understand you, that you respect people who change their opinions on factual matters, but not on questions of fundamental ethics. This seems to assume, among other things, that people’s values are much more coherent than they are (leaving little leverage for change).
You lose much more status, it is true, when you re-evaluate your terminal values than when you re-evaluate your factual contentions. That just means the problems of self-confirmation are compounded in ethics, not that they should be ignored there. You can’t be rational yet rigidly maintain your terminal values’ immunity to rational argument.
In the second case, you only need to construct enough alternative outcomes to certify your claim. In the first case, you need to prove a universal statement about all possible theories.
All these arguments are at best suggestive. Our abductive capacities suggest that proving a universal statement about all possible theories isn’t necessarily hard. Your arguments, I think, flow from and then confirm a nominalistic bias: accept concrete data; beware of general theories.
There are universal statements known with greater certainty than any particular data, e.g., life evolved from inanimate matter and mind always supervenes on physics.
A blog that takes a rational approach to writing improvement is Disputed Issues.
So in summary, I am very curious about this situation; why would a community that has been—to me, almost shockingly—consistent in its dedication to rationality, and honestly evaluating arguments regardless of personal feelings, persecute someone simply for presenting a dissenting opinion?
The answer is probably that you overestimate that community’s dedication to rationality because you share its biases. The main post demonstrates an enormous conceit among the SI vanguard. Now, how is that rational? How does it fail to get extensive scrutiny in a community of rationalists?
My take is that neither side in this argument distinguished itself. Loosemore called for an “outside adjudicator” to settle a scientific argument. What kind of obnoxious behavior is that, when one finds oneself losing an argument? Yudkowsky (rightfully pissed off), in turn, convicted Loosemore of a scientific error, tarred him with incompetence and dishonesty, and banned him. None of these “sins” deserved a ban (no wonder the raw feelings come back to haunt); no honorable person would accept a position in which he has the authority to exercise such power (a party to a dispute is biased). Or at the very least, he wouldn’t use it the way Yudkowsky did, when he was the banned party’s main antagonist.
Because some people like my earlier papers and think I’m writing papers on the most important topic in the world?
But then you put your intellect at issue, and I think I’m entitled to opine that you lack the qualities of intellect that would make such a recommendation credible. You’re a budding scholar, a textbook writer at heart; you lack the originality of a thinker.
You confirm the lead poster’s allegations that SI staff are insular and conceited.
I believe everyone except Eliezer currently makes between $42k/yr and $48k/yr — pretty low for the cost of living in the Bay Area.
Damn good for someone just out of college—without a degree!
That’s LW “rationality” training for you—“fundamental attribution error” out of context—favored because it requires little knowledge of or training in psychology. Such thinking would preclude any investigation of character. (And there are so many taboos! How do you all tolerate the lockstep communication required here?)
Paul Meehl, who famously studied clinical versus statistical prediction empirically, noted that even professionals, when confronted by an instance of aberrant behavior, are apt to call it within normal range when it clearly isn’t. Knowledge of the “fundamental attribution error” alone is the little bit of knowledge that’s worse than total ignorance.
Ask yourself honestly whether you would ever do, or have ever done, anything comparable to what Yudkowsky did in the Roko incident or what Romney did in the hair-cutting incident.
You can’t dismiss politics just because it kills some people’s minds, when so much of the available information and so many of the examples come from politics. (There are other reasons, but that’s the main one here.) Someone who can’t be rational about politics simply isn’t a good rationalist. You can’t be a rationalist about the unimportant things and irrational about the important ones—yet call yourself a rationalist overall.
This comment seems strange. Is having an ax to grind opposed to rationality? Then why does Eliezer Yudkowsky, for example, not hesitate to advocate for causes such as friendly AI? Doesn’t he have an ax to grind? More of one really, since this ax chops trees of gold.
It would seem that intellectual honesty requires you to say you reject discussions with people with an ax to grind—unless you grind a similar ax.
That’s a restrictive definition of “ax to grind,” by the way—it’s normally used to mean any special interest in the subject: “an ulterior often selfish underlying purpose” (Merriam-Webster’s Collegiate Dictionary).
But I might as well accept your meaning for discussion purposes. If you detect unacknowledged resentment in srdiamond, don’t you detect unacknowledged ambition in Eliezer Yudkowsky?
There’s actually good reason for the broader meaning of “ax to grind.” Any special stake is a bias. I don’t think you can say that someone who you think acts out of resentment, like srdiamond, is more intractably biased than someone who acts out of other forms of narrow self-interest, which almost invariably applies when someone defends something he gets money from.
I don’t think it’s a rational method to treat people differently, as inherently less rational, when they seem resentful. It is only one of many difficult biases. Financial interest is probably more biasing. If you think the arguments are crummy, that’s something else. But the motive—resentment or finances—should probably have little bearing on how a message is treated in serious discussion.
Not to be cynical, PhilGoetz, but isn’t Holden an important player in the rational-charity movement? Wouldn’t the ultimate costs of ignoring Holden be prohibitive?
That’s a reasonable answer.
I don’t like contrarians, but I think honest and fundamental dissent is vital.
A recent finding in applied psychology is that small incentives can have large consequences. I think the importance of the upvote/downvote ratio is underestimated. The ratio currently is obviously greater than 1; I don’t know how much greater. (Who does?) This creates an asymmetry in which, below zero, each downvote has disproportionate stigmatizing power, creating an atmosphere of apprehension among dissenters. The complexion of postings might change if downvoting and upvoting rights were issued so that the numbers tended to be equal, as in the sketch below. A downvote should simply mean the opposite of an upvote; it shouldn’t be the rare failing mark. Then the outcome is truly more like a vote than a blackballing.
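A minimal code sketch of the rationing idea, purely illustrative (the class, the method names, and the 1:1 budget rule are my assumptions, not a description of LW’s actual software):

```python
# Hypothetical sketch: downvote rights are rationed against upvotes cast,
# so the two totals tend toward parity. Purely illustrative.
class VoteBudget:
    def __init__(self):
        self.upvotes_cast = 0
        self.downvotes_cast = 0

    def upvote(self):
        self.upvotes_cast += 1

    def downvote(self):
        # Permit a downvote only while the user's downvotes trail
        # their upvotes, keeping the personal ratio near 1:1.
        if self.downvotes_cast < self.upvotes_cast:
            self.downvotes_cast += 1
            return True
        return False
```

Under such a scheme a downvote costs something (an upvote already cast, in effect), so it can no longer function as cheap blackballing.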
More accurate would be to say that they wind up excluding only evidence the system views as unreliable. Whether the evidence is reliable is always the jury’s call—a point I don’t think is a quibble, because hearsay rules are designed to exclude certain unreliable evidence: that which has particular potential to confuse the jury.
From a rationalist perspective, then, you need to consider whether multiple levels of hearsay, admissible at each step, tend to confuse the jury or whether, on the other hand, the jury can competently evaluate the noisiness of the evidence’s transmission. I don’t think multiple levels of admissible hearsay have much credibility with jurors; I think the transmission chain is readily subject to effective attack by the defense. (Every child has played the telephone game.) But here is where considering biases would have been fruitful (and necessary to your thesis).
It isn’t enough to prove chains of hearsay are unreliable. Many kinds of evidence are admitted despite their unreliability: say, the testimony of a witness who’s a known habitual liar. The problem for any rule of admissibility is to weigh the risk of the jury being misled. Unless you can show the jury is unfit to discount multiple levels of hearsay—with the help of a competent adversary—the proposal is tantamount to having juries base their conclusions on less information than they would otherwise use. Since both parties are subject to the same hearsay rules, it could mean being unable to exonerate a defendant with sound evidence based on multiple levels of hearsay, merely because multiple levels of hearsay tend in general to suffer reduced reliability.