There is no such thing as absolute certainty, but there is assurance sufficient for the purposes of human life.
John Stuart Mill, On Liberty
He seems to have understood that 0 and 1 are not probabilities.
Eliezer, how is progress coming on the book on rationality? Will the body of it be the sequences here, but polished up? Do you have an ETA?
“If you can’t explain it simply, you don’t understand it well enough.” -- Albert Einstein
This relates well to my earlier frustration about the cop-out of vaguely appealing to life experience in an argument, without actually explaining anything.
Hello all,
I’ve been a longtime lurker, and tried to write up a post a while ago, only to see that I didn’t have enough karma. I figure this is the post for a newbie to present something new. I already published this particular post on my personal blog, but if the community here enjoys it enough to give it karma, I’d gladly turn it into a top-level post here, if that’s in order.
Life Experience Should Not Modify Your Opinion http://paltrypress.blogspot.com/2009/11/life-experience-should-not-modify-your.html
When I’m debating some controversial topic with someone older than I am, even if I can thoroughly demolish their argument, I am sometimes met with a troubling claim, that perhaps as I grow older, my opinions will change, or that I’ll come around on the topic. Implicit in this claim is the assumption that my opinion is based primarily on nothing more than my perception from personal experience.
When my cornered opponent makes this claim, it’s a last resort, and it’s unwarranted condescension, because it reveals how wrong their entire approach is. Just by making the claim, they demonstrate that they believe all opinions, even their own, are based primarily on an accumulation of personal experiences. Their assumption reveals that they are not Bayesian, and that they intuit that no one is. And since they are not Bayesian, they have no authority that warrants such condescension.
I intentionally avoid presenting personal anecdotes cobbled together as evidence, because I know that projecting my own experience onto a situation to explain it is no evidence at all. I know that I suffer from all sorts of cognitive biases that obstruct my understanding of the truth. As such, my inclination is to rely on academic consensus. If I explain this explicitly to my opponent, they might dismiss academics as unreliable and irrelevant, hopelessly stuck in the ivory tower of academia.
Dismiss academics at your own peril. Sometimes there are very good reasons for dismissing academic consensus. I concede that most academics aren’t Bayesian because academia is an elaborate credentialing and status-signaling mechanism. Furthermore, academics have often been wrong. The Sokal affair illustrates that entire fields can exist completely without merit. That academic consensus can easily be wrong should be intuitively obvious to an atheist: for most of history, religious leaders were considered the experts, the most learned and smartest members of society. Still, it would be a fallacious inversion of an argument from authority to dismiss academic consensus simply because it is academic consensus.
For all of academia’s flaws, the process of peer-reviewed scientific inquiry, informed by logic, statistics, and regression analysis, offers a better chance at discovering truth than any other institution in history. It is noble and desirable to criticize academic theories, but only as part of intellectually honest, impartial scientific inquiry. Dismissing academic consensus out of hand is primitive, and indicates intellectual dishonesty.
I know this is an old thread, but for any people just now reading it, I thought I’d pass along this bizarre development.
Good article on the abuse of p-values: http://www.sciencenews.org/view/feature/id/57091/title/Odds_are,_its_wrong
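The article’s central point can be made concrete with a quick simulation of my own (not from the article): under a true null hypothesis, p-values are uniformly distributed, so testing many true nulls at the conventional p < 0.05 cutoff manufactures “significant” results about 5% of the time.

```python
import random

# Under a true null hypothesis, p-values are uniform on [0, 1], so
# testing at the p < 0.05 threshold yields "significant" results
# roughly 5% of the time even when there is nothing to find.
random.seed(0)

n_tests = 10_000
# Simulate a p-value for each of n_tests true null hypotheses.
p_values = [random.random() for _ in range(n_tests)]

false_positives = sum(1 for p in p_values if p < 0.05)
rate = false_positives / n_tests
print(f"{false_positives} spurious 'discoveries' out of {n_tests} tests ({rate:.1%})")
```

Run enough independent tests and “odds are, it’s wrong” indeed.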
This sounds very Foucauldian, almost straight out of Discipline and Punish.
I’m not Seth Godin, by the way.
So I finally picked up a copy of Probability Theory: The Logic of Science, by E.T. Jaynes. It’s pretty intimidating and technical, but I was surprised by how much prose there is, which makes it quite palatable. We should recommend it more often here on Less Wrong.
Noted. In another draft I’ll change this to make the point of how easy it is for high-status academics to deal in gibberish. Maybe they didn’t have much status outside their group of peers, but within it, they did.
What the Social Text Affair Does and Does Not Prove
http://www.physics.nyu.edu/faculty/sokal/noretta.html
“From the mere fact of publication of my parody I think that not much can be deduced. It doesn’t prove that the whole field of cultural studies, or cultural studies of science—much less sociology of science—is nonsense. Nor does it prove that the intellectual standards in these fields are generally lax. (This might be the case, but it would have to be established on other grounds.) It proves only that the editors of one rather marginal journal were derelict in their intellectual duty, by publishing an article on quantum physics that they admit they could not understand, without bothering to get an opinion from anyone knowledgeable in quantum physics, solely because it came from ‘a conveniently credentialed ally’ (as Social Text co-editor Bruce Robbins later candidly admitted[12]), flattered the editors’ ideological preconceptions, and attacked their enemies.”[13]
I think the overjustification effect might be at play.
The overjustification effect occurs when an external incentive such as money or prizes decreases a person’s intrinsic motivation to perform a task. According to self-perception theory, people pay more attention to the incentive, and less attention to the enjoyment and satisfaction that they receive from performing the activity. The overall effect is a shift in motivation to extrinsic factors and the undermining of pre-existing intrinsic motivation.
In this case, the reward is status. It’s important to note that the person must anticipate the reward, though. People might not explicitly seek status, but subconsciously seeking it might provide enough anticipation to create the effect.
I am taking Eliezer’s definition of “stupidity” to mean increased incompetence in the field wherein the person gained status. In their field, we would expect high competence; decreased competence would come about from diminished interest in that field, via the overjustification effect.
I recommend a related essay by Hayek, “Competition as a Discovery Procedure.”
I’m a little late to this game, but I spent over an hour, maybe two, comparing the information from the two websites. I had known nothing previously about the case.
My answers: 1: 0.05; 2: 0.05; 3: 0.95; 4: 0.65
So, I feel pretty vindicated. This was a great complement to Kaj Sotala’s post on Bayesianism. With his post in mind, as I was considering this case, I assigned probabilities to the existence of an orgy gone wrong as against one rape and murder from one person. There is strong Bayesian evidence for Guédé’s guilt, but it’s exceedingly weak for Sollecito and Knox. This has really helped the idea of Bayesianism “click” for me.
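For anyone who wants to see the mechanics, here is a toy version of the kind of update I mean. The hypotheses are simplified and every number is invented for illustration; these are not komponisto’s figures.

```python
# Toy Bayesian update with made-up numbers (illustrative only).
# H1: a single attacker acted alone.
# H2: a multi-person "orgy gone wrong" involving the other suspects.
# E:  the physical evidence pattern (abundant traces from one person,
#     essentially none from the others).
prior_h1, prior_h2 = 0.5, 0.5   # start agnostic between the hypotheses
p_e_given_h1 = 0.90             # evidence strongly expected under H1
p_e_given_h2 = 0.05             # evidence very surprising under H2

# Bayes' theorem: P(H | E) = P(H) * P(E | H) / P(E)
p_e = prior_h1 * p_e_given_h1 + prior_h2 * p_e_given_h2
post_h1 = prior_h1 * p_e_given_h1 / p_e
post_h2 = prior_h2 * p_e_given_h2 / p_e
print(f"P(single attacker | evidence)    = {post_h1:.3f}")
print(f"P(multiple attackers | evidence) = {post_h2:.3f}")
```

Even starting from even odds, the likelihood ratio does almost all the work, which is the “click” I was describing.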
komponisto, your reasoning is wonderfully thorough and sound. I can confirm that I found myself deliberately “shutting the voice out” concerning the activity with the mop. You have a great explanation, overall. These two posts of yours are in the running for my all-time favorites.
I know you two are joking, but I will take this opportunity to point out that I really do appreciate the culture of humility on Less Wrong. It’s Yudkowsky’s eighth virtue. I am aware of my profound ignorance as a mere 22-year-old undergrad.
Alternatively, is this a plea for the Skinnerian, egalitarian abolition of honorifics, as from Walden Two?
Prof. Hanson,
I’m 22, and haven’t encountered an opportunity where I thought to use this claim. There are probably instances where it would have been factually appropriate for me to do so, but I’m not inclined to make this point, because it seems to me like a cop-out.
Maybe I would have difficulty in explaining something highly technical or specialized to someone with no background, but crying “life experience” doesn’t seem to be the proper response. It’s far too vague. I would find it more appropriate to direct my debate partner to the specialized or technical material they haven’t studied to understand why my position might be different.
The problem is that nebulously appealing to “life experience” doesn’t even specify how the debate partner is uninformed. It’s as if the person with more “life experience” is on such a higher level of understanding that they can’t even communicate how their additional information informs their position. Like Silas Barta, I’m skeptical that even the most informed and educated people would ever be simply unable to explain the basic ideas of even the most difficult material. When this claim is made without any attempt to explain how training or experience leads to a different conclusion, I suspect that more often than not, the differing position isn’t actually grounded in specialized knowledge; the line of argumentation has simply run out of steam.
In critiquing postmodernism, Noam Chomsky wrote, “True, there are lots of other things I don’t understand: the articles in the current issues of math and physics journals, for example. But there is a difference. In the latter case, I know how to get to understand them, and have done so, in cases of particular interest to me; and I also know that people in these fields can explain the contents to me at my level, so that I can gain what (partial) understanding I may want.”
So, it turns out that power affects what kind of moral reasoning a person uses.
Yes! Both you and Kaj Sotala seem right on the money here. Deontology falls flat. A friend once observed to me that consequentialism is a more challenging stand to take because one needs to know more about any particular claim to defend an opinion about it.
I know it’s been discussed here on Less Wrong, but Jonathan Haidt’s research is really great, and relevant to this discussion. Professor Haidt’s work has validated David Hume’s assertions that we humans do not reason to our moral conclusions. Instead, we intuit about the morality of an action, and then provide shoddy reasoning as justification one way or the other.
You really should read Taleb; you can probably start with The Black Swan. His terms for these are “Mediocristan,” domains that are described by Gaussian distributions, and “Extremistan,” domains that are described by power laws.
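To make the distinction concrete, here is a small simulation of my own (the parameters are arbitrary illustrative choices, not Taleb’s): in a Gaussian domain no single observation moves the total, while under a power law one observation can dominate it.

```python
import random

# "Mediocristan": Gaussian domains (e.g. human height), where the
# largest observation is a negligible share of the total.
# "Extremistan": power-law domains (e.g. wealth), where one
# observation can account for a large share of the total.
# Mean height 170 cm, sd 10; Pareto alpha = 1.1 -- illustrative values.
random.seed(1)
n = 100_000

heights = [random.gauss(170, 10) for _ in range(n)]     # Mediocristan
wealth = [random.paretovariate(1.1) for _ in range(n)]  # Extremistan

share_tallest = max(heights) / sum(heights)
share_richest = max(wealth) / sum(wealth)
print(f"Tallest person's share of total height: {share_tallest:.6f}")
print(f"Richest person's share of total wealth: {share_richest:.3f}")
```

The tallest person is a rounding error in the total; the richest person is not, which is why sample averages mislead so badly in Extremistan.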
For those interested, Netflix has a new documentary out about the case: https://youtu.be/9r8LG_lCbac
“As one shocked 42-year-old manager exclaimed in the middle of a self-reflective career planning exercise, ‘Oh, no! I just realized I let a 20-year-old choose my wife and my career!’”
-- Douglas T. Hall, Protean Careers of the 21st Century