My concern is less with the degree to which I wear the rationality mantle relative to others (which is low to the point of insignificance, though often depressing) and more with ensuring that the process I use to approach rationality is the best one available. To that end, I’m finding that lurking on LessWrong is a pretty effective process test, particularly since I tend to come back to articles I’ve previously read to see what further understanding I can extract in the light of previous articles. SCORING such a test is a more squiffy concept, though correlation of my (defeasibly) rational conclusions to the evidence of reality seems an effective measure… though I’ve now run into a concern that my own self-assessment of confirmation bias elimination may not be satisfactorily objective. The obvious solution to THAT problem would be to start publishing process/conclusion articles to LessWrong. I think I may have to start doing so.
SeanMCoincon
The most useful skill I’ve developed has been in meeting immaturity (both in rationale and delivery) with maturity (ditto). I work in a heavily right-wing workplace that refuses to allow anything but Fox News on anything resembling a television. This is my training environment. Even in the presence of highly irrational and emotionally charged convictions, I’ve found that the ability to maintain an uninvested calm and slowly help my partner to make their argument better (through gradual consilience with reality) can result in ACTUALLY CHANGED MINDS. The first step seems, invariably, to point out those counterfactuals that back them away from absolute confidence; when presented as potential improvements (“You’d probably see greater success at decreasing the actual number of abortions if you could find ways to enable people to only purposefully conceive a child.”) even a position they once reviled can seem outright tasteful. The key appears to be presentation of oneself as a potential ally, so as to avoid the “I must engage on all fronts” mentality that prevents meaningful engagement at all.
“What on Earth makes you think monkeys can change into humans?”
It seems—based upon personal experience—that the difference between the rational and the irrational is that the rational at least attempts to present a cogent answer to such questions in a way that actually answers the question; the irrational just gets mad at you for asking.
Racism and sexism are pretty good candidates as well. Prejudice in general would be even more inclusive; one could even consider religion to be a special case of prejudice against reality.
Agreed on all points; I’ve found it interesting in my conversations with anti-evolutionists that even doing the work of dispelling the straw-man arguments—“monkeys turning into humans”, “why are there still monkeys”, etc.—doesn’t seem to change even their conception of the evolution argument; they STILL think all the science and reason in the world can be summarized as “monkeys turned into humans”. Their degree of investment in opposing that argument may be too great for additional rationality to crack. When/if that becomes apparent, I’ve found the more-effective-yet-less-satisfying counter to be something along the lines of: “America grew out of England, yet England’s still a country.” Not the most accurate metaphor, granted… but it seems to back their confidence level down from outright absoluteness.
Plus, it’s kinda fun to see their faces turn red. Whoever coined “Sticks and stones can break my bones, but words can never hurt me.” must not have been a rationalist amongst children.
It may be useful to the cause of avoiding one’s own potential happy death spirals (HDSs) to actively attempt to subvert the “my ideas are my children” trope. Perceived ownership of an idea or mental tool may be a prime contributor to HDS thinkery, giving rise to the kind of protectiveness we humans tend to provide our offspring whether or not they deserve it. The fact that our child started the fight with another child doesn’t prevent us from stepping in on OUR child’s side; the fact that our child is demonstrably average doesn’t prevent us from telling complete strangers how intelligent, sweet, talented, beautiful, etc. OUR child is, was, and shall always be, forever and ever, amen.
So too it seems to be with the ideas we feel we own, particularly the ones we ourselves have generated. This impulse is entirely understandable within the context of a species whose primary survival trait is intelligence, with opposable thumbs taking a distant second. Yet to feel ownership of an idea to the point that we feel protective of it seems rationally contraindicated: an idea—anyone’s—should only be valued insofar as it can stand on its own in the uncaring realm of reality… in a making beliefs pay rent kind of way.
So perhaps a good solution to the “How?” of resisting HDSs would be to try to view ideas and mental tools as being both fundamentally borrowed and potentially disposable upon breaking. It’s a nice way of avoiding even the temptation to indulge in ad hominem, as well.
This immediately brings to mind the old adage about it being better to be Socrates dissatisfied than a pig satisfied. I’d imagine, from the pig’s point of view, that the loftiest height of piggy happiness was not terribly dissimilar from the baseline level of piggy contentment, so equating “happiness” to “contentment” would not be an inexcusable breach of piggy logic. Indeed, we humans pretty much have to infer this state of affairs when considering animal wellbeing (“appearance of sociobiological contentment approximates happiness”), as we don’t yet possess any means of engaging animals in philosophical conversation on the subject.
Yet it seems that those who would have us believe that “blissful ignorance” is a good thing as an absolute are confusing contentment with happiness unnecessarily. Happiness registers more as a positive, aspirational value within the context of the human experience range; contentment seems more a negative, absence-of-dissatisfaction value that indicates only that things aren’t going poorly. Doublethink and willful ignorance do not seem to be able to positively provide qualia that contribute to happiness; they can only obscure knowledge of things that are actually going poorly, thus creating a false sense of contentment.
That’s my general counterpoint whenever people speak positively of the “happiness” created by things like religion and opiates. Nothing is being added; your knowledge of reality is being obscured. It’s difficult to see how that approach could be considered a mature option.
“I know I can never be perfect, but that’s certainly not going to stop me from trying.” --Sean Coincon
:D
“And what would be the analogy to collapsing to form a Bose-Einstein condensate?”
...All of them moving into the same compound and acquiring an arsenal seems about right, particularly when you consider the increased chance of violent explosion.
Ha ha, this comment shows up on the Recent Comments feed at right as:
“Racism and sexism are pretty good
by SeanMCoincon on The uniquely awful example of theism | 0 points”
THAT certainly couldn’t be misconstrued against me in any way! I think I’ll run for Congress.
Many big-L Libertarians I’ve met—along with those who consider themselves to be trench-fighters for Ayn Rand-ian Objectivism—seem to want to conflate “selfishness” with “enlightened self-interest” for the positive connotations of the latter… yet their rationale for various big-L proposals (such as “let’s turn over national security to corporations, who will certainly never abuse the power to force decisions upon people”) tends to be of the extremely rosy, happy death spiral, declare-anything-that-doesn’t-fit-an-“externality” variety. That seems somewhat removed from any meaning of “enlightened” that approaches sensibility; and that’s coming from a mild, little-l, “A free society means you need a reason to make things illegal” libertarian framing.
Ultimately, I can understand the “It’s So Simple! (tm)” appeal of claiming that selfishness itself is good as an absolute, but delivering that advice only appears to hold true—at either a societal OR individual level—if the scoreboard is measuring relative altruistic effects. A benefit to oneself that derives from (having helped propagate) a mutually self-interested society only qualifies as a benefit relative to 1) a society of self-sacrificial lemmings (which is a bit of a straw man); or 2) no society at all, where there really ARE no externalities and self-interest can be truly self-referent. …I feel I may not be explaining this clearly, so I’ll simply request suggestions and wrap up this comment.
It seems that, instead of trumpeting “selfishness!” as a counterintuitive moral panacea, all that’s really needed for altruism to symbiotically cohabitate with “selfishness” is to use the phrase “rational self-regard” instead, since it doesn’t require you to engage in Ethical-Egoism-esque displays of unnecessary dickishness towards one’s fellow man. …And I feel I may have to try to write an article on that subject if one does not yet exist.
“I wish I lived in an era where I could just tell my readers they have to thoroughly research something, without giving insult.”
Is that not what this entire site is accomplishing?
“Could I regenerate this knowledge if it were somehow deleted from my mind?”
Epistemologically, that’s my biggest problem with religion-as-morality, along with using anything else that qualifies as “fiction” as a primary source of philosophy. One of my early heuristic tests to determine if a given religious individual is within reach of reason is to ask them how they think they’d be able to recreate their religion if they’d never received education/indoctrination in that religion (makes a nice lead-in to “do people who’ve never heard of your religion go to hell?” as well). The possibles will at least TRY to imply that gods are directly inferable from reality (though Intelligent Design is not a positive step, at least it shows they think reality is real); the lost causes give a supernatural solution (“Insert-God-Here wouldn’t allow that to happen! Or if He did, He’d just make more holy books!”).
If such a person’s justification for morality is subjective and they just don’t care that no part of it is even conceivably objective… what does that say for the relationship of any of their moral conclusions to reality?
“...Although, do please make the check out to ‘Cash’.”
My favorite part, at which there was actual LOLing:
“•[Imaginary Model Alicorn] acquired a certain level of status (respect for her mind-hacking skills and the approval that comes with having an approved-of “sensible” romantic orientation) within a relevant subculture. She got to write this post to claim said status publicly, and accumulate delicious karma. And she got to make this meta bullet point.”
“… people seem to get a tremendous emotional kick out of not knowing something.” Could be simple schadenfreude: asserting that “no one” knows a thing, even those demonstrably more intelligent than yourself, has the emotional effect of knocking them down into the same mud in which you already believe yourself to be mired. Not productive, but good solace for those unwilling to be productive.
I find that the realization of consilience can be “as” good as original discovery; for me, the discovery that an idea about the world—even one posited centuries ago—comprehensively makes sense in the context of everything else known about reality is, itself, an original discovery.
It’s just one that’s unique to you or me.
“No, I did not go through the traditional apprenticeship. But when I look back, and see what Eliezer18 did wrong, I see plenty of modern scientists making the same mistakes. I cannot detect any sign that they were better warned than myself.”
It seems like a viable means of propagating education about such mistakes—or the mistakes of aspiring rationalists in general—would be to set up (relatively) straightforward scientific experiments that purposefully make a given mistake and then allow students to perform the experiment unsuccessfully. The postmortem for each class/lab would review what went wrong, what wrong looked like, why things went wrong, and so forth. Sort of a “no, seriously, learn from the past” symposium.
Do any of you know of any such existing educational structures in the Bay Area?
Somewhere out in mind design space, there’s a mind with any possible prior; but that doesn’t mean that you’ll say, “All priors are created equal.”
The corrected phrase may be: “All unentangled priors are created equal.”
Oddly, this problem seems (to my philosopher/engineer mind) to have an exceedingly non-complex solution, and it depends not upon the chooser but upon Omega.
Here’s the payout schema assumed by the two-boxer, for reference:
1) Both boxes predicted, both boxes picked: +$1,000
2) Both boxes predicted, only B picked: $0
3) Only B predicted, both boxes picked: +$1,001,000
4) Only B predicted, only B picked: +$1,000,000
Omega, being an unknowable superintelligence, qualifies as a force of nature from our current level of human understanding. Since Omega’s ways are inscrutable, we can only evaluate Omega based upon what we know of him so far: he’s 100 for 100 on predicting the predilections of people. While I’d prefer to have a much larger success base before drawing inference, it seems that we can establish a defeasible Law of Omega: whatever prediction Omega has made is virtually certain to be correct.
So while the two-boxer would hold that choosing both boxes would give them either $1,000 or $1,001,000, this is clearly IRRATIONAL: the (defeasible) Law of Omega outright eliminates outcomes 2 and 3 above, which means that (until such time as new data forces a revision of the Law of Omega) the two-boxer’s anticipated payoff of $1,001,000 DOES NOT EXIST. The only choice is between outcome 1 (two-boxer gets $1,000) and outcome 4 (one-boxer gets $1,000,000). At that point, option 4 is the dominant strategy… AND the rational thing to do.
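The elimination argument above can be sketched numerically. Here is a minimal Python sketch; the function name, the dollar amounts, and the explicit accuracy parameter `p` are my own illustrative additions (the comment argues p is effectively 1, the “Law of Omega”):

```python
def expected_payoff(choice: str, p: float) -> float:
    """Expected dollars for 'one-box' or 'two-box', given a predictor
    whose predictions are correct with probability p."""
    if choice == "one-box":
        # Predicted correctly (prob p): box B holds $1,000,000.
        # Predicted wrongly (prob 1 - p): box B is empty.
        return p * 1_000_000 + (1 - p) * 0
    elif choice == "two-box":
        # Predicted correctly (prob p): box B is empty; keep only the $1,000.
        # Predicted wrongly (prob 1 - p): box B is full; get $1,001,000.
        return p * 1_000 + (1 - p) * 1_001_000
    raise ValueError(f"unknown choice: {choice}")

# With a perfect predictor (p = 1), outcomes 2 and 3 vanish entirely
# and only outcomes 1 and 4 remain:
print(expected_payoff("two-box", 1.0))  # 1000.0
print(expected_payoff("one-box", 1.0))  # 1000000.0
```

Solving p · 1,000,000 = p · 1,000 + (1 − p) · 1,001,000 shows that one-boxing wins for any accuracy above roughly 50.05%, so the conclusion doesn’t even require Omega to be perfect, only substantially better than chance.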
Does that make sense? Or am I placing unfounded faith in Omega?