Infatuation would probably be a better word to describe the attitude of the character Aizen’s referring to in that quote, although the subtitle says “admiration.”
Nic_Smith
Ditto that—there have been several posts and comments where people on the site have wanted to conduct a poll, but have been unable to do so in a graceful and straightforward manner. The “you be the jury—Amanda Knox” thread, this comment by E.Y., and Help Roko Become a Better Rationalist are all examples of this.
Since we already have a wiki running, note that votes take place on wikis all the time; such as the current round of voting on the Wiktionary logo. Maybe the standard policy could be to set up a poll on the wiki and point to it on the main site, and make templates and scripts available to make this easier for everyone. It’d also make the wiki more visible.
Would “bungee cords firmly attached to body, cutie of the interesting sex looking on anxiously” count as reason or as misguided social signaling?
Another possibility for a wall: If this is built, maybe it should have a robots.txt telling crawlers not to index it, so it doesn’t show up in search results, and the server could bounce links from all but a handful of referring sites (LessWrong proper, the wiki, OvercomingBias, etc.). That’ll make it less likely that someone will wander in because they’re interested in getting kitten juice out of their upholstery, become upset at our godless utility-maximizing contents, and make a big fuss about it. To make sure people who are interested in the main contents of the LW site are aware of it, we can have a (collapsible) banner advertising the new site for registered users, or show the same to people whose first visit was x-many days ago (determined via cookie).
So, basically, it could be an “open secret,” invisible to the majority of the Internet.
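As a minimal sketch, the search-invisibility half of this would just be a robots.txt at the site root (note this only asks well-behaved crawlers to stay out; bouncing links by referrer would need separate server-side configuration, e.g. a referrer check in the web server):

```
# robots.txt — placed at the site root
# Asks all well-behaved crawlers to skip the entire site,
# so its pages don't appear in search results.
User-agent: *
Disallow: /
```

The referrer whitelist is the weaker half of the scheme, since referrer headers can be stripped or spoofed, but combined with the crawl block it’s probably enough to keep casual traffic out.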
The second you talked about etching knowledge for the future, I immediately thought of The Long Now Foundation’s Rosetta Project—which intends to etch lots of linguistic information onto small metal discs, with lots of copies floating around for redundancy. They’re apparently having production problems, though. I believe the Long Now book actually muses about how a “civilization start up guide” might be something handy to put in a similar format, but don’t have it around at the moment.
Out of curiosity, why uranium glass?
And going off on a tangent, does the entire Long Now Foundation and its projects remind anyone else of Hanson’s “Dreamtime” concept?
I stopped taking melatonin because I’d wake up extremely groggy. Although I also have a (bad?) habit of covering my eyes with a pillow or the corner of a blanket AND I have a light-blocking shade in my room, so I was basically waking up in complete darkness. One thing I considered to counteract this was plugging a lamp into a “vacation” timer and having it fire up around 7 or 8 a.m. I think I’ll stop at the hardware store and give the whole setup, melatonin and all, another try after reading this post.
Mattalast: I learned the truth about this world.
Hamyutz: Yeah? How does that make you feel?
Mattalast: It’s just as I thought. The world is pointless and irrational.
Hamyutz: That’s great! Your prediction was right on the money.
-The Book of Bantorra, Episode 12
“It is therefore highly illogical to speak of ‘verifying’ (3.8 [the Bernoulli urn equation]) by performing experiments with the urn; that would be like trying to verify a boy’s love for his dog by performing experiments on the dog.”—E.T. Jaynes, Probability Theory
“Psychologists tell us everyone automatically gravitates toward that which is pleasurable and pulls away from that which is painful. For many people, thinking is painful.”—Leil Lowndes, How to Talk to Anyone
(Given the context, perhaps a bit of a Dark Arts view.)
Funny quote; what’s the connection to rationality? The character in question not being in touch with reality? The recent melatonin thread? Something else?
Thanks for the suggestion. Actually, for the last couple of days, I’ve been taking about half the dose I tried last year, and I’ve had the shade open a bit—the little bit of light it lets in at night doesn’t seem to bug me as much as it used to. It seems to be working in terms of moving the time I wake up a bit each day, and not so much grogginess. I know Less Wrong isn’t Erowid, but I think my experience trying melatonin sublingually, which is something that was suggested in one of the other comments, might be interesting—IT BURNS! And now the underside of my tongue itches, which is not even something I knew was possible before. As a guess, this is my fault, as a quick search shows there are actually lozenges that are meant to be taken this way, instead of the usual swallowed tablets.
My apologies in advance for rambling:
To begin, the subject reminds me of a bumper sticker I occasionally see on a car around here: “Militant Agnostic: I Don’t Know, And You Don’t Either!”* Though there are plenty of examples of irrational religious beliefs leading to bad results, nonetheless I am not convinced that rationality is most useful when applied to (against?) religion. Just off the top of my head, applying it to politics directly (per Caplan’s Myth of the Rational Voter), or even something as mundane as climate (one way or the other), would yield greater dividends, as, absent a religious war among the G-8 (unlikely), improved rationality in these areas should help preserve and improve economic growth, which in turn should fuel funding and legal friendliness toward anti-aging research (including cryonics). It’s boring but true—we can worry about religion later.
How to sort people by level of rationality has been on my mind quite a bit lately, because LW has previously discussed the power of rationalist organizations. We probably haven’t identified any way to sort people into various levels of rationality, “at a glance”, without dramatic and extensive testing, if only because such a tool would be immensely powerful. (IIRC, opinions differ.) The question that’s vexing me is: how do you approach a suspected rationalist or semi-rationalist and try to recruit them to a random cause? I’ve so far thought of “have a cause that has a wide appeal,” but, not having buckets of money, am somewhat at a loss as to what this might be. If rationality really is the art of winning, and if a rationalist group ought to do better at achieving its goals, it should be possible to test such ways of rationality-sorting by making groups out of suspected rationalists and having them work on some goal. “Would you like to join my guild in World of Warcraft?” doesn’t seem like it’s going to cut it. Going back to the original topic at hand, if you think that theism or atheism is such a great indicator, why not use it to take over the world? (Well, EY does have a group of about 80% atheists here, so maybe that’s what he DID).
This brings me to my own religious beliefs. Strangely enough, I moved from Catholic to Deist after reading most of the Old Testament (I skipped Song of Solomon and some of the parts where they were going through lineage or inventory or whatever). On a meta-level, although I didn’t realize it at the time, that should not be a possible effect of a substantial portion of the “word of God”. OTOH, I am somewhat surprised that no one else seems to have brought up the concept of the free-floating belief.
In turn, this brings me to what’s already been brought up in some other comments, but I think needs more emphasis: there are different degrees of irrationality in religion. Suppose that God exists and wants to get a message to you. Would it make sense to go through layers and layers of generations and bureaucracy, knowing, as we do, that humans are corrupt and/or biased, and subject to making mistakes in any case? The probability that the message would arrive as intended is low. And we also see conflicting claims of direct divine revelation even in the modern world. This seems, more or less, like Paine’s suggestion that religious claims are “hearsay.” I would very cautiously suggest that the more “applied hearsay” a religion has, the less rational it is.
*Looking it up online, the bumper sticker actually seems to be from a [political] progressive website. Leaving aside modern progressives, I just so happen to be reading The Cult of the Presidency, which depicts early 1900s Progressives as utterly insane, mostly due to religious reasons, believing that they had been appointed by God to… make people better… somehow… and cause wars, both literal and figurative. The book is published by Cato, take that as you may.
In case anyone misinterprets that last sidenote as a subtle jab: the book also says that many, not all, of these people switched sides roundabout (IIRC) the 50s through 70s, so no, it isn’t.
I suppose my overly economical view offended. Sorry.
I would prefer a world where such conflicts and suffering did not exist. However, it still does not follow that this is where the most effort should be expended. You are talking about dramatically changing the religious beliefs of billions over a few decades. I’ve suggested that tweaking the political beliefs of some hundreds of millions, already somewhat educated, roughly over the same time period or perhaps a bit longer, may be more doable.
If you have some argument for why anti-aging research will help people more in the long term, great, let’s hear it.
Ok: people have value—human capital, if necessary—that compounds with time: knowledge, social ties, personal organization, etc. Currently, this is greatly offset by the physical and mental decline of aging. If we could undo and prevent that decline, people would have the opportunity to be unimaginably productive. The problems that you’ve mentioned are difficult now, but they’d be easier after someone spent a second lifetime dedicated solely to working on them. Furthermore, the management of physical and financial capital across great periods of time is limited—there isn’t anyone that can realistically oversee 300+ year projects and make sure they turn out right. All of this is of value not only to the individual whose life is extended, but to others as well. Admittedly, cryonics doesn’t fall into this story perfectly, although a political environment that’s better for anti-aging in general should also be better for cryonics.
I will also confess that I don’t want to die. You shouldn’t either.
If you can’t feel secure—and teach your children to feel secure—about 1-in-610,000 nightmare scenarios—the problem isn’t the world. It’s you.
-- Bryan Caplan
Shamisen deserves an honorable mention. Although he only has one speech, he’s a good enough philosopher that upon being introduced he manages to sidetrack the brigade members into a debate over the nature of conversation and away from the fact that, you know, _he’s a talking cat_.—TV Tropes, “The Philosopher”
[Connections to rationality: Focus, taking action, and conversation style.]
I just read Outliers and I’m curious—is there anything that would have taken 10000 hours in the EEA that would support Gladwell’s “rule”? Is there anything else in neurology/our understanding of the brain that would make the idea that this is the amount of practice that’s needed to succeed in something make sense?
Oh, by no means did I want to suggest that Gladwell has a forte in evolutionary psychology; if he does, there’s nothing to indicate it in what I’ve read. It’s clear that he glosses over many of the details in his work, perhaps dangerously so. And the entire point of Outliers is that social environment is important to success; not exactly an earth-shattering insight, there’s a negative Times review that’s spot on.
That said, Gladwell says he originally got the idea for 10000 hours from Ericsson and Levitin. At worst, at this point, I think it’s somewhat plausible. I still have a lot more searching to do on the subject, but I am interested in what evolutionary psychology might say about the idea—alas, I’m also not an evolutionary psychologist, so I don’t know that either.
Edit: Of course, what I’m really interested in is “Is the idea that it takes 10000 hours to master a skill set true in enough circumstances to make it a useful guideline?” I’m not interested in the viewpoint of evolutionary psychologists on skill acquisition per se.
“Admiration is the state furthest from understanding.”—Sosuke Aizen, Bleach