Perhaps the notion of an ‘art of rationality’ is completely misguided. Why are we relying on the skills of individual people who evolved to be irrational when systems can be built for the purpose of giving rational answers? Why walk to the answer when you can drive?
Tom_Talbot
I loved Sherlock Holmes stories from about the age of seven, and liked Jonathan Creek when I was a teenager. These days I like House. The idea of super-rationalists solving problems no normal human can solve is fun and, I guess, vaguely inspiring. It’s also entertaining to try to guess the solution before the end, and criticise that solution when it comes.
It seems unlikely that there are going to be very many examples of fiction containing characters using an interesting technique of rationality since—as far as I know—the idea of a real-life “rationality technique” analogous to a martial art technique is original to you, Eliezer. An author might write a story about some group or individual achieving great things and tell us that this group or person studies rationality, but they won’t be able to describe rationality techniques in detail because the concept of rationality techniques didn’t exist before the establishment of Overcoming Bias. All they’re going to be able to do is talk vaguely about monasteries and Grand Master Rationalists. We ought to be careful about interpreting stories to suit our agenda because it would be so easy to fall into the trap of generalising from fictional evidence. Perhaps learning rationality in a monastery under the tutelage of a Grand Master is a terrible way to learn to Think Better, but if we’ve spent all our time discussing books that contain this trope, we might have trouble rejecting it as a bad idea.
(Stiegler has another novel called Earthweb which is about using prediction markets to defend the Earth from invading aliens, which was my introduction to the concept of prediction markets.)
Would it be rational to use prediction markets to defend the Earth from aliens? Using Rationality Tools in an irrational way turns them into Irrationality Tools. The writer is just presenting an idea; he has no way of knowing whether that technique would work in that scenario, and it might even make the situation worse (does that make it Irrationalist Fiction?).
This seems like the right thread to add some info about zen meditation (“zazen”) for those who are interested in trying it. These are some pages from an American Zen master’s website: how to sit zazen and stretches to get to the lotus position.
What I find interesting about zazen is that the emphasis is entirely on posture, with nothing that the practitioner is supposed to think or do, and this is said to have a “balancing” effect on the mind. Having tried it for about a week I can say that it does seem to induce a state of somewhat relaxed alertness, but that the effect varies widely depending on how you were feeling before you did the zazen. Also, it’s extremely difficult to maintain motivation to do it every day as recommended, because of the boredom and discomfort of sitting in lotus, staring at the wall.
As to whether this has any applications to rationality, I’m unsure. According to practitioners it may help in avoiding being overcome by emotion, and in increasing concentration, but these claims may be a dishonest attempt at proselytisation by Buddhists (“join our religion and gain these benefits”) and I’m having trouble tracking down any satisfactory references. If anyone else has any experience with this I’d be interested to hear about it.
Or start a LessWrong group on Facebook.
Re: incremental implementability—if we ever do organise LessWrong meetups, we should organise rationalist book clubs. How many people here have actually read Judgment under Uncertainty? I confess I never got around to it, though I meant to, but knowing fellow readers were working through it might motivate me.
And another thing, when are we going to get a LessWrong wiki? The glut of information here and on OB is unmanageable and we ought to force some kind of order on it—a rationalist curriculum or cheat sheet or something. Having “previously in series” at the top of new posts leads to an impenetrable expanding tree of long blog posts, discouraging new members and confusing lazy and forgetful individuals such as myself.
Have you seen this paper? Heilman, Nadeau, and Beversdorf, “Creative Innovation: Possible Brain Mechanisms”, Neurocase (2003).
There’s a real kicker in the abstract:
“The observation that [creative innovation] occurs during levels of low arousal and that many people with depression are creative suggests that alterations of neurotransmitters such as norepinephrine might be important in [creative innovation]. High levels of norepinephrine, produced by high rates of locus coeruleus firing, restrict the breadth of concept representations and increase the signal to noise ratio, but low levels of norepinephrine shift the brain toward intrinsic neuronal activation with an increase in the size of distributed concept representations and co-activation across modular networks.”
Speculative, of course. But we like speculative. Suggested exercise: close the curtains, put on some melancholy music, think grim thoughts, then have a go at a hard problem and see if it’s any easier.
Edit: a hard problem requiring creativity, that is.
Hah! When I read the top post I immediately thought of my own everyday struggle with irrationality along the lines of, “I want to get fit and live longer. This requires rationally allotting a certain amount of time to exercise. It’s hard to get motivated to exercise, due to akrasia (laziness). I want to solve the problem of akrasia, so I’ll go to Less Wrong and see what others are saying about it.”
The point is that rationality may have no direct benefits whatsoever, but it is still useful since it helps you choose between, and stick to, behaviours that do have direct benefits.
A classic controversial example: should rationalists go to church?
Why should our emotions always rule our reason? There ought to be a rational way to deal with urges, fatigue and so on. I think the methods currently under discussion are Pjeby’s motivation techniques, cognitive behavioural therapy and possibly meditation. If these lines of inquiry bear fruit, then that should make it possible for people here to muster the willpower to do whatever it is they want to do. At that point we’ll be able to say that any Less Wrong reader who wants to lose weight or whatever and can’t, is failing to be sufficiently rational.
I agree. The wise ought to recognise when you were forced into telling a lie because you valued something more highly than your reputation, and that, in an oddly self-nullifying way, should enhance your reputation. At least among the wise.
EDIT: Gods! I just noticed the accidental similarity to Newcomb’s problem (Omega (“the wise”) is allocating reputation instead of money). I’ve been reading Yudkowsky for too long.
Can humans solve these kinds of problems? If so, how do we do it?
You could run an experiment. Dose a bunch of lab animals to determine the LD50, or run Ames tests and that sort of thing. Pork is demonstrably safe; fugu may not be.
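For the curious, here is a minimal sketch of what “determine the LD50” amounts to in practice: fit a logistic dose-response curve and read off the dose at which half the subjects die. Every number below (doses, death fractions, starting guesses) is invented for illustration, not real toxicology data.

```python
# Minimal sketch: estimate an LD50 from hypothetical dose-response data
# by fitting a logistic curve. All numbers are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def logistic(dose, ld50, slope):
    """Probability of death as a logistic function of log-dose."""
    return 1.0 / (1.0 + np.exp(-slope * (np.log(dose) - np.log(ld50))))

# Hypothetical experiment: groups of animals at each dose (mg/kg),
# with the observed fraction that died.
doses = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
death_fraction = np.array([0.0, 0.1, 0.3, 0.6, 0.9, 1.0])

params, _ = curve_fit(logistic, doses, death_fraction, p0=[8.0, 1.0])
print(f"Estimated LD50: {params[0]:.1f} mg/kg")
```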
By life in general do you mean the lives of humans in general, or just your own life, extended in time?
“Influences” is vague, but I take it you mean:
[my life can be better] produces [positive outlook] and [positive outlook] is another way of saying [life (i.e. the lives of humans in general) is good]
or: “If I believe that my life can be better, then I believe that life-of-humans-in-general is good.”
I’m not implying that you actually believe this, just that this is what you were saying “positive outlook” meant. Am I right? From this perspective a positive outlook seems like a non-sequitur, since the future quality of my life may not provide much information about the lives of other people. Not to mention the fact that some people have good lives with bright futures and some have bad, hopeless ones, so the notion of life-in-general seems meaningless. From this I conclude that I do not have a positive outlook.
We really need to have a discussion about the polite way to downvote people. The top-level comment shows the right way to moderate, with discussion about the decision to downvote, while the post above mine has been moderated badly: it seems to have undergone some drive-by moderation, with no one saying what he did wrong. One line would do: “This comment downvoted because it is vapid/nonsensical/mistaken” or something. What would be really nice would be if you, anonymous moderators, would set people straight when they made a mistake (as has been done at the top level) so that we can discuss it in public and avoid it in future. I’m not saying you should explain every downvote, but if you’re hammering someone into the negatives, at least have the guts to say why. Was the post above downvoted because it was bad, or because he agreed with the bad post of the top-level commenter? If the latter, a simple “Your post downvoted for reasons I gave above” would have sufficed.
Downvoting without explanation smacks of laziness or vindictiveness, and degrades the quality of the discussion. If you cannot be bothered to provide an explanation for your downvote, I do not think you should be moderating at all.
I have some conjectures.
1) People tend to hold beliefs for social reasons. For example, belief in theism allows membership of the theist community; the actual existence of a deity is largely irrelevant.
2) For most people, in order to maintain close social relationships it is necessary to maintain harmonious beliefs with nearby members of your social network. Changing your beliefs may harm your social ties.
3) The larger your social network, the more you have to lose by changing your beliefs.
4) Less Wrong encourages questioning and changing of beliefs.
5) On average, women have larger social networks than men.
6) Less Wrong encourages the adoption of strange and boring beliefs, largely based in maths and science.
7) Advocating strange and boring beliefs does not signal high status, rather it signals a misunderstanding of widely accepted social norms, and therefore poor social skills.
8) Much of a woman’s perceived value as a human being is tied to her ability to navigate the social world; men may be forgiven for making the occasional faux pas, but women are not. Women are therefore strongly averse to signalling poor social skills.
Some predictions:
1) Willingness to join Less Wrong is inversely proportional to the size of your social network.
2) The exceptions to this rule (Less Wrong members who have large social networks) will be members of fringe groups, where challenges to group beliefs are normal and do not lead to reductions in social status.
3) Less Wrong will never be popular among people with large, mainstream social networks, as long as it advocates self-examination and questioning of received beliefs, and promotes discussion of strange and boring beliefs. It will never be popular among women, and the women who do post here are unusual in some way.
ETA: for the sake of complete accuracy, let “fringe belief” be defined as one that is held by <0.1% of the population of the host nation.
I think the answer to your question may be no. I’ve thought on my original post some more and realised that I made a mistake in number (8): one cannot signal poor social skills, since signalling is a social skill (it serves no other purpose); a person who cannot signal optimally is a person with poor social skills.
So if a tendency towards telling the truth disrupts a person’s ability to signal optimally, then rationality and popularity must forever remain opposed, since in order to be rational you must give up your ability to signal popular, false beliefs. Even if we say that “rationalism” is only believing the truth—you can lie if you want to—your ability to signal is still disrupted, since the most effective way to signal is to sincerely believe what you’re saying.
Instrumental Rationality is a Chimera
the rational way is the way that produces predictably suitable results relative to your desired level of utility and/or investment for that result.
I apologise, but is there some simpler way for you to express this? I do not understand what it means. If you stated it in terms of real-life thoughts, actions and problems I might grasp it better.
the procedures you use will be defined by testing according to some predefined criteria, after being generated through creative and/or problem-solving processes.
Again, is there some simpler way to express this? Who generated the procedures? Was it me or someone else? How are the procedures defined by the testing criteria? Surely a procedure is a sequence of steps toward some goal, and does not have a definition.
And that, more or less, is the skillset (or at least the rough scope of such a skillset) of instrumental rationality.
I remain hopelessly unenlightened.
What are these, exactly?
Logical fallacies, Bayes’ theorem, a working knowledge of the scientific method, and knowledge of heuristics and biases.
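To make the Bayes’ theorem part concrete, here is a toy worked example with invented numbers: a 1% prior, a test that detects the hypothesis 90% of the time, and a 9% false-positive rate.

```python
# Toy application of Bayes' theorem with invented numbers: how likely
# is a hypothesis given a positive test result?
p_h = 0.01              # prior: P(hypothesis)
p_e_given_h = 0.90      # likelihood: P(evidence | hypothesis)
p_e_given_not_h = 0.09  # false-positive rate: P(evidence | not hypothesis)

# P(E) by the law of total probability.
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E)
p_h_given_e = p_e_given_h * p_h / p_e
print(f"P(hypothesis | evidence) = {p_h_given_e:.3f}")  # ~0.092
```

Note the counterintuitive result: even after a positive test, the hypothesis is still only about 9% likely, because the prior was so low. This is exactly the sort of calculation untrained intuition gets wrong.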
Less Wrong is only a blog. The only thing it can ever hope to achieve is to produce some good essays. The people who read those essays may go on to achieve things, but those only reflect on them personally, not on Less Wrong.
Don’t you have a sense that more is possible?
No, that is expressly disavowed by the site’s title. All progress is incremental; it isn’t possible to be completely right, only to be less wrong, or less often wrong.
Well alright, I was exaggerating for rhetorical purposes. But still, the point stands that “instrumental rationality” does not correspond to anything that anyone can actually do. It is a meaningless label.
This almost seems too obvious to mention in one of Robin’s threads, but I’ll go ahead anyway: success on prediction markets would seem to be an indicator of rationality and/or luck. Your degree of success in a game like HubDub may give some indication as to the accuracy of your beliefs, and so (one would hope) the effectiveness of your belief-formation process.
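One simple way to quantify “accuracy of your beliefs” outside a market is a proper scoring rule like the Brier score. A minimal sketch, with invented predictions and outcomes:

```python
# Minimal sketch: score probabilistic predictions with the Brier score
# (mean squared error between stated probability and actual outcome).
# Lower is better; constant 50% guessing scores 0.25.
# The predictions and outcomes below are invented for illustration.

def brier_score(predictions, outcomes):
    return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(predictions)

predictions = [0.9, 0.7, 0.2, 0.6]  # your stated probabilities
outcomes = [1, 1, 0, 0]             # what actually happened (1 = yes)

print(f"Brier score: {brier_score(predictions, outcomes):.3f}")  # 0.125
```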