LessWrong developer, rationalist since the Overcoming Bias days. Jargon connoisseur.
jimrandomh (Jim Babcock)
The intuitive breakthrough for me was realizing that given a proposition P and an argument A that supports or opposes P, showing that A is invalid has no effect on the truth or falsehood of P, and showing that P is true has no effect on the validity of A. This is the core of the “knowing biases can hurt you” problem, and while it’s obvious if put in formal terms, it’s counterintuitive in practice. The best way to get that to sink in, I think, is to practice demolishing bad arguments that support a conclusion you agree with.
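Put in symbols (my own rendering of the point, reading Valid(A) as “A is a sound argument”), neither inference goes through:

$$\neg\,\mathrm{Valid}(A) \;\nvdash\; \neg P \qquad\text{and}\qquad P \;\nvdash\; \mathrm{Valid}(A)$$

A bad argument for a true conclusion, and a true conclusion defended by a bad argument, are both perfectly consistent states of affairs.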
I would expect success in a prediction market to be more correlated with amount of time spent researching than with rationality. At best, rationality would be a multiplier on the benefit gained per hour of research; alternatively, it could be an upper bound on the total benefit gained from researching.
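A toy model of the two hypotheses (the function shapes and numbers here are mine, purely illustrative):

```python
# Two ways rationality could interact with research time, as a sketch:
# as a multiplier on the value of each hour, or as a cap on total value.

def benefit_as_multiplier(hours: float, rationality: float) -> float:
    return rationality * hours             # better thinking per hour

def benefit_as_cap(hours: float, rationality: float) -> float:
    return min(hours, 10.0 * rationality)  # past the cap, more hours don't help

for hours in (1, 10, 100):
    print(hours,
          benefit_as_multiplier(hours, rationality=1.5),
          benefit_as_cap(hours, rationality=1.5))
```

Under the first model, time and rationality complement each other; under the second, rationality only matters once you’ve put the hours in.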
If you could figure out what makes any particular exceptional person or group of exceptional people exceptional, and teach that to a large group of people, then those people would still be mediocre. After all, no more than 500 people at a time can ever be Fortune 500 CEOs.
Many people cannot distinguish between levels of indirection. To them, “I believe X” and “X” are the same thing, and therefore, reasons why it is beneficial to believe X are also reasons why X is true. I think this, rather than any sort of deliberate self-deception, is what you have observed.
Warfare is just such a situation, and warlords are disproportionately represented in the human gene pool. The best representation of ancestral-environment warfare I’ve seen is a real-time strategy adaptation of Risk, where players receive resources proportional to their territory, and can only gain territory by taking it from others by force. I’ve played quite a few iterations of this, and the player who appears strongest almost never wins in the end; instead, the second-most powerful player wins.
Consider three warlords A, B, and C, starting out at peace, with A and B the same strength, and C significantly stronger. If A and B go to war with each other, then C will conquer them both, so they won’t do that. If B and C go to war with each other, then A must also go to war with C, or else he’ll find himself facing C plus the conquered remnants of B, with no possible allies; and conversely, if A goes to war with C then B must also go to war with C. In other words, if all players act rationally, then the only player who can’t win is the one who starts with the most resources.
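A minimal balance-of-power simulation, under toy dynamics of my own choosing (everyone allies against the current leader, and a winning coalition splits the loser’s strength in proportion to contribution), reproduces both observations:

```python
def last_survivor(strengths):
    """Toy model: each round, all other players ally against the strongest."""
    players = dict(enumerate(strengths))
    while len(players) > 1:
        leader = max(players, key=players.get)
        coalition = [p for p in players if p != leader]
        combined = sum(players[p] for p in coalition)
        if combined <= players[leader]:
            return leader  # the leader is unstoppable and conquers everyone
        # The coalition wins and absorbs the leader's strength proportionally.
        spoils = players.pop(leader)
        for p in coalition:
            players[p] += spoils * players[p] / combined
    return next(iter(players))

print(last_survivor([3, 4, 5]))  # -> 1: the second-strongest player wins
```

With strengths (3, 4, 5), the coalition of 3 and 4 destroys the leader, the player who started second-strongest comes out on top, and from then on nobody can stop him.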
When moderating comments, the goal is not to vote good posts up and bad posts down, but to make the vote total most accurately reflect the signals of all the people who voted on it. Since voters don’t gain or lose anything by voting accurately, besides the satisfaction of knowing that their votes help the scores more accurately reflect post quality, they should always vote according to their private signal, and ignore the signals that others have given.
On the other hand, when signaling is tied together with some other choice, then information cascades can happen. The example that was given in my networks class was a case of two restaurants next to each other, where each potential patron can see how busy each restaurant is. In that case, people don’t care about their signal, but just want to visit the better restaurant, and an information cascade is likely to occur. Something similar happens with book purchases: if a book appears on a best-seller list, then that signals to everyone that it’s good, but it may only be there because people bought it based on that signal. There are documented cases of clever publishers buying copies of their own books to kick-start this effect.
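A sketch of the standard sequential-choice model behind the restaurant example (the decision rule and parameters here are illustrative, not the exact ones from the class):

```python
import random

def run_cascade(n_diners=30, p=0.7, seed=0):
    """Each diner privately learns which restaurant is better with
    probability p, sees only earlier diners' choices, and follows the
    public count unless it is close, in which case they follow their
    own signal. Restaurant "A" is in fact the better one."""
    rng = random.Random(seed)
    choices = []
    for _ in range(n_diners):
        signal = "A" if rng.random() < p else "B"
        lead = choices.count("A") - choices.count("B")
        if lead >= 2:
            choice = "A"      # cascade: the crowd swamps the private signal
        elif lead <= -2:
            choice = "B"      # a cascade toward the worse restaurant is possible
        else:
            choice = signal   # the count is close: the private signal decides
        choices.append(choice)
    return "".join(choices)

print(run_cascade())
```

If the first two diners happen to get wrong signals, everyone afterward piles into the worse restaurant, which is exactly the best-seller-list failure mode.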
Reviewers are likely to have a hard time shooting down the work of anyone they know personally, and in specialized sciences, the probability that a paper was written by a friend or former colleague of the reviewer is high enough to be a problem. On the other hand, credibility does matter; if an unestablished author observes something strange, it’s likely that he’s made a mistake, but an old hand making the same observation should not be ignored. Perhaps instead of a name, reviewers should be given an abstract indication of the author’s credibility, such as the author’s faculty title, or the number of times they’ve published before.
This site is no place for mindless link propagation.
I agree; limit the posting rate. But wait a week or two before you do so. It may just be that a lot of people had ideas they wanted to write about, and took the opening of posting as their cue to do so. If that’s the case, then the post volume should die down on its own. I don’t want good articles to be rejected, but I don’t want posts appearing faster than I can read and digest them, either.
In theory, I should be able to decide what to read by setting a score threshold, and tuning it according to how much time I have to spend. Unfortunately, many sites have tried this and it doesn’t work in practice, because the posts with the most positive votes are older, and replies to older threads are read by fewer people and earn less karma. I’d rather have an editor tell me which threads are worth my time, so I can skip worthless threads and still join discussions on the worthwhile ones while they’re fresh.
That’s a lot of work to respond to an amateur’s argument with. Probably at least an order of magnitude more work than went into the original argument. And the formal argument is likely to end up being very different from the original, informal one; it would be very frustrating to take someone’s informal argument, formalize it, show that the formal version of the argument is incorrect, and then be told that your formalization missed some important insight.
If the arguments are chained together, then this is true, but the original poster was talking about independent lines of reasoning leading to the same conclusion. For arguments which are truly independent, his formulation is correct.
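For truly independent lines of evidence, the likelihood ratios simply multiply in the odds form of Bayes’ theorem (my reconstruction of the kind of formulation at issue):

$$\frac{P(H \mid E_1, E_2)}{P(\lnot H \mid E_1, E_2)} \;=\; \frac{P(H)}{P(\lnot H)} \cdot \frac{P(E_1 \mid H)}{P(E_1 \mid \lnot H)} \cdot \frac{P(E_2 \mid H)}{P(E_2 \mid \lnot H)}$$

Chained arguments don’t factor this way, because each link’s evidential force is conditional on the links before it.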
The reason saying “There is a God and He instilled...” is harder than saying “I believe that there is a God and He instilled...” is that the words “I believe that” are weasel words. The literal meaning of “I believe that” is irrelevant; any other weasel words would have the same effect. Consider the same sentence, but replace “I believe that” with “It is likely that”, or “Evidence indicates that”, or any similar phrase, and it’s just as easy.
Just because people are aware of a concept, and have words which ought to refer to that concept, does not mean that they consistently connect the two. The best example of this comes from the way people refer to things as [good] and [bad]. When people dislike something, but don’t know why, they generate exemplars of the concept “bad”, and call it evil, ugly, or stupid. This same mechanism led to the widespread use of “gay” as a synonym for “bad”, and to racial slurs directed at anonymous online rivals who are probably the wrong race for the slur. I think that confidence markers are subject to the same linguistic phenomenon.
People think with sentences like “That’s a [good] car” or “[Weasel] God exists”. The linguistic parts of their mind expand them to “That’s a sweet car” and “I believe God exists” when speaking, and perform the inverse operation when listening. They don’t think about how the car tastes, and they don’t think about beliefs, even though a literal interpretation of what they say would indicate that they do.
Can someone suggest a concise replacement for “in which direction” that applies here?
Expected future expectation is always the same as the current expectation.
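In standard notation, this is the law of total expectation: if $Y$ is whatever evidence you expect to see,

$$\mathbb{E}\big[\,\mathbb{E}[X \mid Y]\,\big] \;=\; \mathbb{E}[X].$$

If you can predict today that tomorrow’s evidence will push your estimate up, you should have already updated today.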
When advertisements talk about percentage off, they’re providing two prices. The higher price is meant to anchor your judgment of the item’s value and your estimates of what other stores will charge, while the lower price is meant to seem cheap by comparison. However, the higher price is not required to be reasonable, and in fact, it usually isn’t; stores often mark items up to ridiculous prices just so they can bring them back down again with sales.
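A worked example with made-up numbers:

```python
# Hypothetical prices: the advertised "discount" can leave the sale price
# above what the item would have sold for without the markup game.
fair_price   = 55                         # plausible everyday price
anchor_price = 100                        # inflated "regular" price
sale_price   = anchor_price * (1 - 0.40)  # "40% off!" -> 60.0
print(sale_price > fair_price)            # True: still a markup
```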
There are no ghosts, but there are things besides ghosts to be afraid of. Haunted houses are approximately equivalent to abandoned houses, and abandoned houses may contain human criminals, wild animals, infestations, and other unknown dangers. Ghosts are a psychological stand-in for risks that you aren’t individually aware of.
I recall a study (unfortunately I don’t have a citation handy) which looked into the environmental factors which produce unease and make people believe that areas are haunted. The main ones were low light, cold drafts, and unidentifiable sounds—all things which, at least in the ancestral environment, could indicate a place that is dangerous.
The Mistake Script
Edited to add “If you aren’t sure what the conclusion is or aren’t sure you agree with it, continue.” The case where you aren’t sure whether you agree was meant to be excluded by “If you are sure you do”, but wasn’t very clear. The case where you aren’t sure what the conclusion is wasn’t mentioned at all, and it’s an important one, since many good articles take a while to get to the point, or cover a broad range of points, and shouldn’t be aborted early.
Asking “Is it rational to X?” is a way of saying “I value rationality. Should I X?” The part of the title which is a question is equivalent, but the mention of rationality provides extra information about the author’s values. This would be clearer if the value statement and the question were split into separate sentences, but his meaning was clear enough.
I tried formalizing everything, ended up with a grotesque and incomplete flowchart, and decided to make the formalized procedure less precise by hiding all that complexity behind the word “decide” in the last step. I believe the actual procedure which implements that process is hard-wired, and is something like this (see the code sketch after the list):
Generate reasons for and against an action, and a weight for each.
Compute the total weights of the reasons for and against.
Compare the difference between the weights to a threshold. Compare the ratio between the weights to a different threshold. If both thresholds are met, decide in favor. If neither threshold is met, decide against. Otherwise go back to generating reasons.
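A sketch of that loop in code (the thresholds and the reason generator are stand-ins of my own; if the hypothesis is right, the real ones are hard-wired):

```python
import random

def decide(generate_reason, diff_threshold=2.0, ratio_threshold=1.5,
           max_rounds=100):
    """Toy model of the hypothesized decision loop. `generate_reason`
    returns (weight, in_favor); both thresholds met -> yes, neither -> no,
    exactly one -> go back and generate more reasons."""
    pro = con = 0.0
    for _ in range(max_rounds):
        weight, in_favor = generate_reason()
        if in_favor:
            pro += weight
        else:
            con += weight
        diff_met = (pro - con) >= diff_threshold
        ratio_met = (con == 0 and pro > 0) or (con > 0 and pro / con >= ratio_threshold)
        if diff_met and ratio_met:
            return True
        if not diff_met and not ratio_met:
            return False
    return False  # deliberation timed out without a verdict

# Example: reasons with random weights, slightly biased in favor.
print(decide(lambda: (random.random(), random.random() < 0.6)))
```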
The first step (generating reasons) is sort of like exemplar selection and sort of like memory lookup, and is therefore greatly influenced by priming certain concepts beforehand.
People don’t apply near thinking to fiction, especially to technical issues presented in fiction, because most fiction is full of fake detail: words that sound like descriptions if you skim over them, but are actually complete gibberish. This is especially true of science fiction, where many authors insert “technobabble”, which is created by taking words at random from outside the reader’s expected vocabulary.