I consider that a man’s brain originally is like a little empty attic, and you stock it with such furniture as you choose. A fool takes in all the lumber of every sort that he comes across, so that the knowledge which might be useful to him gets crowded out, or at best is jumbled up with a lot of other things, so that he has a difficulty in laying his hands upon it. Now the skilful workman is very careful indeed as to what he takes into his brain-attic. He will have nothing but the tools which may help him in doing his work, but of these he has a large assortment, and all in the most perfect order. It is a mistake to think that that little room has elastic walls and can distend to any extent. Depend upon it there comes a time when for every addition of knowledge you forget something that you knew before. It is of the highest importance, therefore, not to have useless facts elbowing out the useful ones.
-Sherlock Holmes, A Study in Scarlet
Is there some research corroborating this quote? I have a lot of useless knowledge but it doesn’t seem to stop me from accumulating useful knowledge. It does make sense to avoid spending time and energy on acquiring useless knowledge, though.
If this is a question about causality, I would assume not. Sherlock Holmes was eccentric to the point of insanity and made up all sorts of funny wrong things.
In reality, it seems that exercising the brain generally improves its function along several dimensions. Also relevant: a silly article about brain memory capacity
It’s less about making things up and more about then-current ideas that are now outdated.
There are more of them in Holmes stories, like the idea that you can tell a man’s intelligence from his skull shape/size (phrenology).
As I understand it (not that I can quote any research), knowledge helps gain more knowledge due to how memory works; it’s easier to remember something if you have previous ideas to which to “link” or associate the new ones (and those links don’t have to be within the same domain of knowledge). Also, wouldn’t it be true that the more things you understand, the more likely you are to have a shorter inferential distance to whatever new ideas you come across?
I had a different interpretation. To me, this sounded more like a warning against bad personal epistemic hygiene and about the tradeoff between epistemic and instrumental rationality, not what happens when you reach the upper bound of your memory capacity. Now that I think about it, your interpretation is probably closer to what Doyle had in mind (what with his 19th century pop-psychology and all).
In the book this quote is in, Holmes uses it to justify refusing to remember that the Earth goes around the Sun.
However, he does demonstrate this knowledge later in the series, and in fact turns out to be a well of useless facts later on, though I don’t have the source for the inconsistency handy at the moment.
I read this as concerning organization instead of capacity.
relevant: Your inner Google
Reminds me of some Warhammer 40,000 quotes:
Always liked that last one. There are memes out there I’d rather not get infected with.
Though don’t listen to me; I find it impossible not to like anything said by Isador Akios.
Really? The last quote seems expressly anti-rationality. Especially considering the source.
Some of us enjoy the challenge of finding rationalist ideas in unlikely places—or fitting ideas from non-rational sources into a rationalist framework. In this case, it seems fairly easy to do so. As Markus already points out, it is important to keep your mind from becoming infected with bad stuff.
Indeed it is. But the way you fight “memetic infection” in the real world is to take a look at the bad stuff and see where it goes wrong, not to isolate yourself from harmful ideas.
Yes. In this metaphor, the guard at the gates takes a look at the bad stuff and decides against letting it into the fortress.
One could make an argument that, in the world of Warhammer 40K, keeping your mind barred and guarded is actually the most rational thing to do. Because if you do not, then instead of saying things like “only in death does duty end”, you’ll find yourself saying things like, “maim kill burn MAIM KILL BURN” and “Arrghbllgghhayargh NURGLE”. Only it wouldn’t be you saying those things, precisely, but a daemon that slipped into your unguarded mind and took up residence in your body.
It may be that xenophobia is a local optimum for humanity in 40K. But technology is explicitly mystical in that universe. Imagine how many fewer problems they would have with their enemies if their stuff all worked, and they had more of it.
It’s like bringing a 1000 pt army to a 500 pt skirmish. Every time.
IIRC that actually did happen a couple of times in that universe. The answers were usually “A Machine God eats the factory planet” and “Necrons”. So, the outcome was… not good.
On the other hand, the T’au have a pretty good handle on their tech, and they’re improving it all the time, so maybe the humans could take some lessons from them. On the third hand (*), the T’au as a whole seem to be immune to Chaos corruption, which is a luxury that the humans do not enjoy.
(*) Or tail or tentacle or what have you.
Mechadendrite, thank you very much.
I’ll bite: how am I supposed to judge (or predict) the usefulness of facts when I first see them, in time to avoid storing the useless ones?
I think the closest we get to this is that every time we remember something, we also edit that memory, thus (if we are rational enough) tossing out the useless or unreliable parts or at least flagging them as such. If this faculty worked better I might find it a convincing argument for “intelligent design,” but the real thing, like so much else in human beings, is so haphazard that it reinforces my lack of belief in that idea.
I don’t think one necessarily edits the memory. Memories intrinsically decay over time; each recall is associated with a greater chance of being able to recall it in the future (memorization), with bonuses to spaced out recollections (spaced repetition) and optional userland hinting to the OS (going to sleep while expecting to be tested on something leads to greater retention for the same number of reviews).
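The decay-plus-review dynamic described here can be sketched with a toy forgetting-curve model. The function names and the exponential-decay form are illustrative assumptions, not a claim about the underlying research:

```python
import math

def retention(t, strength):
    """Ebbinghaus-style exponential forgetting: probability of recall
    after t days, for a memory of a given 'strength' (larger = slower decay)."""
    return math.exp(-t / strength)

def review_schedule(intervals, boost=2.0):
    """Each successful, spaced review multiplies the memory's strength,
    so equally long gaps later in the schedule cost less retention."""
    strength = 1.0
    history = []
    for gap in intervals:
        history.append(retention(gap, strength))
        strength *= boost  # each recall makes future recall more likely
    return history

# Spaced reviews with doubling gaps: recall probability at each review
# stays constant, because strength grows as fast as the intervals do.
print(review_schedule([1, 2, 4, 8]))
```

Under these made-up numbers, doubling both the review intervals and the memory strength keeps the recall probability flat, which is roughly the intuition behind spaced-repetition scheduling.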
In other words, the brain is a cache that implements Least Recently Used eviction.
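The analogy can be made concrete with a minimal LRU cache sketch in Python (the “facts” stored here are, of course, made up):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: accessing a key refreshes it; when over
    capacity, the least recently used key is evicted first."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # a recall refreshes the memory
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("earth-orbits-sun", True)
cache.put("tobacco-ash-types", 140)
cache.get("tobacco-ash-types")        # recently recalled, so retained
cache.put("bicycle-tyre-treads", 42)  # over capacity: the fact that was
print(list(cache.data))               # never recalled gets evicted
```

Here the never-recalled `earth-orbits-sun` entry is the one that falls out of the attic, which is at least consistent with Holmes’s choice of what to forget.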
Why would you expect intelligent design to explain that very much better than evolution?
I think the reasoning is more along the lines that intelligent design is worse at explaining haphazard mush than it is at explaining well-ordered things. As such, an observation of well-ordered things will result in a higher weighting for intelligent design than an observation of haphazard mush in the same place, simply because it must be discounted far less in the former case.
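This weighting argument is just Bayes’ rule in odds form; a toy calculation with entirely made-up likelihoods shows the direction of the update:

```python
def posterior_odds(prior_odds, likelihood_ratio):
    """Odds-form Bayes: posterior odds = prior odds * P(E|H1) / P(E|H2)."""
    return prior_odds * likelihood_ratio

# Illustrative likelihoods only: suppose a designer produces well-ordered
# machinery with probability 0.9 but haphazard mush with probability 0.1,
# while evolution produces either with probability 0.5.
prior = 1.0                                   # even prior odds, design : evolution
tidy_odds = posterior_odds(prior, 0.9 / 0.5)  # observing order raises the odds
mush_odds = posterior_odds(prior, 0.1 / 0.5)  # observing mush lowers them
print(tidy_odds, mush_odds)
```

With these numbers, observing order multiplies the odds for design by 1.8, while observing mush multiplies them by 0.2: both hypotheses can “explain” the data, but the one that explains it worse gets discounted more.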
Right, but that’s only half the story… I wouldn’t say it’s zero evidence, but “convincing argument” seems far-fetched when there’s plenty of reason for evolution to select for better use of our brain meats.
Anyway, W40k aside: isn’t this actually pretty bad advice, based on outdated ideas about how the brain works?
I don’t think this actually happens.