I’ve been thinking a bunch about practical epistemology lately. Not “how would ideal reasoners with infinite compute update their beliefs?” but “what actually works for humans trying to figure out true things about the world?”
This led me to create an explicit tier list ranking different ways of knowing, from S+ tier (literacy) down to F-tier (arguing on Twitter). The core argument is that we already implicitly rank epistemic methods in our daily lives; we just pretend we don’t when we start theorizing about knowledge.
Some potentially controversial claims I make:
Bayesianism and other frameworks are C-tier (useful but not primary)
Thought experiments are D-tier (vastly overrated by philosophers and our local community)
Cultural evolution is F-tier (Henrich is interesting but compare manioc processing to penicillin)
Most actual knowledge acquisition happens through S-tier methods (literacy/math) and B-tier methods (empiricism/engineering), not through explicit reasoning frameworks
The specific tiers matter less than the principles behind trying to create the tier list in the first place. My hope is that by making explicit the implicit hierarchies people already hold, we can get past the tired argument between pluralism (epistemic democracy) and monism (one-size-fits-all One True Frameworks like Popperianism or Bayesianism), and make a smoother transition between practical ways of knowing and formal epistemology.
I reposted this here because the LessWrong community has influenced my explicit thoughts on epistemology more than probably any other community. Much of what I’ve read here has been enlightening, and has helped me think more clearly about my own views and my place in the community.
I’ve always thought that the explicit frameworks for epistemology used in this community have been great but subtly off in some key ways. In one sense, I hope this article can serve as a rejoinder/footnote/addendum to Yudkowsky’s Sequences and Ozy’s recent article on Rationalist Epistemics.
In another sense, it’s a love letter to the community that has given so much to me.
I’d be keen to see some feedback! Particularly interested in:
Which methods you think I’m overrating/underrating
Important methods (whether because they’re good, or because they’re popular) I’ve left off the tier list, and where you’d place them
What you think of the different principles for building a mental toolkit
How you’d adapt this for specific domains like AI safety research
E.g., in what ways is practical epistemology for AIs importantly different from that for humans?
Ways in which you think this overall framework is importantly or subtly off
Future elaborations/extensions you’d like to see in this style
Link here: https://linch.substack.com/p/which-ways-of-knowing-actually-work
Mimicry is at least A-tier. Every person relies purely on mimicry for the first years of their life to learn the behaviors that matter most to them in the world. For most skills, at most times, an hour spent trying to mimic an expert is going to pay off more than an hour spent reading or reasoning.
Yeah, that’s a reasonable perspective. I think my issue is just that many/enough people mimic the wrong things/people, and there isn’t enough self-correction, so it’s hard for mimicry to rank higher as a result.
You could say the same for reading, for sure. I think mimicry is more reliable; actions speak louder than words. The only issue is that you often don’t have good access to someone successful to mimic for a complex behavior. But if you do have access, then you should mimic them more than you pay attention to what they say or write.