Against Epistemic Democracy: An Epistemic Tier List of What Actually Works

Link post

I’ve been thinking a bunch about practical epistemology lately. Not “how would ideal reasoners with infinite compute update their beliefs?” but “what actually works for humans trying to figure out true things about the world?”

This led me to create an explicit tier list ranking different ways of knowing, from S+ tier (literacy) down to F-tier (arguing on Twitter). The core argument is that we already implicitly rank epistemic methods in our daily lives; we just pretend we don’t when we start theorizing about knowledge.

Some potentially controversial claims I make:

  • Bayesianism and other frameworks are C-tier (useful but not primary)

  • Thought experiments are D-tier (vastly overrated by philosophers and our local community)

  • Cultural evolution is F-tier (Henrich is interesting but compare manioc processing to penicillin)

  • Most actual knowledge acquisition happens through S-tier methods (literacy/math) and B-tier methods (empiricism/engineering), not through explicit reasoning frameworks

The specific tiers matter less than the principles behind trying to create the tier list in the first place. My hope is that by making explicit the implicit hierarchies people already have, we can get past the tired argument between pluralism (epistemic democracy) and monism (one-size-fits-all One True Frameworks like Popperianism or Bayesianism), and move more smoothly between questions of practical ways of knowing and formal epistemology.

I reposted this here because the LessWrong community has influenced my explicit thoughts on epistemology more than probably any other community. Many things here have been enlightening to read, and have helped me think more clearly about my own thoughts and my place in the community.

I’ve always thought that the explicit frameworks for epistemology used in this community have been great but subtly off in some key ways. In one sense, I hope this article can serve as a rejoinder/footnote/addendum to Yudkowsky’s Sequences and Ozy’s recent article on Rationalist Epistemics.

In another sense, it’s a love letter to the community that has given so much to me.

I’d be keen to see some feedback! Particularly interested in:

  • Which methods you think I’m overrating/underrating

  • Important methods (whether because they’re good or because they’re popular) that I’ve left off the tier list, and where you’d place them

  • What you think of the different principles for building a mental toolkit

  • How you’d adapt this for specific domains like AI safety research

    • E.g., in what ways is practical epistemology for AIs importantly different from that for humans?

  • Ways in which you think this overall framework is importantly or subtly off

  • Future elaborations/extensions you’d like to see in this style

Link here: https://linch.substack.com/p/which-ways-of-knowing-actually-work