Rationality-related things I don’t know as of 2023

One of the blog posts I’m most fond of is Things I Don’t Know as of 2018. It’s by Dan Abramov, one of the more prominent people in the world of front-end web development. He goes through a bunch of relatively basic programming-related things that he doesn’t understand, like Unix commands and low-level languages.

I’d like to do something similar, but for rationality-related things. Why?

  • For fun.

  • To normalize the idea that no one’s perfect.

  • It’ll make it easier to address these knowledge gaps. Or maybe just more likely that I actually do so.

Here’s the list:[1]

  • Simulacra. I spent some time going through the posts, and it’s one of those things that just never manages to click with me.

  • Blockchain. I guess the thing that I don’t understand here is the hype. I get that it’s basically a database that can’t be edited, and I’ve read through articles talking about the use cases, but it’s been around for a while now and doesn’t seem to have been that game-changing. Yet there are smart people who are super excited about it, and I suspect that there are things I am failing to appreciate, regardless of whether their excitement is justified.

  • Morality. To me it seems like rationality can tell you how to achieve your goals but not what (terminal) goals to pick. Arguments that try to tell you what terminal goals to pick have just never made sense to me. Maybe there’s something I’m missing though.

  • Quantum physics. I skipped/lightly skimmed the sequence posts on this. They seemed high-effort and not particularly important. Well, it is cool to understand how reality works at the most fundamental level. Hm. I would be interested in going through some sort of lower-effort, bigger-picture material on quantum physics. I spent some time messing around with that sort of stuff like 13 years ago, but all that stuck is some vague notion that reality is (fundamentally?) probabilistic and weird.

  • Evolution. I get that at a micro level, if something makes an organism more likely to reproduce, it will in fact, err, spread the genes. And then that happens again and again and again. And since mutations are a thing, organisms basically get to try new stuff out, and the stuff that works sticks. I guess that’s probably the big idea, but I don’t know much beyond it, and I remember being confused when I initially skimmed through the Simple Math of Evolution sequence.

  • Evolutionary psychology. I hear people make arguments like “X was important to our hunter-gatherer ancestors and so we still find ourselves motivated by it/​to do it today because evolution is slow”. X might be consuming calories when available, for example. There’s gotta be more to evolutionary psychology than that sort of reasoning, but I don’t know what the “more” is.

  • Bayes math. I actually think I have a pretty good understanding of the big-picture ideas. I wouldn’t be able to crunch numbers or do the things they teach you in a stats course, though.[2] Nor do I understand the stuff about log odds and bits of evidence. I’d have to really sit down, think hard about it, and spend some time practicing using it.

  • Solomonoff induction. I never took the time to understand it or related ideas.

  • Occam’s razor. Is it saying anything other than P(A) >= P(A & B)?[3]

  • Moloch. I enjoyed Meditations on Moloch and found it thought-provoking. I’m not sure that I really understand what Moloch actually is/represents, though. I struggle a little with the abstractness of it.

  • Double crux. This is another one of those “maybe I actually understand it but it feels like there’s something I’m missing” things. I get that a crux is something that would change your mind. And yeah, if you’re arguing with someone and you each identify a shared crux (something that, if either of you changed your mind about it, would change your mind about the overall disagreement), that’s useful. Then you can focus the discussion on that crux. Is that it though? Isn’t that common sense? Why is this presented as something that CFAR discovered? Maybe there’s more to it than I’m describing?

  • Turing machines. Off the top of my head I don’t really know what they are. Something about a roll of tape with numbers, and skipping from one place to the next, and how that is somehow at the core of all computing? I wish I understood this. After all, I am a programmer. I spent a few weeks skimming through a Udacity course on the theory of computation a while ago, but none of it really stuck.
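To make the evolution bullet above a bit more concrete, here’s a toy selection model. The numbers are made up and this is just my own sketch, not anything from the sequence: a gene variant with a small reproductive advantage compounds generation after generation until it takes over the population.

```python
# Toy model: an allele with a 5% reproductive advantage spreading
# through a population under selection alone (no drift, no mutation).
# Frequencies follow the standard replicator update.

def next_frequency(p, fitness_advantage=0.05):
    """One generation of selection: carriers reproduce slightly more."""
    w_carrier = 1.0 + fitness_advantage
    w_other = 1.0
    mean_fitness = p * w_carrier + (1 - p) * w_other
    return p * w_carrier / mean_fitness

p = 0.01  # the allele starts rare: 1% of the population
generations = 0
while p < 0.99:
    p = next_frequency(p)
    generations += 1

print(generations)  # roughly 190 generations with these numbers
```

The surprising-to-me part is how fast “a tiny edge, applied again and again and again” goes from 1% to 99%.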
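On the log odds and bits of evidence part of the Bayes bullet: as far as I can tell, the idea is that a Bayesian update multiplies your odds by a likelihood ratio, which in log space becomes adding “bits”. A minimal sketch with made-up numbers:

```python
import math

def update_log_odds(log_odds_bits, likelihood_ratio):
    """A Bayesian update in log-odds form: adding bits of evidence."""
    return log_odds_bits + math.log2(likelihood_ratio)

# Prior: 1:3 odds that the hypothesis is true (about -1.58 bits).
log_odds = math.log2(1 / 3)

# Observe evidence that is 4x more likely if the hypothesis is true:
# that's log2(4) = 2 bits of evidence.
log_odds = update_log_odds(log_odds, 4)

# Convert back to a probability: 1:3 odds times a 4x ratio is 4:3 odds.
odds = 2 ** log_odds
probability = odds / (1 + odds)
print(probability)  # 4/7, about 0.571
```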
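Whatever else Occam’s razor is saying, the inequality in that bullet can at least be checked mechanically on a toy sample space: a conjunction is never more probable than one of its parts, because the outcomes satisfying A-and-B are a subset of those satisfying A.

```python
from itertools import product

# Enumerate all 8 equally likely outcomes of three coin flips.
outcomes = list(product("HT", repeat=3))

def prob(event):
    """Probability of an event under the uniform distribution."""
    return sum(1 for o in outcomes if event(o)) / len(outcomes)

a = lambda o: o[0] == "H"                         # A: first flip is heads
a_and_b = lambda o: o[0] == "H" and o[1] == "H"   # A and B

print(prob(a), prob(a_and_b))  # 0.5 0.25
```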
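For the Turing machines bullet: the “roll of tape” picture can apparently be turned into a couple dozen lines of code. Here’s my own minimal sketch of a simulator, running a machine that increments a binary number by scanning right and then handling the carry:

```python
def run(tape, rules, state="right", blank="_"):
    """Run a Turing machine until it reaches the 'halt' state.

    rules maps (state, symbol) -> (new_symbol, move, new_state),
    where move is -1 (left) or +1 (right).
    """
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = tape.get(head, blank)
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += move
    cells = [tape.get(i, blank) for i in range(min(tape), max(tape) + 1)]
    return "".join(cells).strip(blank)

# Increment a binary number: scan to the right end, then flip trailing
# 1s to 0s and the first 0 (or blank) to 1.
increment = {
    ("right", "0"): ("0", +1, "right"),
    ("right", "1"): ("1", +1, "right"),
    ("right", "_"): ("_", -1, "carry"),
    ("carry", "1"): ("0", -1, "carry"),
    ("carry", "0"): ("1", -1, "halt"),
    ("carry", "_"): ("1", -1, "halt"),
}

print(run("1011", increment))  # 1011 + 1 = 1100
```

The “at the core of all computing” claim, as I understand it, is that a table of rules like this, plus an unbounded tape, is enough to express any computation.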

If anyone wants to play the role of teacher in the comments I’d love to play the role of student.


  1. ↩︎

    To construct it, I skimmed through the table of contents for Rationality: From AI to Zombies, the top posts of all time, and the tags page, and also included some other stuff that came to mind.

  2. ↩︎

    But I would like to. I tried skimming through a couple of textbooks (Doing Bayesian Data Analysis by Kruschke and Bayesian Data Analysis by Gelman) and found them to be horribly written. If anyone has any recommendations, let me know.

  3. ↩︎

    Well, > instead of >= since 0 and 1 are not probabilities but maybe in some contexts it makes sense to treat things as having a probability of 0 or 1.