Honestly, I feel like if Eliezer had left out any mention of the math of Bayes’ Theorem from the sequences, I would be no worse off. The seven statements you wrote seem fairly self-evident by themselves. I don’t feel like I need to read that P(A|B) > P(A) or whatever to internalize them. (But perhaps certain people are highly mathematical thinkers for whom the formal epistemology really helps?)
Lately I kind of feel like rationality essentially comes down to two things:
Recognizing that as a rule you are better off believing the truth, i.e. abiding by the Litany of Tarski.
Having probabilistic beliefs, i.e. abiding by the Bayesian epistemology rather than the Aristotelian or the Anton-Wilsonian, as Yvain defined them in his reaction to Chapman, or having a many-color view as opposed to a two-color view or a one-color view, as Eliezer described in The Fallacy of Gray.
Once you’ve internalized these two things, you’ve learned this particular Secret of the Universe. I’ve noticed that people seem to have their minds blown by the sequences, not really learn all that much more by spending a few years in the rationality scene, and then go back to read the sequences and wonder how they could have ever found them anything but obvious. (Although apparently CFAR workshops are really helpful, so if that’s true that’s evidence against this model.)
Honestly, I feel like if Eliezer had left out any mention of the math of Bayes’ Theorem from the sequences, I would be no worse off. The seven statements you wrote seem fairly self-evident by themselves.
It’s a bit like learning thermodynamics. It may seem self-evident that things have temperatures, that you can’t get energy from nowhere, and that the more you put things together, the more they fall apart, but the science of thermodynamics puts these intuitively plausible things on a solid foundation (being respectively the zeroth, first, and second laws of thermodynamics). That foundation is itself built on lower-level physics. If you do not know why perpetual motion machines are ruled out, but just have an unexplained intuition that they can’t work, you will not have a solid ground for judging someone’s claim to have invented one.
The Bayesian process of updating beliefs from evidence by Bayes' theorem is the foundation that underlies all of these "obvious" statements, and it enables one to see why they are true.
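As a toy illustration of that updating process, Bayes' theorem can be written out directly. All the probabilities below are invented for the example, and `bayes_update` is just a name chosen here for the sketch:

```python
# Toy Bayes update. All probabilities below are invented for illustration.

def bayes_update(prior_h, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) via Bayes' theorem:
    P(H|E) = P(E|H) * P(H) / P(E),
    where P(E) is expanded over H and not-H."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

# Evidence twice as likely under H as under not-H
# shifts a 50% prior up to about 67%:
posterior = bayes_update(0.5, 0.8, 0.4)
print(posterior)  # ≈ 0.667, so P(H|E) > P(H)
```

Seen this way, the "obvious" statements are just arithmetic consequences: whenever the evidence is more likely under H than under not-H, the posterior must exceed the prior.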
Yes, who knows how many other ‘obvious’ statements you might believe otherwise, such as “Falsification is a different type of process from confirmation.”
Falsifying X is obviously the same as confirming not-X… but confirming that the culprit was Mortimer Q. Snodgrass is quantitatively very different from confirming that the culprit was not Mortimer Q. Snodgrass, and, as someone once said, a qualitative difference is just a quantitative difference that is large enough.
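The size of that quantitative difference can be made concrete in information-theoretic terms. A sketch, with invented numbers (100 equally likely suspects is an assumption for the example, not anything from the original post):

```python
import math

# Toy setup: 100 equally likely suspects. Priors invented for illustration.
n_suspects = 100
p_x = 1 / n_suspects   # prior that Snodgrass did it
p_not_x = 1 - p_x      # prior that he didn't

# The evidence (in bits) needed to raise a hypothesis from its prior
# to certainty is -log2 of that prior:
bits_to_confirm_x = -math.log2(p_x)          # ~6.64 bits
bits_to_confirm_not_x = -math.log2(p_not_x)  # ~0.0145 bits

print(bits_to_confirm_x, bits_to_confirm_not_x)
```

Ruling one suspect out ("falsifying X") costs almost nothing, while pinning the crime on a particular suspect takes hundreds of times as much evidence; the two are the same kind of operation, differing only in magnitude.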
I’ve noticed that people seem to have their minds blown by the sequences, not really learn all that much more by spending a few years in the rationality scene, and then go back to read the sequences and wonder how they could have ever found them anything but obvious.
I saw Yvain describe this experience. My experience was actually kind of the opposite. When I read the sequences, they seemed extremely well written, but obvious. I thought that my enjoyment of them was the enjoyment of reading what I already knew, but expressed better than I could express it, plus the cool results from the heuristics-and-biases research program. It was only in retrospect that I noticed how much they had clarified my thinking about basic epistemology.
That’s very interesting that your experience was the opposite.
And yeah, I saw where Yvain wrote that he and a friend shared that experience, and I noticed that I shared it exactly as well. It also seems to match attitudes I had seen around, so I feel like it could be fairly general.
Honestly, I feel like if Eliezer had left out any mention of the math of Bayes’ Theorem from the sequences, I would be no worse off. The seven statements you wrote seem fairly self-evident by themselves. I don’t feel like I need to read that P(A|B) > P(A) or whatever to internalize them.
For me, reading the first chapter of Probability Theory by Jaynes showed me that what had thus far been only a vague intuition of mine (that neither what Yvain calls Aristotelianism nor what he calls Anton-Wilsonism is the full story) actually has a rigorous quantitative form, derivable mathematically from a few entirely reasonable desiderata. That did put it on much more solid ground in my mind.
The seven statements you wrote seem fairly self-evident by themselves. I don’t feel like I need to read that P(A|B) > P(A) or whatever to internalize them.
I’ve noticed that people seem to have their minds blown by the sequences, not really learn all that much more by spending a few years in the rationality scene, and then go back to read the sequences and wonder how they could have ever found them anything but obvious.
The math definitely very much helped me understand the concepts. I’ve found myself sitting down and explicitly working out the probability calculations when reading some posts in the Sequences (and other posts on LW). (I guess I count as a “highly mathematical thinker”?)
The seven statements you wrote seem fairly self-evident by themselves.
Really? Even the fifth one ;) ?
I’ve noticed that people seem to have their minds blown by the sequences, not really learn all that much more by spending a few years in the rationality scene, and then go back to read the sequences and wonder how they could have ever found them anything but obvious.

What happens when they reach this post?