Yes. The primary historical change we see here is that “rational knowledge” used to mean certain, “geometrical” knowledge that leaves no room for probabilities: the Pythagorean theorem, the “truths of logic” that hold in every conceivable universe and are seen by the infallible mind, not with the fallible eyes. This is of course totally crazy, but it was a fairly popular position from Plato to Descartes. Probabilistic knowledge went under other names: common sense, empiricism, pragmatism, phronesis, prudentia / prudence.
Much of the history of Western thought is a contest between these two:
Infallibilists: Plato, the Neo-Platonists, Descartes partially and his students (the Lausanne Logicians) much more so, positivism in part, and various political-ideological recipes, from the left wing to Mises.org
Probabilists: Aristotle, the Scholastics/Thomists, Vico (against Descartes), Pascal, Edmund Burke, Oakeshott, Peirce, and generally moderate to moderate-conservative political positions, roughly saying “the world is messy and chaotic; we need knowledge inferred from experience, not lofty principles”
The way this was inverted here is that a group of economists, mathematicians, and statisticians defined working out the most accurate probabilities as rational decision-making, and Eliezer imported that terminology into the mainstream and into philosophy, so to speak. If you talk to a philosopher, you had better call LW-Rationalism Pragmatism, as it is mainly about being a Peirce who can do (Bayesian) math.
Again, this is not necessarily a problem. I think Eliezer wanted to direct attention to the mathematical-statistical aspect, which is why he used that kind of terminology instead of the philosophical one. Since Eliezer was interested in AI even before he started the Sequences, it makes sense that if the goal is to build a Friendly Bayesian AI, it will run on math more than on philosophy. So from that angle the change is fine; just expect the terminology to break down when compared to mainstream philosophy.
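To make the “rational = most accurate probabilities” framing concrete: the mathematical core being pointed at is just Bayes’ rule, updating a degree of belief in a hypothesis when evidence comes in. A minimal sketch (all the numbers below are illustrative assumptions, not anything from this thread):

```python
def bayes_update(prior: float, likelihood: float, evidence_prob: float) -> float:
    """Posterior via Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence_prob

# Hypothetical numbers: prior belief P(H) = 0.3, P(E|H) = 0.8,
# and the overall probability of seeing the evidence P(E) = 0.5.
posterior = bayes_update(prior=0.3, likelihood=0.8, evidence_prob=0.5)
print(round(posterior, 2))  # 0.48
```

The point of the comparison above is that this kind of arithmetic, not any foundational philosophical argument, is what the decision-theory tradition labels “rational.”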
I suspect it goes like this:
there is a word, which means something useful;
some group starts using it as their applause light;
they are doing it wrong, but they don’t notice, because even their wrong version is still an applause light, so it must be good;
other people will also start using the word to mean “what this group is doing”, that is, the wrong version.
Apparently there’s a reasonable case to be made for Smith’s awareness of interval probabilities, or so Michael Emmet Brady tells me.