(This comment is specifically re: Bayes being core to Yudkowskian rationality.)
From the OP:
Bayesian methods are a core feature of Eliezer Yudkowsky’s version of rationality. You might even say that Eliezer’s variant could be called “Bayesian Rationality”. It’s not a ‘technique’ or a ‘tool’, to Eliezer Bayes is the law, the irrefutable standard that provides a precise unchanging figure for exactly how much you should update in response to a new piece of evidence. Bayes shows you that there is in fact a right answer to this question, and you’re almost certainly getting it wrong.
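For readers who want the arithmetic behind "exactly how much you should update": here is a minimal sketch of Bayes' rule, the formula the OP is referring to. The function name and the numbers are illustrative, not taken from the original discussion.

```python
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Return the posterior P(H | E) via Bayes' rule.

    prior           -- P(H), credence in the hypothesis before seeing E
    p_e_given_h     -- P(E | H), likelihood of the evidence if H is true
    p_e_given_not_h -- P(E | ~H), likelihood of the evidence if H is false
    """
    # Total probability of the evidence under both hypotheses.
    evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    # Bayes' rule: posterior = likelihood * prior / evidence.
    return p_e_given_h * prior / evidence

# Illustrative: a 10% prior, with evidence 4x likelier under H than under ~H.
posterior = bayes_update(prior=0.1, p_e_given_h=0.8, p_e_given_not_h=0.2)
print(round(posterior, 3))  # → 0.308
```

The point the OP attributes to Eliezer is that, given the prior and the likelihoods, this output is not a matter of taste: the theorem fixes the one correct amount by which to update.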
And from your comment:
I think section is basically wrong … On Eliezer’s philosophy, take this bit from Twelve Virtues of Rationality:
You may try to name the highest principle with names such as “the map that reflects the territory” or “experience of success and failure” or “Bayesian decision theory”. But perhaps you describe incorrectly the nameless virtue. How will you discover your mistake? Not by comparing your description to itself, but by comparing it to that which you did not name.
But Eliezer also said this:
Now one can’t simultaneously define “rationality” as the winning Way, and define “rationality” as Bayesian probability theory and decision theory. But it is the argument that I am putting forth, and the moral of my advice to trust in Bayes, that the laws governing winning have indeed proven to be math. If it ever turns out that Bayes fails—receives systematically lower rewards on some problem, relative to a superior alternative, in virtue of its mere decisions—then Bayes has to go out the window. “Rationality” is just the label I use for my beliefs about the winning Way—the Way of the agent smiling from on top of the giant heap of utility. Currently, that label refers to Bayescraft.
Well, and did we ever “discover [our] mistake”? Is that a thing that happened? Did Bayes fail, and consequently “go out the window”? Did the label of “rationality” ever get reassigned, to something other than “Bayescraft”? When did any of this take place?
As far as I can tell, nothing like that ever happened. So:
Clearly, Bayes’s theorem, and Bayesian reasoning, were absolutely as central to Eliezer’s account of rationality as The Last Rationalist [henceforth, “TLR”] claims. Eliezer calls it “the laws governing winning”. He says that “rationality” is a label that “refers to Bayescraft”. (Need I dig up more quotes about how Bayes is the law and any deviation from it is guaranteed-incorrect? I can, easily; just give the word.) Saying that the quoted bit of the OP is “basically wrong” seems totally unjustifiable to me.
There has never been any retraction, crisis of faith, renunciation, “halt and catch fire”, or any other reversal like that. Or, if there has been, it’s not widely known. (Of course, who knows what Eliezer may have said in some lost, un-google-able Facebook post? Maybe it’s in there somewhere…)
TLR overreaches when he calls Bayesian reasoning (according to Eliezer) “irrefutable”. You can excuse that as a figure of speech, or you can mark the OP down for an unsupportable claim; but either way, other than this one word, TLR’s description of Bayesian reasoning’s place in Eliezer’s rationality seems to be spot-on.
Well, and did we ever “discover [our] mistake”? Is that a thing that happened? Did Bayes fail, and consequently “go out the window”? Did the label of “rationality” ever get reassigned, to something other than “Bayescraft”? When did any of this take place?
This is hard to answer, because I want to be clear that I don’t think Bayes-as-fact or Bayes-as-lens failed; what I think changed is that Bayes-as-growth-edge went from likely to unlikely. This is what you would expect if Bayes is less rich and complicated than ‘the whole universe’: eventually you grok it, and your ‘growth edge’ of mistakes to correct moves somewhere else, with the lens of Bayes following you there.
I also think it’s important that Eliezer mostly doesn’t work on rationality anymore, that most of the technically minded rationalists I know have their noses to the grindstone (in one sense or another), and that the continued development of rationality is mostly done by orgs like CFAR and individuals like Scott Alexander. I don’t think they’ve found direct reference to Bayes particularly useful for any of their goals, though I do think they’ve found habits of mind inspired by Bayes to be useful, as discussed in the parallel branch.
Edit: I do think it’s somewhat surprising that the best pedagogical path CFAR has found doesn’t route through Bayes, but the right thing to do here seems to be to update on evidence. ;)