Terminological point: I don’t think you can properly describe your hypothetical rationalist in Stalinist Russia as “paranoid”. His belief that he is surrounded by what amounts to a conspiracy out to subjugate and destroy him is neither fixated nor delusional; it is quite correct, even if many of the conspiracy’s members would choose to defect from it if they believed they could do so without endangering themselves.
I also note that my experience of living in the US since around 2014 has been quite similar in kind, if not yet in degree. I pick out 2014 because of the rage-mobbing of Brendan Eich; that was the point at which “social justice” began presenting to me as an overtly serious threat to free speech. Six years later, political censorship and the threat from cancel culture have escalated to the point where, while we may not yet have achieved Soviet levels of repression, we’re closing in fast on East Germany’s.
Endorsed. A lot of this article is strongly similar to an unfinished draft of mine about how to achieve breakthroughs on unsolved problems.
I’m not ready to publish the entire draft yet, but I will add one effective heuristic. When tackling an unsolved problem, try to model how other people are likely to have attacked it and then avoid those approaches. If they worked, someone else would probably have achieved success with them before you came along.
To be fair, I haven’t followed Less Wrong all that closely over the years. It’s more that I’ve known some of the key people for a while, notably Eliezer himself and Scott Alexander.
It seems to me that you’ve been taking your model of predictivism from people who need to read some Kripke. In Peirce’s predictivism, to assert that a statement is meaningful is precisely to assert that you have a truth condition for it, but that doesn’t mean you necessarily have the capability to test the condition.
Consider Russell’s teapot. “A teapot orbits between Earth and Mars” is a truth claim that must unambiguously have a true or false value. There is a truth condition on it; if you build sufficiently powerful telescopes and perform a whole-sky survey, you will find it. It would be entirely silly to claim that the claim is meaningless because the telescopes don’t exist.
The claim “Galaxies continue to exist when they exit our light-cone” has exactly the same status. The fact that you happen to believe the right sort of telescope not only does not exist but cannot exist is irrelevant—you could after all be mistaken in believing that sort of observation is impossible. I think it is quite likely you are mistaken, as nonlocal realism seems the most likely escape from the bind Bell’s inequalities put us in.
MWI presents a subtler problem, unlike Russell’s Teapot, because we haven’t the faintest idea what observing another quantum world would be like. In the case of the overly-distant galaxies, I can sketch a test condition for the claim that involves taking a superluminal jaunt 13 billion light-years thataway and checking all around me to see if the distribution of galaxies has a huge NOT THERE on the side away from Earth. I think a predictivist would be right to ask that you supply an analogous counterfactual before the claim “other quantum worlds exist” can be said to have a meaning.
Eliezer was more influenced by probability theory, I by analytic philosophy, yes. These variations are to be expected. I’m reading Jaynes now and finding him quite wonderful. I was a mathematician at one time, so that book is almost comfort food for me—part of the fun is running across old friends expressed in his slightly eccentric language.
I already had a pretty firm grasp on Feynman’s “first-principles approach to reasoning” by the time I read his autobiographical stuff. So I enjoyed the books a lot, but more along the lines of “Great physicist and I think alike! Cool!” than being influenced by him. If I’d been able to read them 15 years earlier I probably would have been influenced.
One of the reasons I chose a personal, heavily narratized mode to write the essay in was exactly so I could use that to organize what would otherwise have been a dry and forbidding mass of detail. Glad to know that worked—and, from what you don’t say, that I appear to have avoided the common “it’s all about my feelings” failure mode of such writing.
I have run across Bucky Fuller, of course. Often brilliant, occasionally cranky; geodesic domes turned out to suck because you can’t seal all those joints well enough. We could use more like him.
Great Mambo Chicken and Engines of Creation were in my reference list for a while, until I decided to cull the list for more direct relevance to systems of training for rationality. It was threatening to get unmanageably long otherwise.
I didn’t know there was a biography of Korzybski. Thanks!
“Galaxies continue to exist after the expanding universe carries them over the horizon of observation from us” trivially unpacks to “If we had methods to make observations outside our light cone, we would pick up the signatures of galaxies after the expanding universe has carried them over the horizon of observation defined by c.”
You say “Any meaningful belief has a truth-condition”. This is exactly Peirce’s 1878 insight about the meaning of truth claims, expressed in slightly different language—after all, your “truth-condition” unpacks to a bundle of observables, does it not?
The standard term of art you are missing when you say “verificationist” is “predictivist”.
I can grasp no way in which you are not a predictivist other than terminological quibbles, Eliezer. You can refute me by uttering a claim that you consider meaningful, i.e. one having a “truth-condition”, where the truth condition does not implicitly cash out as hypothetical-future observables—or, in your personal terminology, “anticipated experiences”.
Amusingly, your “anticipated experiences” terminology is actually closer to the language of Peirce 1878 than the way I would normally express it, which is influenced by later philosophers in the predictivist line, notably Reichenbach.
The reference to the Book of the Law was intentional. The reference to chaos magic was not, as that concept had yet to be formulated when I wrote the essay—at least, not out where I could see it.
I myself do not use psychoactives for magical purposes; I’ve never found it necessary and consider them a rather blunt and chancy instrument. I do occasionally take armodafinil for the nootropic effect, but that is very recent and long postdates the essay.
Probably, but there is something else more subtle.
Both the cultures you’re pointing at are, essentially, engines to support achieving right mindset. It’s not quite the same right mindset, but in either case you have to detach from “normal” thinking and its unquestioned assumptions in order to be efficient at the task around which the culture is focused.
Thus, in both cultures there’s a kind of implicit mysticism. If you recoil from that word because you associate it with anti-rationality I can’t really blame you, but I ask you to consider the idea of mysticism as “techniques for consciousness alteration” detached from any particular beliefs about the universe.
This is why both cultures have a use for Zen. It is a very well developed school of mystical technique whose connection to religious belief has become tenuous. You can take the Buddhism out of it and the rest is still coherent and interesting.
Perhaps this implicit mysticism is part of the draw for you. It is for me.
I think a collection of examples and analysis would be a post in itself.
But I can give you one suggestive example from Twelve Virtues itself: “If you speak overmuch of the Way you will not attain it.”
It is a Zen idea that the essence of enlightenment cannot be discovered by talking about enlightenment; rather one must put one’s mind in the state where enlightenment is. Moreover, talk and chatter—even about Zen itself—drives that state away.
Eliezer is trying to say here that the center of rationalist practice is not in what you know about rationality or how much cleverness you can demonstrate to others, but in achieving a mental stance that processes evidence correctly and efficiently.
He is borrowing the rhetoric of Zen to say that because, as with Zen, the center of our Way is found in silence and non-attachment. The Way of Zen wants you to lose your attachment to desires; the Way of rationality wants you to lose your attachment to beliefs.
I actually wouldn’t call Zen a “central theme”. More “a recurring rhetorical device”. It’s not Zen Buddhist content that the Sequences use, it’s the emulation of Zen rhetoric as a device to subtly shift the reader’s mental stance.
I described myself as a subject-matter expert in epistemology. That means I’m familiar with the branch of philosophy that considers the maintenance and justification of knowledge, and considers different theories of same.
Since you’re using the name ‘metatroll’, I think I’ll leave it at that.
I know who Deutsch is, and I’d never even heard that he had a movement around him.
Which is relevant. I’ve had my ear to the ground for interesting rationality training since, oh, 1975 or so, and I definitely run in the right circles to pick up rumors of stuff like this. The fact that your report is my first sign for that crew is from my POV pretty good evidence that its impact was very, very low.
I also question some of your other premises. Speaking as a person who approaches the Yudkowskian reform from a perspective formed by a previous rationality movement, I don’t think it has all that much difficulty communicating with outsiders at all, certainly not compared to the culture around General Semantics. To the extent it does: well, science is hard. There’s not much point in trying to pitch the Sequences to people much below the American mean IQ level, at least not before our tutorial techniques get a lot better than they are now.
Nor, speaking as a person with considerable subject-matter expertise in epistemology, do I think this movement has a particularly “immodest” epistemology. If one doesn’t think one’s theory of knowledge can explain the justification of knowledge in very broad generality, there’s not much point in maintaining it at all, is there?
Speaking as a semi-outsider, it’s not clear to me that this community has mandatory writings at all. Yes, a lot of us have read parts of the Sequences, if not all (I’m in the not-all camp myself), but I see no sign that one’s in-groupness depends on having done that. It’s very easy for me to imagine someone fitting into this movement despite never having read a word of Yudkowsky, simply by adopting the community’s discourse habits and its concerns.
There’s a technical problem. My blog is currently frozen due to a stuck database server; I’m trying to rehost it. But I agree to your plan in principle and will discuss it with you when the blog is back up.
Heh. Come to think of it from that angle, “a bit true, but not really” would have been exactly my assessment if I were in your shoes. Thanks, I appreciate the nuanced judgment.
Since you’ve mentioned Rootless Root, I will say that there is another essay I am now thinking of writing about the playful use of Zen tropes. The rationalist community and the hacker culture both have strong traditions of this sort of play... but the functional reasons for the tradition are not the same! And the way they differ is interesting.
That’s enough of a teaser for now. :-)
I don’t really have an interesting answer, I’m afraid. Busy life, lots of other things to pay attention to, never got around to it before.
Now that I’ve got the idea, I may re-post some rationality-adjacent stuff from my personal blog here so the LW crowd can know it exists.
Author of “Dancing with the Gods” checks in.
First, to confirm that you have correctly understood the points I was trying to make. I intended “Dancing with the Gods” to be a rationalist essay, in the strictest Yudkowskian-reformation sense of the term “rationalist”, even though the beginnings of the reformation were seven years in the future when I wrote it.
<insert timeless-decision-theory joke here>
Second, that I 100% agree with your analysis of why “Meditations on Moloch” was important.
Third and most importantly, to say that I like your use of the term “sandbox” a lot, and I’m going to adopt it. Maintaining a hard distinction between inside the sandbox and outside really is an important tactic for dealing with mythic mode in general, and magic/theurgy in particular.
You got it from infosec jargon, of course, and I’m going to emphasize its use as a verb. A lot of people have damaged themselves through not understanding that they need to sandbox, and a lot of other people (including, as you imply, many rationalists) fear mythic mode unnecessarily because they don’t know that sandboxing is possible.