What is the next level of rationality?

This is part 1 of our dialogue series on the question “What is the next level of rationality?”.

lsusr

Yudkowsky published Go Forth and Create the Art! in 2009. It is 2023. You and I agree that, in the last few years, there haven’t been many rationality posts on the level of Eliezer Yudkowsky (and Scott Alexander). In other words, nobody has gone forth and created the art. Isn’t that funny?

What Came Before Eliezer?

Yoav Ravid

Yes, we agreed on that. I remarked that there were a few levels of rationality before Eliezer. The one directly before him was something like Sagan-Feynman-style rationality (whose fans often wore the label “Skeptics”). But that’s mostly tangential to the point.

lsusr

Or perhaps it’s not tangential to the point at all. Feynman was referenced by name in Harry Potter and the Methods of Rationality. I have a friend in his 20s who is reading Feynman for the first time. He’s discovering things like “you don’t need a labcoat and a PhD to test hypotheses” and “it’s okay to think for yourself”.

Yoav Ravid

How do you see it connecting to the question “What’s the next level of rationality?”

lsusr

Yudkowsky is a single datapoint. The more quality perspectives we have about what “rationality” is, the better we can extrapolate the fit line.

Yoav Ravid

I see, so perhaps a preliminary to this discussion is the question “which level of rationality is Eliezer’s?”

lsusr

Yeah. Eliezer gets extra attention on LessWrong, but he’s not the only writer on the subject of rationality. I think we should start by asking who’s in this cluster we’re pointing at.

Yoav Ravid

Alright, so in the Feynman-Sagan cluster, I’d also point to Dawkins, Michael Shermer, Sam Harris, Hitchens, and James Randi, for example. Not necessarily because I’m very familiar with their work or find it particularly valuable, but because they seem like central figures in that cluster.

lsusr

Those are all reasonable names, but I’ve never actually read any of their work. My personal list includes Penn Jillette. Paul Graham and Bryan Caplan feel important too, even though they’re not branded “skeptic” or “rationality”.

Yoav Ravid

I’ve read a bit, but I came late enough to the scene, and found Eliezer and Scott quickly enough, that I never got the chance to read them deeply before then. And after I did, I didn’t feel the need.

Yoav Ravid

Yep, and Paul Graham is also someone Eliezer respects a lot, and I think he might even have been mentioned in the Sequences. I guess you could add various sci-fi authors to the list.

lsusr

Personally, I feel the whole thing started with Socrates. However, by the time I got around to cracking open The Apology, I felt like I had already internalized his ideas.

But I don’t get that impression when I hang out with Rationalists. The median reader of Rationality: A-Z shatters under Socratic dialogue.

Yoav Ravid

I agree, though if we’re trying to cut the history of rationality into periods/​levels, then Socrates is a different (the first) period/​level (though there’s a sense in which he’s been at a higher level than many who came after him).

Yoav Ravid

I think Socrates’ brilliance came from realizing how little capacity to know people had at the time, and from fully developing the skill of not fooling himself. Those who came after him mostly developed the capacity to know, while not paying as much attention to not fooling themselves.

I think the “Skeptics” got on this journey of thinking better and recognizing errors, but were almost completely focused on finding them in others. With Yudkowsky the focus shifted inward in a very Socratic manner, to find your own faults and limitations.

Tangent about Trolling as a core rationality skill

lsusr

I’ve never heard the word “Socratic” used in that way. I like it.

Another similarity Yudkowsky has to Socrates is that they’re both notorious trolls.

Yoav Ravid

That made me laugh. It’s true. I remember stories from the Sequences of dialogues he had with people whom he basically trolled.

lsusr

And there’s a good reason for it. Trolling your students is absolutely necessary when teaching rationality. I troll my students/​friends all the time. When I visited the Lightcone offices in Berkeley, I trolled them too.

Yoav Ravid

Ah, I see that you have written about this.

lsusr

Do you know why trolling is so important?

Yoav Ravid

I’m not sure I understand exactly how you use the concept, so tell me why you think it’s so important.

lsusr

I could explain this in simple words. But I think it would be more fun and more educational if I trolled you instead. Are you okay with that?

Yoav Ravid

Haha, sure :)

lsusr

You’ve convinced me. Trolling is unethical. Rationalist teachers shouldn’t do it. Let’s move on.

Yoav Ravid

lol, I didn’t say anything so I couldn’t have convinced you of anything :)

Perhaps you’ve convinced yourself, but I bet you haven’t and you’re just trolling :)

lsusr

(:

lsusr

A rationalist must be skeptical of authority. Suppose you are a teacher of rationality, and therefore an authority figure. How do you ethically teach your students to be skeptical of you?

Yoav Ravid

As an educator I do think about that a lot. On the one hand I want to tell the students everything I know that would be useful for them to know too, on the other hand I want to account for the possibility that I’m wrong, so I need to develop their ability to scrutinize what I say and check if it’s actually true.

Some teachers solve this by sacrificing either the first or the second part, because doing both well is harder, and that’s unfortunate.

When I was in school I had a teacher who was very good at combining these. She’d start a topic by giving us a passionate speech, which made us care and told us what she believed, but then she made us dig into the subject, read various reports, and come to our own conclusions. And it worked: many students did come to different conclusions.

I also think back to ‘My Favorite Liar’, where a teacher planted a falsehood in every lecture and told the students about it, so they would scrutinize his lectures to find the intentional error, but in the process also doubt and scrutinize everything else. And I guess you can call that a kind of trolling.

lsusr

Good. Very good!

lsusr

That is indeed a kind of trolling. After all, when a teacher is about to deceive you, she/​he always lets you know in advance that you are about to encounter misinformation. That’s how you know when you need to be skeptical.

lsusr

Do you understand?

Yoav Ravid

I think so. I suggested specifying “intentionally deceive you” and you rejected that. And I thought: but how can he let you know he’s going to deceive you if he’s not doing so intentionally? But since he might deceive you unintentionally all the time, he has to let you know in advance that you might be deceived and should be skeptical. Is that the idea?

lsusr

[Note to readers: There’s a feature in the LessWrong dialogue interface where Yoav can suggest a change to what I wrote. Yoav did so. I rejected the change.]

lsusr

That is the idea.

Back to “What’s the next level of rationality?”

lsusr

Getting back to our original question, “What’s the next level of rationality? [after Eliezer]”, one of the (many) things he didn’t get around to writing about is how important it is for rationalists to troll each other.

lsusr

Feynman was a troll too, by the way.

Yoav Ravid

Absolutely, even more so than Eliezer, I think. “Surely You’re Joking, Mr. Feynman” is one of the funniest books I’ve read.

lsusr

It’s hilarious.

lsusr

Besides the importance of trolling, what are some other facets of rationality that Eliezer never got around to writing about?

Yoav Ravid

Well, I think the best place to start is the preface Eliezer wrote in 2015 to ‘Rationality: A-Z’, where he lists five overarching errors he made in the Sequences:

  1. Not writing with the intention of helping people do better in their everyday lives, focusing instead on helping them solve big, difficult, important problems.

  2. Focusing too much on how to learn the theory and not enough on how to practice it.

  3. Focusing too much on rational belief, too little on rational action.

  4. Not organizing the content in the Sequences well. (Things are much better now with the new sequences and the LW wiki.)

  5. Speaking plainly about the stupidity of what appeared to be stupid ideas, instead of writing more courteously.

I think the first 3 are relevant to our discussion.

Yoav Ravid

Some other points I’d add (some practical, some foundational/​theoretical):

  1. The sequences and most of LW thereafter focused mainly on how to be more rational as an individual, and not on how to collaborate as rationalists or be more rational as a pair or a group.

  2. It overlooked the value of the information embedded in tradition (things like the Lindy effect, Chesterton’s fence, etc.).

  3. Relatedly, it overlooked how many things, like certain biases, may actually be rational when analyzed more carefully or when our limitations are taken into account.

  4. It’s based on Bayesianism, which is a bit like General Relativity: we know it’s very nearly correct, but not fully, and there should be something after it that is even more correct. With Bayesianism the problem is that it assumes logical omniscience and an observer standing outside the world.

  5. Most of the foundational problems pointed out in the Sequences — anthropic reasoning, reflective reasoning, strange-loop circularity — haven’t been solved. And though these aren’t very relevant in day-to-day life, because they either don’t come up or we have an intuition for the answer, they would sure be nice to solve, and solving them would show that rationality has firm foundations, for those who care about such things.

These have all been addressed to some degree since the Sequences were written, of course, so none of this is new, and many of these points are in the “water supply” to a degree (especially the value of information in tradition).

But it shows that we have no rationalist canon that actually encompasses modern rationalist thought, which is something we would need to foster a new phase/​level of rationality.

lsusr

This is very helpful. You’re pointing at topics I’ve wanted to write about but have been unsure how to approach. For example, I want to write a post about the benefits of hypocrisy. (Most religious people are hypocrites. If you cure the hypocrisy, some may turn toward rationality, but others just end up as fundamentalist extremists.) It falls under your “overlooked value of tradition” umbrella.

But I think the most promising point might be “how to collaborate as rationalists or be more rational as a pair or a group”. This wasn’t so important when Eliezer was starting. After all, there was little community to coordinate. But I’ve been doing many Socratic dialogues, and often the first thing I have to do is teach my partner how to have a Socratic dialogue.

lsusr

That connects to “helping people do better in their everyday lives” and “[f]ocusing too much on how to learn the theory and not enough on how to practice it” too.

Yoav Ravid

Yes, it was one of the first things I wanted to write about on LW (I have a draft on pair rationality from January 2020), but I didn’t feel I had a lot to say about it, and I didn’t have anyone in my personal life who was as interested in rationality as me (I still don’t), so I didn’t have the opportunity to develop that part on my own.

lsusr

It’s pretty hard to develop the art of Socratic dialogue on your own. 😛

I’ve got a lot to say about Socratic dialogues but, as you pointed out, my writing is often very difficult for people to interact with.

I think the root problem is that when I’m writing for an abstract audience, I’m awful at guessing what readers will and won’t understand. That’s why I like these dialogues so much. I can just ask “Do you understand?”

Yoav Ravid

And it’s working, I’m experiencing none of the difficulties I tend to experience with your writing.

lsusr

Then perhaps the next step of this rationality project is for you and me to do a Socratic dialogue about “how to do a Socratic dialogue”.

Yoav Ravid

Alright, that sounds good. Let’s pick it up from there next time :)

lsusr

(: