If the rationality community is going to do outreach, it should do it in as bulletproof a way as possible.
Why?
Now that we know that Newtonian physics was wrong, and Einstein was right, would you support my project to build a time machine, travel to the past, and assassinate Newton? I mean, it would prevent incorrect physics from being spread around. It would make Einstein’s theory more acceptable later; no one would criticize him for being different from Newton.
Okay, I don’t really know how to build a time machine. Maybe we could just go burn some elementary-school textbooks, because they often contain too simplified information. Sometimes with silly pictures!
Seems to me that I often see the sentiment that we should raise people from some imaginary level 1 directly to level 3, without going through level 2 first, because… well, because level 3 is better than level 2, obviously. And if those people perhaps can’t make the jump, I guess they simply were not meant to be helped.
This is why I wrote about “the low-hanging fruit that most rationalists wouldn’t even touch for… let’s admit it… status reasons”. We are (or imagine ourselves to be) at level 3, and all levels below us are equally deplorable. Helping someone else to get on level 3, that’s a worthy endeavor. Helping people get from level 1 to level 2, that’s just pathetic, because the whole level 2 is pathetic. Even if we could do that at a fraction of the cost.
Maybe that’s true when building a superhuman artificial intelligence (better to get it a hundred years later than to get it wrong), but it doesn’t apply to most areas of human life. Usually, an improvement is an improvement, even when it’s not perfect.
Making all people rationalists could be totally awesome. But making many stupid people slightly less stupid, that’s also useful.
Let’s start with a false statement from one of Gleb’s articles:
Intuitively, we feel our mind to be a cohesive whole, and perceive ourselves as intentional and rational thinkers. Yet cognitive science research shows that in reality, the intentional part of our mind is like a little rider on top of a huge elephant of emotions and intuitions. This is why researchers frequently divide our mental processes into two different systems of dealing with information, the intentional system and the autopilot system.
What’s false? Researchers don’t use the terms “intentional system” and “autopilot system”.
Why is that a problem? Aren’t the terms close enough to System 1 and System 2?
A person who’s interested might want to read additional literature on the subject. The fact that the terms Gleb invented don’t match the existing literature makes it harder for a person to go from reading Gleb’s articles to reading higher-level material.
If the person digs deeper, they will sooner or later run into trouble. They might have a conversation with a genuine neuroscientist, mention the “intentional system” and “autopilot system”, and find that the neuroscientist has never heard the distinction made in those terms.
It might take a while before they understand that they were misled, and in the meantime it may hinder them from progressing.
I think talking about System 1 and System 2 the way Gleb does raises the risk of readers coming away believing that reflective thinking is superior to intuitive thinking. It suggests that rationality is about using System 2 for important issues, instead of focusing on aligning System 1 and System 2 with each other, the way CFAR proposes.
The stereotype of people who categorically prefer System 2 to System 1 is the straw Vulcan. Level 2 of rationality is not “being a straw Vulcan”.
In the article on his website Gleb says:
The intentional system reflects our rational thinking, and centers around the prefrontal cortex, the part of the brain that evolved more recently.
That sounds to me like neurobabble. Kahneman doesn’t say that System 2 lives in a specific part of the brain.
Even if it were completely true, having that knowledge doesn’t help a person be more rational. If you want to make a message as simple as possible, you could drop that piece of information without any problem.
Why doesn’t he drop it and make the article simpler? Because it helps with pushing an ideology: what other people in this thread have called rationality-as-religion, the kind of rationality that fills someone’s sense of belonging to a group.
I don’t see people’s rationality getting raised in that process.
That leads to the question of “what are the basics of rationality?”
I think the Facebook group sometimes provides a good venue for understanding what new people get wrong. Yesterday one person accused another of being a fake account. I asked the accuser for his credence, but he replied that he can’t give a probability for something like that. The accuser didn’t think in terms of Cromwell’s rule.
Making that step from thinking “you are a fake account” to having a mental category of “80% certainty: you are a fake account” is progress. No neuroscience is needed to make that progress.
Rationality for beginners could attempt to teach Cromwell’s rule while keeping it as simple as possible. I’m even okay if the term Cromwell’s rule doesn’t appear. The article can have pretty pictures, but it shouldn’t make any false claims.
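To make Cromwell’s rule concrete, here is a minimal sketch of a single Bayesian update (my own illustration, not taken from any of the articles under discussion; the numbers are made up). A credence of 80% can move when evidence arrives; a credence of exactly 0 or 1 can never move, no matter how strong the evidence, which is the point of the rule.

```python
def update(prior, p_e_given_h, p_e_given_not_h):
    """One Bayesian update: P(H|E) given the prior P(H) and the
    likelihood of the evidence under H and under not-H."""
    numerator = prior * p_e_given_h
    return numerator / (numerator + (1 - prior) * p_e_given_not_h)

# Start at 80% credence "this is a fake account", then observe
# evidence that is 3x more likely if the account is genuine.
print(update(0.80, 0.10, 0.30))  # drops to roughly 0.57

# Cromwell's rule: priors of exactly 0 or 1 are immune to evidence.
print(update(1.0, 0.10, 0.30))   # stays exactly 1.0
print(update(0.0, 0.90, 0.01))   # stays exactly 0.0
```

The qualitative lesson survives even if the article never shows a formula: reserving 0 and 1 for logical certainties is what keeps a credence revisable.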
I admit that “What are the basics of rationality?” isn’t an easy question. This community often complicates things.
Scott recently wrote “What developmental milestones are you missing?”. That article lists four milestones, one of them being Cromwell’s rule (Scott doesn’t name it).
In my current view of rationality, other basics might be TAPs, noticing, tiny habits, “how not to be a straw Vulcan”, and “have conversations with the goal of learning something new yourself, instead of the goal of just affecting the other person”.
A good way to search for basics might also be to notice events where you yourself go: “Why doesn’t this other person get how the world works? X is obvious to people at LW; why do I have to suffer from living in a world where people don’t get X?”
I don’t think the answer to that question will be that people think the prefrontal cortex is about System 2 thinking.
I agree with much of this, but that quote isn’t a false claim. It does not (quite) say that researchers use the terms “intentional system” and “autopilot system”, which seem like sensible English descriptions if for some bizarre reason you can’t use the shorter names. Now, I don’t know why anyone would avoid the scholarly names when for once those make sense—but I’ve also never tried to write an article for Lifehack.
What is your credence for the explanation you give, considering that, e.g., the audience may remember reading about many poorly-supported systems with levels numbered I and II? Seeing a difference between those and the recognition that humans evolved may be easier for some than evaluating journal citations.
which seem like sensible English descriptions if for some bizarre reason you can’t use the shorter names
Kahneman’s motivation for using “System 1” and “System 2” isn’t to have shorter names. It’s that people have existing preconceptions attached to everyday words for mental concepts, and he doesn’t want to invoke them.
Wikipedia’s summary of Kahneman:
In the book’s first section, Kahneman describes two different ways the brain forms thoughts:
System 1: Fast, automatic, frequent, emotional, stereotypic, subconscious
System 2: Slow, effortful, infrequent, logical, calculating, conscious
Emotional/logical is a different distinction than intentional/autopilot. Trained people can switch emotions on and off via their intentions, and that process has little to do with being logical or calculating.
But even giving them new names that scientists don’t use might be a valid move. If you do that, however, you should be open about the fact that you invented new names.
Given science’s public nature, I also think you should be open about why you chose certain terms; choosing new terms should come with an explanation of why you prefer them over the alternatives.
The reason shouldn’t be that your organisation is named “Intentional Insights” and that’s why you call it the “intentional system”. Again, that pattern leads to the “rationality is about using System 2 instead of System 1” position, which differs from the CFAR position.
In Gleb’s own summary of Thinking, Fast and Slow he writes:
System 1 runs automatically and System 2 is normally in a comfortable low-effort mode, in which only a fraction of its capacity is engaged. System 1 continuously generates suggestions for System 2: impressions, intuitions, intentions, and feelings.
Given that in Kahneman’s framework intentions are generated by System 1, calling System 2 the “intentional system” produces problems.
What is your credence for the explanation you give,
Explanations don’t have credence, predictions do. If you specify a prediction I can give you my credence for it.
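That distinction can be made operational: once credences attach to predictions, the predictions can be scored after the fact. One standard scoring rule is the Brier score. The sketch below is my own hypothetical illustration (the forecasts are invented), not something from the thread.

```python
def brier_score(forecasts):
    """Mean squared error between stated credences and binary outcomes.
    Lower is better; always answering 0.5 scores exactly 0.25."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# (credence that the claim is true, what actually happened: 1 or 0)
forecasts = [(0.8, 1), (0.8, 1), (0.8, 0), (0.6, 1)]
print(brier_score(forecasts))  # roughly 0.22 for this made-up record
```

This is why a stated “80% certainty: you are a fake account” is a step up from a bare accusation: unlike an explanation, it can later be checked against what happened.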
Maybe we could just go burn some elementary-school textbooks, because they often contain too simplified information. Sometimes with silly pictures!
Did you ever read about Feynman’s experience reading science textbooks for elementary school? (It’s available online here.)
There are good and bad ways to simplify.
This is why I wrote about “the low-hanging fruit that most rationalists wouldn’t even touch for… let’s admit it… status reasons”.
Sure, there are people I’d rather not join the LessWrong community for status reasons. But I don’t think the resistance here is about status instead of methodology. Yes, it would be nice to have organizations devoted to helping people get from level 1 to level 2, but if you were closing your eyes and designing such an organization, would it look like this?
(Both agreeing with and refining your position, and directed less to you than the audience):
Personally, I’m at level 21, and I’m trying to raise the rest of you to my level.
Now, before you take that as a serious statement, ask yourself how you feel about that proposition, and how inclined you would be to take anything I said seriously if I actually believed that. Think about to what extent I behave like I -do- believe that, and how that changes the way what I say is perceived.
http://lesswrong.com/lw/m70/visions_and_mirages_the_sunk_cost_dilemma/ ← This post, and pretty much all of my comments, had reasonably high upvotes before I revealed what I was up to. Now, I’m not going to say it didn’t deserve to get downvoted—I learned a lot from that post that I should have known going into it—but I’d like to point out the fundamental similarities, but scaled up a level, between what I do there, and typical rationalist “education”. “Here’s a thing. It was a trick! Look at how easily I tricked you! You should now listen to what I say about how to avoid getting tricked in the future.” Worse, cognitive dissonance will make it harder to fix that weakness in the future. As I said, I learned a -lot- in that post; I tried to shove at least four levels of plots and education into it, and instead, turned people off with the first or second one. I hope I taught people something, but in retrospect, and far removed from it, I think it was probably a complete and total failure which mostly served to alienate people from the lessons I was attempting to impart.
The first step to making stupid people slightly less stupid is to make them realize the way in which they’re stupid in the first place, so that they become willing to fix it. But you can’t do that, because, obviously, people really dislike being told they’re stupid. Because there are some issues inherent in approaching other people with the assumption that they’re less than you, and that they should accept your help in raising them up. You’re asserting a higher status than them. They’re going to resent that, and cognitive dissonance is going to make them decide that the thing you’re better at, either you aren’t, or that it isn’t that important. So if you think that you can make “stupid people slightly less stupid”, you’re completely incompetent at the task.
But… show them that -you- are stupid, and show them you becoming less stupid, and cognitive dissonance will tell them that they were smarter than you, and that they already knew what you were trying to teach them. That’s a huge part of what made the Sequences so successful—riddled throughout them were admissions of Eliezer’s own weakness. “This is a mistake I made. This is what I realized. This is how I started to get past that mistake.” What made them failures, however, is the way they made those who read them feel Enlightened, like they had just Leveled Up twenty times and were now far above ordinary plebeians. The critical failure of the Sequences is that they didn’t teach humility; the lesson you -should- come away from them with is the idea that, however much Less Wrong you’ve become, you’re still deeply, deeply wrong. And that’s okay.
Which provokes a dilemma. Everybody who wants to teach rationality to others, because it leveled them up twenty times and look at those stupid people falling prey to the non-central fallacy on a constant basis, is completely unsuitable to do so.
This is a pretty confusing point. I have plenty of articles where I admit my failures and discuss how I learned to succeed.
Secondly, I have only just started publishing on Lifehack (3 articles so far), and my articles far outperform the average of under 1K shares. That is the average for experienced and non-experienced writers alike. My articles have all been shared over 1K times, some twice that if not more. The fact that they are shared so widely is demonstrable evidence that I understand my audience and engage it well.
BTW, curious if any of these discussions have caused you to update on any of your claims to any extent?
I now assign negligible odds to the possibility that you’re a sociopath (used as a shorthand for any of a number of hostile personality disorders) masquerading as a normal person masquerading as a sociopath, and somewhat lower odds on you being a sociopath outright, with the majority of assigned probability concentrating on “normal person masquerading as sociopath” now. (Whether that’s how you would describe what you do or not, that’s how I would describe it, because the way you write lights up my “Predator” alarm board like a nearby nuke lights up a “Check Engine” light.)
The fact that they are shared so widely is demonstrable evidence that I understand my audience and engage it well.
Demonstrable evidence that you do so better than average isn’t the same as demonstrable evidence that you do so well.
Thanks for sharing about your updating! I am indeed a normal person, and have to put a lot of effort into this style of writing for the sake of what I perceive as a beneficial outcome.
I personally have updated away from you trolling me and see you as more engaged in a genuine debate and discussion. I see we have vastly different views on the methods of getting there, but we do seem to have broadly shared goals.
Fair enough on different interpretations of the word “well.” As I said, my articles have done twice as well as the average for Lifehack articles, so we can both agree that this is demonstrable evidence of a significant and above-average level of competency in an area where I am just starting (3 articles so far), although the term “well” is fuzzier.
The critical failure of the Sequences is that they didn’t teach humility; the lesson you -should- come away from them with is the idea that, however much Less Wrong you’ve become, you’re still deeply, deeply wrong.
Mmm. I typically dislike framings where A teaches B, instead of framings where B learns from A.
The Sequences certainly tried to teach humility, and some of us learned humility from The Sequences. I mean, it’s right there in the name that one is trying to asymptotically remove wrongness.
The main failing, if you want to put it that way, is that this is an online text and discussion forum, rather than a dojo. Eliezer doesn’t give people gold stars that say “yep, you got the humility part down,” and unsurprisingly people are not as good at determining that themselves as they’d like to be.
Mmm. I typically dislike framings where A teaches B, instead of framings where B learns from A.
Then perhaps you’ve framed the problem you’re trying to solve in this thread wrong. [ETA: Whoops. Thought I was talking to Villiam. This makes less-than-sense directed to you.]
The Sequences certainly tried to teach humility, and some of us learned humility from The Sequences. I mean, it’s right there in the name that one is trying to asymptotically remove wrongness.
I don’t think that humility can be taught in this sense, only earned through making crucial mistakes, over and over again. Eliezer learned humility through making mistakes, mistakes he learned from; the practice of teaching rationality is the practice of having students skip those mistakes.
The main failing, if you want to put it that way, is that this is an online text and discussion forum, rather than a dojo. Eliezer doesn’t give people gold stars that say “yep, you got the humility part down,” and unsurprisingly people are not as good at determining that themselves as they’d like to be.
Then perhaps you’ve framed the problem you’re trying to solve in this thread wrong.
Oh, I definitely agree with you that trying to teach rationality to others to fix them, instead of providing a resource for interested people to learn rationality, is deeply mistaken. Where I disagree with you is the (implicit?) claim that the Sequences were written to teach instead of being a resource for learning.
I don’t think that humility can be taught in this sense, only earned through making crucial mistakes, over and over again.
Mmm. I favor Bismarck on this front. It certainly helps if the mistakes are yours, but they don’t have to be. I also think it helps to emphasize the possibility of learning sooner rather than later; to abort mistakes as soon as they’re noticed, rather than when it’s no longer possible to maintain them.
Ah! My apologies. Thought I was talking to Villiam. My responses may have made less than perfect sense.
I favor Bismarck on this front. It certainly helps if the mistakes are yours, but they don’t have to be.
You can learn from mistakes, but you don’t learn what it feels like to make mistakes (which is to say, exactly the same as making the right decision).
I also think it helps to emphasize the possibility of learning sooner rather than later; to abort mistakes as soon as they’re noticed, rather than when it’s no longer possible to maintain them.
That’s where humility is important, and where the experience of having made mistakes helps. Making mistakes doesn’t feel any different from not making mistakes. There’s a sense that I wouldn’t make that mistake, once warned about it—and thinking you won’t make a mistake is itself a mistake, quite obviously. Less obviously, thinking you will make mistakes, but that you’ll necessarily notice them, is also a mistake.
It might be worth correcting “Greb” and “Greg” to “Gleb” in that, to forestall confusion.
Thanks.
So did I succeed? Or did I fail? And why?
He shouldn’t, even if he could.
The solution to the meta-level confusion (it’s turtles all the way down, anyway) is to spend a few years building up an immunity to iocane powder.