I have… issues with this comment; it is not without flaws. That said, Scott, I want to focus on a point which you make and with which I don’t so much disagree as think that you don’t take it far enough.
I will be the first to agree that becoming a self-help community was, for Less Wrong, a terrible, terrible mistake. But I would go further.
I would say that becoming a community, at all, was a mistake. There was never a need for it. Despite even Eliezer’s approval and encouragement, it was a bad idea—because it predictably led to all the problems, and the problem-generating dynamics, which we have seen and which we continue to see.
It always was, and remains, a superior strategy, to be a, shall we say, “project group”—a collective of individuals, who do not constitute a community, who do not fulfill each other’s needs for companionship, friendship, etc., who do not provide a “primary social circle” for one another, but who are united by their collective interest in, and commitment to, a certain pursuit. A club, in other words.
In short, “bonding” was at least as bad an idea as I initially suspected. Probably much worse.
Very much agree. Some people want the well-being benefits of belonging to a substitute church, and will get these benefits somewhere anyway, but I think productive projects should avoid that association. (And accept the risk of fizzling out, like IAFF and Arbital did when trying to grow independently from LW.) Here’s hoping that Abram, Paul and Rohin with their daily posts can make LW a more project-focused place.
I’ve never even heard of IAFF! What is that?
Edit: oops, cousin_it beat me to it.
The “Intelligent Agent Foundations Forum” at https://agentfoundations.org/.
It was a platform MIRI built for discussing their research, which required an invite to post/comment. There’s lots of really interesting stuff there—I remember enjoying reading Jessica Taylor’s many posts summarising intuitions behind different research agendas.
It was a bit hard and confusing to use, and noticing that it seemed like we might be able to do better was one of the things that led us to build the AI Alignment Forum.
As the new forum is a space for discussion of all alignment research, and all of the old IAFF stuff is a subset of that, we (with MIRI’s blessing) imported all the old content. At some point we’ll set all the old links to redirect to the AI Alignment Forum too.
agentfoundations.org—lots of good stuff there, but most of it gets very few responses. The recently launched alignmentforum.org is an attempt to do the same thing, but with crossposting to LW.
Very much disagree—but this is as someone not in the middle of the Bay Area, where the main part of this is happening. Still, I don’t think rationality works without some community.
First, I don’t think that the alternative communities that people engage with are epistemically healthy enough to give people what they need to reinforce good norms for themselves.
Second, I don’t think that epistemic rationality is something that a non-community can do a good job with, because if everyone is going it alone, people get far too little personal reinforcement and positive feedback to stick with it.
Are you saying that epistemic rationality didn’t exist before the LW community, or that (for instance) academia is an adequate community?
Academia in general is certainly not an adequate community from an epistemic standards point of view, and while small pockets are relatively healthy, none are great. And yes, the various threads of epistemic rationality certainly predated LessWrong, and there were people and even small groups that noted the theoretical importance of pursuing it, but I don’t think there were places that actively advocated that members follow those epistemic standards.
To get back to the main point: while I don’t think that it is necessary for the community to “fulfill each other’s needs for companionship, friendship, etc.,” I don’t think that there is a good way to reinforce norms without something at least as strongly affiliated as a club. There is a fine line between club and community, and I understand why people feel there are dangers of going too far, but before LW, few groups seem to have gone nearly far enough in building even a project group with those norms.
By whose epistemic standards? And what’s the evidence for the claim?
Mine, and my experience working in academia. But (with the very unusual exceptions of FHI, GMU’s economics department, and possibly the new center at Georgetown) I don’t think you’d find much disagreement among LWers who interact with academics that academia sometimes fails to do even the obvious, level-one intelligent character things to enable them to achieve their goals.
I think your comment is unnecessarily hedged—do you think that you’d find much disagreement among LWers who interact with FHI/GMU-Econ over whether people there sometimes (vs never) fail to do level-one things?
I think I understand the connotation of your statement, but it’d be easier to understand if you strengthened “sometimes” to a stronger statement about academia’s inadequacy. Certainly the rationality community also sometimes fails to do even the obvious, level-one intelligent character things to enable them to achieve their goals—what is the actual claim that distinguishes the communities?
That’s a very good point; I was definitely unclear.
I think that the critical difference is that in epistemically healthy communities, when such a failure is pointed out, some effort is spent on identifying and fixing the problem, instead of pointedly ignoring it despite efforts to solve it, or spending time actively defending the inadequate status quo from even Pareto-improving changes.
Oh, I see, your complaint is about instrumental rationality. Well, naturally they’re bad at that. Most people are. You don’t get good at doing things by studying rationality in the abstract. EY couldn’t succeed in spending $300k of free money on producing software to his exact specifications.
I was thinking more of epistemic rationality, having given up on instrumental rationality.
I don’t think they get epistemic rationality anywhere near correct either. As a clear and simple example, there are academics currently vigorously defending their right not to pre-register empirical studies.
And even of those who do pre-register, nobody puts down their credence for the likelihood that there’s an effect.
But caring about rationality, as well as curiosity and skepticism, is a pretty big part of who I am and I want to have a group of people in my life who are okay with that. I want to have people I can be rational around without them being rude or condescending towards me.
This is a fine desire to have. I share it.
And people who have some interest in rationality, any interest at all, are the only people I really feel fully safe around for this reason.
And herein lies your problem.
You have, I surmise, encountered many terrible people in your life. This sucks. The solution to this is simple, and it’s one I have advocated in the past and continue to advocate:
Be friends with people who are awesome. Avoid people who suck.
Let me assure you, in the strongest terms, that “rationalists” are not the only people in the world who are awesome. I, for one, have a wonderful group of friends, some of whom are “interested in rationality” in, perhaps, the most tangential way, and some not at all. My friends are never “rude or condescending” to me; I can be, and am, as “rational” around them as I wish; and the idea that my close friends would not be okay with curiosity and skepticism is inconceivable.
It is even perfectly fine and well if you select your personal friends on the basis of some combination of intelligence, curiosity, skepticism, “rationality”, etc. But this is not at all the same thing as making a “rationality community” out of Less Wrong & co—not even close.
Finally:
For example, I want people I can be around and acknowledge that high rents are caused by housing shortages instead of “tech-bros”. I want it to be safe to say that without being accused of mansplaining.
For God’s sake, get out of the Bay Area. Seriously. Things are not like this in the rest of the world.
Alright. Well, generalize my advice, then: leave the social circles where that sort of thing is at all commonplace. If that requires physically moving cities, do that. (For example, I live in New York City. I don’t recall ever hearing the term “tech-bro” used in a real conversation, except perhaps as an ironic mention… maybe not even that.)
The thing is—and here I disagree with your initial comment thread as well—peer pressure is useful. It is spectacularly useful and spectacularly powerful.
How can I make myself a more X person, for almost any value of X, even values that we would assume entirely inherent or immutable? Find a crowd of X people that are trying to be more X, shove myself in the middle, and stay there. If I want to be a better rationalist, I want friends that are better rationalists than me. If I want to be a better forecaster, I want friends that are better forecasters than me. If I want to be a more effective altruist, earn more to give more, learn more about Y academic topic, or any other similar goal, the single most powerful tool in my toolbox—or at least the most powerful tool that generalizes so easily—is to make more friends that already have those traits.
Can this go bad places? Of course it can. It’s a positive feedback cycle with no brakes save the ones we give it. But…
… well, to use very familiar logic: certainly, it could end the world. But if we could harness and align it, it could save the world, too.
(And ‘crowds of humans’, while kind of a pain to herd, are still much, much easier than AI.)
You’re equivocating between the following:
1. To become more X, find a crowd of people who are more X.
2. To become more X, find a crowd of people who are trying to be more X.
Perhaps #1 works. But what is actually happening is #2.
… or at least, that’s what we might charitably hope is happening. But what often happens instead is:
3. To become more X, find a crowd of people who are pretending to try to be more X.
And that definitely doesn’t work.
Actually, no, I explicitly want both 1 and 2. Merely being more X than me doesn’t help me nearly as much as being both more X and also always on the lookout for ways to be even more X, because they can give me pointers and keep up with me when I catch up.
And sure, 3 is indeed what often happens.
… First of all, part of the whole point of all of this is to be able to do things that often fail, and to succeed at them anyway; being able to do the difficult is something of a prerequisite to doing the impossible.
Secondly, all shounen quips aside, it’s actually not that hard to tell when someone is merely pretending to be more X. It’s easy enough that random faux-philosophical teenagers can do it, after all :V. The hard part isn’t staying away from the affective death spiral; it’s finding the people who are actually trying among them—the ones who, almost definitionally, are not talking nearly as much about it, because “slay the Buddha” is actually surprisingly general advice.
Actually, no, I explicitly want both 1 and 2. Merely being more X than me doesn’t help me nearly as much as being both more X and also always on the lookout for ways to be even more X, because they can give me pointers and keep up with me when I catch up.
What I meant by #2 is “a crowd of people who are trying to be more X, but who, currently, aren’t any more X than you (or indeed very X at all, in the grand scheme of things)”, not that they’re already very X but are trying to be even more X.
EDIT:
Secondly, all shounen quips aside, it’s actually not that hard to tell when someone is merely pretending to be more X.
Empirically, it seems rather hard, in fact.
Well, either that, or a whole lot of people seem to have some reason for pretending not to be able to tell…
Right—they call it the “principle of charity.”
What I meant by #2 is “a crowd of people who are trying to be more X, but who, currently, aren’t any more X than you (or indeed very X at all, in the grand scheme of things)”, not that they’re already very X but are trying to be even more X.
Fair. Nevertheless, if the average of the group is around my own level, that’s good enough for me if they’re also actively trying. (Pretty much by definition of the average, really...)
Empirically, it seems rather hard, in fact.
Well, either that, or a whole lot of people seem to have some reason for pretending not to be able to tell…
… Okay, sorry, two-place function. I don’t seem to have much trouble distinguishing.
(And yes, you can reasonably ask how I know I’m right, and whether or not I myself am good enough at the relevant Xs to tell, etc. etc., but… well, at some point that all turns into wasted motions. Let’s just say that I am good enough at distinguishing to arrive at the extremely obvious answers, so I’m fairly confident I’ll at least not be easily misled.)