I support the opposite perspective—it was wrong to ever focus on individual winning and we should drop the slogan.
“Rationalists should win” was originally a suggestion for how to think about decision theory; if one agent predictably ends up with more utility than another, its choice is more “rational”.
But this got caught up in excitement around “instrumental rationality”—the idea that the “epistemic rationality” skills of figuring out what was true were only the handmaiden to a much more exciting skill of succeeding in the world. The community redirected itself to figuring out how to succeed in the world, ie became a self-help group.
I understand the logic. If you are good at knowing what is true, then you can be good at knowing what is true about the best thing to do in a certain situation, which means you can be more successful than other people. I can’t deny this makes sense. I can just point out that it doesn’t resemble reality. Donald Trump continues to be more successful than every cognitive scientist and psychologist in the world combined, and this sort of thing seems to happen so consistently that I can no longer dismiss it as a fluke. I think it’s possible (and important) to analyze this phenomenon and see what’s going on. But the point is that this will involve analyzing a phenomenon—ie truth-seeking, ie epistemic rationality, ie the thing we’re good at and which is our comparative advantage—and not winning immediately.
Remember the history of medicine, which started with wise women unreflectingly using traditional herbs to cure conditions. Some very smart people like Hippocrates came up with reasonable proposals for better ideas, and it turned out they were much worse than the wise women. After a lot of foundational work they eventually became better than the wise women, but it took two thousand years, and a lot of people died in the meantime. I’m not sure you can short-circuit the “spend two thousand years flailing around and being terrible” step. It doesn’t seem like this community has.
And I’m worried about the effects of trying. People in the community are pushing a thousand different kinds of woo now, in exactly the way “Schools Proliferating Without Evidence” condemned. This is not the fault of their individual irrationality. My guess is that pushing woo is an almost inevitable consequence of taking self-help seriously. There are lots of things that sound like they should work, and that probably work for certain individual people, and it’s almost impossible to get the funding or rigor or sample size that you would need to study it at any reasonable level. I know a bunch of people who say that learning about chakras has done really interesting and beneficial things for them. I don’t want to say with certainty that they aren’t right—some of the chakras have a suspicious correspondence to certain glands or bundles of nerves in the body, and for all I know maybe it’s just a very strange way of understanding and visualizing those nerves’ behavior. But there’s a big difference between me saying “for all I know maybe...” and a community where people are going around saying “do chakras! they really work!” But if you want to be a self-help community, you don’t have a lot of other options.
I think my complaint is: once you become a self-help community, you start developing the sorts of epistemic norms that help you be a self-help community, and you start attracting the sort of people who are attracted to self-help communities. And then, if ten years later, someone says “Hey, are we sure we shouldn’t go back to being pure truth-seekers?”, it’s going to be a very different community that discusses the answer to that question.
We were doing very well before, and could continue to do very well, as a community about epistemic truth-seeking mixed with a little practical strategy. All of the great ideas like effective altruism or friendly AI that the community has contributed to are things that people got by thinking, by trying to understand the world and avoid bias. I don’t think the rationalist community’s contribution to EA has been the production of unusually effective people to man its organizations (EA should focus on “winning” to be more effective, but no more so than any other movement or corporation, and it should go about it in the same way). I think rationality’s contribution has been helping carve out the philosophy and convince people that it was true, after which those people manned its organizations at a usual level of effectiveness. Maybe rationality also helped develop a practical path forward for those organizations, which is fine, and a more limited and more relevant domain than “self-help”.
One of my favorite examples is Roy Baumeister’s book Willpower, which he published in 2011. He’s a professor who, two years later, received the highest award given by the Association for Psychological Science, the William James Fellow Award.
The book builds on a bunch of non-replicable science and goes on to recommend that people eat sugar to improve their willpower, in a way that maps well onto what Feynman described as cargo cult science. We know sugar has bad effects on the human body.
Here we have a distinguished psychologist who wrote, in this decade, a book that does the equivalent of recommending bloodletting. That’s not a community with high epistemic norms.
You, Scott, recently wrote a post where you were surprised that neuroscience as a field could mess up a question such as neurogenesis. Given the track record of the community, that should be no surprise, as they are largely doing the thing Feynman called cargo cult science. They even publish papers that claim they can predict things better than is theoretically possible.
Everybody tries to succeed at life. It feels to me like saying “don’t do self-help because it might lead you to believe wrong things” is like saying “don’t reroute the trolley because rerouting makes you kill people”. Taking self-help seriously will expose you to the nontrivial effects that various self-help paradigms produce.
Is the point of the analogy you are trying to make that we should be less like Hippocrates and more like the wise ladies? That we should ignore all pursuit of health?
There are a lot of things that produce interesting effects, and if the only interesting effect you have experienced is playing with chakras, and you as a result recommend chakras, I’m not sure that exposure to self-help is the main issue.
Within our community, I don’t focus on spreading concepts merely because they produce interesting effects, but on those self-help techniques, like Focusing or Internal Double Crux, that provide insight in addition to producing interesting effects or results.
Is the better argument not that the wise ladies were onto something? Traditional medicines are a mixed bag, but some herbal remedies are truly effective and have since been integrated into scientific medicinal practices. Rather than inventing his own theoretical framework, Hippocrates would have been better-served by investigating the existing herbal practices and trying to identify the truly-effective from the placebo. Trial-and-error is a form of empiricism, after all—and this seems to be how cultural knowledge like herbal medicine came to be.
I have… issues with this comment; it is not without flaws. That said, Scott, I want to focus on a point which you make and with which I don’t so much disagree as think that you don’t take it far enough.
I will be the first to agree that becoming a self-help community was, for Less Wrong, a terrible, terrible mistake. But I would go further.
I would say that becoming a community, at all, was a mistake. There was never a need for it. Despite even Eliezer’s approval and encouragement, it was a bad idea—because it predictably led to all the problems, and the problem-generating dynamics, which we have seen and which we continue to see.
It always was, and remains, a superior strategy, to be a, shall we say, “project group”—a collective of individuals, who do not constitute a community, who do not fulfill each other’s needs for companionship, friendship, etc., who do not provide a “primary social circle” to its members, but who are united by their collective interest in, and commitment to, a certain pursuit. A club, in other words.
In short, “bonding” was at least as bad an idea as I initially suspected. Probably much worse.
Very much agree. Some people want the well-being benefits of belonging to a substitute church, and will get these benefits somewhere anyway, but I think productive projects should avoid that association. (And accept the risk of fizzling out, like IAFF and Arbital did when trying to grow independently from LW.) Here’s hoping that Abram, Paul and Rohin with their daily posts can make LW a more project-focused place.
It was a platform MIRI built for discussing their research, that required an invite to post/comment. There’s lots of really interesting stuff there—I remember enjoying reading Jessica Taylor’s many posts summarising intuitions behind different research agendas.
It was a bit hard and confusing to use, and noticing that we could probably do better was one of the things that led us to build the AI Alignment Forum.
As the new forum is a space for discussion of all alignment research, and all of the old IAFF stuff is a subset of that, we (with MIRI’s blessing) imported all the old content. At some point we’ll set all the old links to redirect to the AI Alignment Forum too.
agentfoundations.org—lots of good stuff there, but most of it gets very few responses. The recently launched alignmentforum.org is an attempt to do the same thing, but with crossposting to LW.
Very much disagree—though this is as someone not in the middle of the Bay Area, where the main part of this is happening. Still, I don’t think rationality works without some community.
First, I don’t think that the alternative communities people engage with are epistemically healthy enough to let people reinforce good norms in themselves.
Second, I don’t think that epistemic rationality is something a non-community can do a good job with: people get far too little personal reinforcement and too few positive vibes to stick with it if everyone is going it alone.
Academia in general is certainly not an adequate community from an epistemic-standards point of view, and while small pockets are relatively healthy, none are great. And yes, the various threads of epistemic rationality certainly predated LessWrong, and there were people and even small groups that noted the theoretical importance of pursuing it, but I don’t think there were places that actively advocated that members follow those epistemic standards.
To get back to the main point: while I don’t think it is necessary for the community to “fulfill each other’s needs for companionship, friendship, etc.,” I don’t think there is a good way to reinforce norms without something at least as strongly affiliated as a club. There is a fine line between club and community, and I understand why people feel there are dangers in going too far, but before LW, few groups seem to have gone nearly far enough in building even a project group with those norms.
Mine, and my experience working in academia. But (with the very unusual exceptions of FHI, GMU’s economics department, and possibly the new center at Georgetown) I don’t think you’d find much disagreement among LWers who interact with academics that academia sometimes fails to do even the obvious, level-one intelligent character things that would enable it to achieve its goals.
I think your comment is unnecessarily hedged—do you think that you’d find much disagreement among LWers who interact with FHI/GMU-Econ over whether people there sometimes (vs never) fail to do level-one things?
I think I understand the connotation of your statement, but it’d be easier to understand if you strengthened “sometimes” to a stronger statement about academia’s inadequacy. Certainly the rationality community also sometimes fails to do even the obvious, level-one intelligent character things to enable them to achieve their goals—what is the actual claim that distinguishes the communities?
That’s a very good point, I was definitely unclear.
I think that the critical difference is that in epistemically healthy communities, when such a failure is pointed out, some effort is spent on identifying and fixing the problem, instead of pointedly ignoring it despite efforts to solve it, or spending time actively defending the inadequate status quo against even Pareto-improving changes.
Oh, I see, your complaint is about instrumental rationality. Well, naturally they’re bad at that. Most people are. You don’t get good at doing things by studying rationality in the abstract. EY couldn’t succeed in spending $300k of free money on producing software to his exact specifications.
I was thinking more of epistemic rationality, having given up on instrumental rationality.
I don’t think they get epistemic rationality anywhere near correct either. As a clear and simple example, there are academics currently vigorously defending their right not to pre-register empirical studies.
But caring about rationality, as well as curiosity and skepticism, is a pretty big part of who I am and I want to have a group of people in my life who are okay with that. I want to have people I can be rational around without them being rude or condescending towards me.
This is a fine desire to have. I share it.
And people who have some interest in rationality, any interest at all, are the only people I really feel fully safe around for this reason.
And herein lies your problem.
You have, I surmise, encountered many terrible people in your life. This sucks. The solution to this is simple, and it’s one I have advocated in the past and continue to advocate:
Be friends with people who are awesome. Avoid people who suck.
Let me assure you, in the strongest terms, that “rationalists” are not the only people in the world who are awesome. I, for one, have a wonderful group of friends, who are “interested in rationality” in, perhaps, the most tangential way; and some, not at all. My friends are never “rude or condescending” to me; I can be, and am, as “rational” around them as I wish; and the idea that my close friends would not be ok with curiosity and skepticism is inconceivable.
It is even perfectly fine and well if you select your personal friends on the basis of some combination of intelligence, curiosity, skepticism, “rationality”, etc. But this is not at all the same thing as making a “rationality community” out of Less Wrong & co—not even close.
Finally:
For example, I want people I can be around and acknowledge that high rents are caused by housing shortages rather than by “tech-bros”. I want it to be safe to say that without being accused of mansplaining.
For God’s sake, get out of the Bay Area. Seriously. Things are not like this in the rest of the world.
Alright. Well, generalize my advice, then: leave the social circles where that sort of thing is at all commonplace. If that requires physically moving cities, do that. (For example, I live in New York City. I don’t recall ever hearing the term “tech-bro” used in a real conversation, except perhaps as an ironic mention… maybe not even that.)
The thing is—and here I disagree with your initial comment thread as well—peer pressure is useful. It is spectacularly useful and spectacularly powerful.
How can I make myself a more X person, for almost any value of X, even values that we would assume entirely inherent or immutable? Find a crowd of X people that are trying to be more X, shove myself in the middle, and stay there. If I want to be a better rationalist, I want friends that are better rationalists than me. If I want to be a better forecaster, I want friends that are better forecasters than me. If I want to be a more effective altruist, earn more to give more, learn more about Y academic topic, or any other similar goal, the single most powerful tool in my toolbox—or at least the most powerful tool that generalizes so easily—is to make more friends that already have those traits.
Can this go bad places? Of course it can. It’s a positive feedback cycle with no brakes save the ones we give it. But…
… well, to use very familiar logic: certainly, it could end the world. But if we could harness and align it, it could save the world, too.
(And ‘crowds of humans’, while kind of a pain to herd, are still much much easier than AI.)
Actually, no, I explicitly want both 1 and 2. Merely being more X than me doesn’t help me nearly as much as being both more X and also always on the lookout for ways to be even more X, because they can give me pointers and keep up with me when I catch up.
And sure, 3 is indeed what often happens.
… First of all, part of the whole point of all of this is to be able to do things that often fail, and succeed at them anyway; being able to do the difficult is something of a prerequisite to doing the impossible.
Secondly, all shounen quips aside, it’s actually not that hard to tell when someone is merely pretending to be more X. It’s easy enough that random faux-philosophical teenagers can do it, after all :V. The hard part isn’t staying away from the affective death spiral, it’s trying to find the people who are actually trying among them—the ones who, almost definitionally, are not talking nearly as much about it, because “slay the Buddha” is actually surprisingly general advice.
Actually, no, I explicitly want both 1 and 2. Merely being more X than me doesn’t help me nearly as much as being both more X and also always on the lookout for ways to be even more X, because they can give me pointers and keep up with me when I catch up.
What I meant by #2 is “a crowd of people who are trying to be more X, but who, currently, aren’t any more X than you (or indeed very X at all, in the grand scheme of things)”, not that they’re already very X but are trying to be even more X.
EDIT:
Secondly, all shounen quips aside, it’s actually not that hard to tell when someone is merely pretending to be more X.
Empirically, it seems rather hard, in fact.
Well, either that, or a whole lot of people seem to have some reason for pretending not to be able to tell…
What I meant by #2 is “a crowd of people who are trying to be more X, but who, currently, aren’t any more X than you (or indeed very X at all, in the grand scheme of things)”, not that they’re already very X but are trying to be even more X.
Fair. Nevertheless, if the average of the group is around my own level, that’s good enough for me if they’re also actively trying. (Pretty much by definition of the average, really...)
Empirically, it seems rather hard, in fact.
Well, either that, or a whole lot of people seem to have some reason for pretending not to be able to tell…
… Okay, sorry, two place function. I don’t seem to have much trouble distinguishing.
(And yes, you can reasonably ask how I know I’m right, and whether or not I myself am good enough at the relevant Xs to tell, etc. etc., but… well, at some point that all turns into wasted motion. Let’s just say that I am good enough at distinguishing to arrive at the extremely obvious answers, so I’m fairly confident I’ll at least not be easily misled.)
I think it’s possible (and important) to analyze this phenomenon and see what’s going on. But the point is that this will involve analyzing a phenomenon—ie truth-seeking, ie epistemic rationality, ie the thing we’re good at and which is our comparative advantage—and not winning immediately.
I mostly agree with this, but want to point at something that your comment didn’t really cover, that “whether to go to the homeopath or the doctor” is a question that I expect epistemic rationality to be helpful for. (This is, in large part, a question that Inadequate Equilibria was pointed towards.) [This is sort of the fundamental question of self-help, once you’ve separated it into “what advice should I follow?” and “what advice is out there?”]
But this requires that the question of how to evaluate strategies be framed more in terms of “I used my judgment to weigh evidence” and less in terms of “I followed the prestige” or “I compared the lengths of their articulated justifications” or similar things. A layperson in 1820 who is using the latter will wrongly pick the doctors, and a confused layperson in 2000 will wrongly pick homeopathy, but ideally a rationalist would switch from homeopathy to doctors as the actual facts on the ground change.
This doesn’t mean a rationalist in 1820 should be satisfied with homeopathy; it should be known to them as a temporary plug to a hole in their map. But that also doesn’t mean it’s the most interesting or important hole in their map; probably then they’d be most interested in what’s up with electricity. [Similarly, today I’m somewhat confused about what’s going on with diet, and have some ‘reasonable’ guesses and some ‘woo’ guesses, but it’s clearly not the most interesting hole in my map.]
And so my sense is a rationalist in 2018 should know what they know, and what they don’t, and be scientific about things to the degree that they capture their curiosity (which relates both to ‘irregularities in the map’ and ‘practically useful’). Which is basically how I read your comment, except that you seem more worried about particular things than I am.
I’m not sure you can short-circuit the “spend two thousand years flailing around and being terrible” step.
It sure seems like you should be able to do better than spending literally two thousand years. There are much better existing methodologies now than there were then.
Donald Trump continues to be more successful than every cognitive scientist and psychologist in the world combined, and this sort of thing seems to happen so consistently that I can no longer dismiss it as a fluke.
I would be very wary of using him as an example, because the public image of him is very much shaped by the media.
He did succeed at getting a degree from the Wharton Business School.
Peter Thiel, who has actually met him in person, considers him exceptional at understanding how the individual people he deals with tick.
There’s a lot in the word “woo”.
I’ve never even heard of IAFF! What is that?
Edit: oops, cousin_it beat me to it.
The “Intelligent Agent Foundations Forum” at https://agentfoundations.org/.
It was a platform MIRI built for discussing their research, that required an invite to post/comment. There’s lots of really interesting stuff there—I remember enjoying reading Jessica Taylor’s many posts summarising intuitions behind different research agendas.
It was a bit hard and confusing to use, and noticing that it seemed like we might be able to do better was one of the things that caused us to come up with building the AI Alignment Forum.
As the new forum is a space for discussion of all alignment research, and all of the old IAFF stuff is subset of that, we (with MIRI’s blessing) imported all the old content. At some point we’ll set all the old links to redirect to the AI Alignment Forum too.
agentfoundations.org—lots of good stuff there, but most of it gets very few responses. The recently launched alignmentforum.org is an attempt to do the same thing, but with crossposting to LW.
Very much disagree—but this is as someone not in the middle of the Bay area, where the main part of this is happening. Still, I don’t think rationality works without some community.
First, I don’t think the alternative communities people engage with are epistemically healthy enough to give people what they need to reinforce good norms for themselves.
Second, I don’t think that epistemic rationality is something a non-community can do a good job with, because if everyone is going it alone, there is far too little personal reinforcement and far too few positive vibes for people to stick with it.
Are you saying that epistemic rationality didn’t exist before the LW community, or that (for instance) academia is an adequate community?
Academia in general is certainly not an adequate community from an epistemic standards point of view, and while small pockets are relatively healthy, none are great. And yes, the various threads of epistemic rationality certainly predated LessWrong, and there were people and even small groups that noted the theoretical importance of pursuing it, but I don’t think there were places that actively advocated that members follow those epistemic standards.
To get back to the main point, I don’t think it is necessary for the community to “fulfill each other’s needs for companionship, friendship, etc.,” but I don’t think there is a good way to reinforce norms without something at least as strongly affiliated as a club. There is a fine line between a club and a community, and I understand why people feel there are dangers of going too far, but before LW, few groups seem to have gone nearly far enough in building even a project group with those norms.
By whose epistemic standards? And what’s the evidence for the claim?
Mine, and my experience working in academia. But (with the very unusual exceptions of FHI, GMU’s economics department, and possibly the new center at Georgetown) I don’t think you’d find much disagreement among LWers who interact with academics that academia sometimes fails to do even the obvious, level-one intelligent character things to enable them to achieve their goals.
I think your comment is unnecessarily hedged—do you think that you’d find much disagreement among LWers who interact with FHI/GMU-Econ over whether people there sometimes (vs never) fail to do level-one things?
I think I understand the connotation of your statement, but it’d be easier to understand if you strengthened “sometimes” to a stronger statement about academia’s inadequacy. Certainly the rationality community also sometimes fails to do even the obvious, level-one intelligent character things to enable them to achieve their goals—what is the actual claim that distinguishes the communities?
That’s a very good point, I was definitely unclear.
I think that the critical difference is that in epistemically healthy communities, when such a failure is pointed out, some effort is spent on identifying and fixing the problem, instead of pointedly ignoring it despite efforts to solve it, or spending time actively defending the inadequate status quo against even Pareto-improving changes.
Oh, I see: your complaint is about instrumental rationality. Well, naturally they’re bad at that. Most people are. You don’t get good at doing things by studying rationality in the abstract. EY couldn’t succeed in spending $300k of free money on producing software to his exact specifications.
I was thinking more of epistemic rationality, having given up on instrumental rationality.
I don’t think they get epistemic rationality anywhere near correct either. As a clear and simple example, there are academics currently vigorously defending their right not to pre-register empirical studies.
And even of those who do preregister, nobody puts down their credence for the likelihood that there’s an effect.
This is a fine desire to have. I share it.
And herein lies your problem.
You have, I surmise, encountered many terrible people in your life. This sucks. The solution to this is simple, and it’s one I have advocated in the past and continue to advocate:
Be friends with people who are awesome. Avoid people who suck.
Let me assure you, in the strongest terms, that “rationalists” are not the only people in the world who are awesome. I, for one, have a wonderful group of friends, who are “interested in rationality” in, perhaps, the most tangential way; and some, not at all. My friends are never “rude or condescending” to me; I can be, and am, as “rational” around them as I wish; and the idea that my close friends would not be ok with curiosity and skepticism is inconceivable.
It is even perfectly fine and well if you select your personal friends on the basis of some combination of intelligence, curiosity, skepticism, “rationality”, etc. But this is not at all the same thing as making a “rationality community” out of Less Wrong & co—not even close.
Finally:
For God’s sake, get out of the Bay Area. Seriously. Things are not like this in the rest of the world.
Alright. Well, generalize my advice, then: leave the social circles where that sort of thing is at all commonplace. If that requires physical moving cities, do that. (For example, I live in New York City. I don’t recall ever hearing the term “tech-bro” used in a real conversation, except perhaps as an ironic mention… maybe not even that.)
The thing is—and here I disagree with your initial comment thread as well—peer pressure is useful. It is spectacularly useful and spectacularly powerful.
How can I make myself a more X person, for almost any value of X, even values that we would assume entirely inherent or immutable? Find a crowd of X people that are trying to be more X, shove myself in the middle, and stay there. If I want to be a better rationalist, I want friends that are better rationalists than me. If I want to be a better forecaster, I want friends that are better forecasters than me. If I want to be a more effective altruist, earn more to give more, learn more about Y academic topic, or any other similar goal, the single most powerful tool in my toolbox—or at least the most powerful tool that generalizes so easily—is to make more friends that already have those traits.
Can this go bad places? Of course it can. It’s a positive feedback cycle with no brakes save the ones we give it. But…
… well, to use very familiar logic: certainly, it could end the world. But if we could harness and align it, it could save the world, too.
(And ‘crowds of humans’, while kind of a pain to herd, are still much much easier than AI.)
You’re equivocating between the following:
To become more X, find a crowd of people who are more X.
To become more X, find a crowd of people who are trying to be more X.
Perhaps #1 works. But what is actually happening is #2.
… or at least, that’s what we might charitably hope is happening. But actually instead what often happens is:
To become more X, find a crowd of people who are pretending to try to be more X.
And that definitely doesn’t work.
Actually, no, I explicitly want both 1 and 2. Merely being more X than me doesn’t help me nearly as much as being both more X and also always on the lookout for ways to be even more X, because they can give me pointers and keep up with me when I catch up.
And sure, 3 is indeed what often happens.
… First of all, part of the whole point of all of this is to be able to do things that often fail, and succeed at them anyway; being able to do the difficult is something of a prerequisite to doing the impossible.
Secondly, all shounen quips aside, it’s actually not that hard to tell when someone is merely pretending to be more X. It’s easy enough that random faux-philosophical teenagers can do it, after all :V. The hard part isn’t staying away from the affective death spiral, it’s trying to find the people who are actually trying among them—the ones who, almost definitionally, are not talking nearly as much about it, because “slay the Buddha” is actually surprisingly general advice.
What I meant by #2 is “a crowd of people who are trying to be more X, but who, currently, aren’t any more X than you (or indeed very X at all, in the grand scheme of things)”, not that they’re already very X but are trying to be even more X.
EDIT:
Empirically, it seems rather hard, in fact.
Well, either that, or a whole lot of people seem to have some reason for pretending not to be able to tell…
Right—they call it the “principle of charity.”
Fair. Nevertheless, if the average of the group is around my own level, that’s good enough for me if they’re also actively trying. (Pretty much by definition of the average, really...)
… Okay, sorry: two-place function. I don’t seem to have much trouble distinguishing.
(And yes, you can reasonably ask how I know I’m right, and whether or not I myself am good enough at the relevant Xs to tell, etc., but… well, at some point that all turns into wasted motion. Let’s just say that I am good enough at distinguishing to arrive at the extremely obvious answers, so I’m fairly confident I’ll at least not be easily misled.)
I mostly agree with this, but want to point at something that your comment didn’t really cover, that “whether to go to the homeopath or the doctor” is a question that I expect epistemic rationality to be helpful for. (This is, in large part, a question that Inadequate Equilibria was pointed towards.) [This is sort of the fundamental question of self-help, once you’ve separated it into “what advice should I follow?” and “what advice is out there?”]
But this requires that the question of how to evaluate strategies be framed more in terms of “I used my judgment to weigh evidence” and less in terms of “I followed the prestige” or “I compared the lengths of their articulated justifications” or similar things. A layperson in 1820 who is using the latter will wrongly pick the doctors, and a confused layperson in 2000 will wrongly pick homeopathy, but ideally a rationalist would switch from homeopathy to doctors as the actual facts on the ground change.
This doesn’t mean a rationalist in 1820 should be satisfied with homeopathy; it should be known to them as a temporary plug to a hole in their map. But that also doesn’t mean it’s the most interesting or important hole in their map; probably then they’d be most interested in what’s up with electricity. [Similarly, today I’m somewhat confused about what’s going on with diet, and have some ‘reasonable’ guesses and some ‘woo’ guesses, but it’s clearly not the most interesting hole in my map.]
And so my sense is a rationalist in 2018 should know what they know, and what they don’t, and be scientific about things to the degree that they capture their curiosity (which relates both to ‘irregularities in the map’ and ‘practically useful’). Which is basically how I read your comment, except that you seem more worried about particular things than I am.
It sure seems like you should be able to do better than spending literally two thousand years. There are much better existing methodologies now than there were then.
I would be very wary of using him as an example, because the public image of him is very much determined by the media.
He did succeed at getting a degree from the Wharton Business School.
Peter Thiel, who actually met him in person, considers him to be exceptional at understanding how the individual people he deals with tick.
How do you know this?
One of the interviews on YouTube. I unfortunately don’t have a link right now.
Get out of the armchair and try the woo before you dismiss it.
You are not smarter than running an experiment of your own.