I am not sure of the point here. I read it as “I can imagine a perfect world and LW is not it”. Well, duh.
There are also a lot of words (like “wrong”) that the OP knows the meaning of, but I do not. For example, I have no idea what the “wrong opinions” are that rational discussions apparently have a tendency to support. Or what is that “high relevancy” of missing articles—relevancy to whom?
And, um, do you believe that your postings will be free from that laundry list of misfeatures you catalogued?
The article mixes together examples of imperfection of the world with specific problems of LW.
God, grant me the serenity to accept the things I cannot change,
The courage to change the things I can,
And the wisdom to know the difference.
We could try to be more specific about things related to us, and how we could solve them. For example:
Cost-benefit analysis
Applies to our celebrities, too. As a reader, I would love to read a new Sequence written by Eliezer, or other impressive people in our community. However, it may not be the best use of their time. Writing an article can take a lot of time and energy.
Gwern’s comment suggests the solution: have someone else write the article. A person who is sufficiently rational and good at writing, and lives in the Bay Area, could take the role of a “rationalist community journalist”. I imagine their work would consist of spending time with important rationalists, taking notes, writing the articles, having them reviewed by the relevant people, and publishing them on LW.
WEIRD
Relevant article: “Black People Less Likely”. Also, as far as I know, the traditional advice for groups with overwhelmingly western educated rich white male membership is to put a clique of western educated rich white feminists in the positions of power. Which creates its own problems, namely that the people with newly gained power usually give zero fucks about the group’s survival or its original mission, and focus on spreading their political memes.
This said, I support the goal of bringing more people from different backgrounds to the rationalist community (as long as the people are rational, of course). I object to the traditional methods of doing it, because those methods often fail to reach the goal.
I suspect that the fact that LessWrong is an online community using the English language already contributes heavily to readers being more likely western, educated, and rich (you need to be good at English, have a lot of free time, and have a good internet connection). Whiteness correlates with being western and rich. There is a gender imbalance in STEM in the general society outside LessWrong. -- All these filters apply before any content is written. Which of course doesn’t mean the content couldn’t add another filter in the same direction.
Here are some quick ideas that could help: Create rationality materials in paper form (for people who can’t afford to spend hundreds of hours online). Translate those materials into other languages (for people not fluent in English). Maybe create materials for different audiences; e.g. a reduced version for people without a good mathematical education.
Funny thing, age wasn’t mentioned in the original list of complaints, and I believe it can play an important role. Specifically the fact that many rationalists have spent their whole lives at school. -- For example, it’s ridiculous to see how many people self-identify as “effective altruists” while saying “okay, I’m still at school without an income, so I actually haven’t sent a penny yet, but when I grow up and get a job I totally will give as much as possible”. Nice story, bro! Maybe I could tell you how I imagined my future while I was at school; then we can laugh together. So far, you are merely a fan of effective altruism. When you start spending the rest of your life making money only to donate it to someone poorer than you, then you become an effective altruist. If you can keep doing it while feeding your children and paying for the roof over your heads, then you are hardcore. Right now, you just dilute the meaning of the word.
We already have a reddit-style forum. Why would you want to go back to the old model where a few people (journalists) only provide content and the dumb masses only consume it?
traditional advice for groups with overwhelmingly western educated rich white male membership is to put a clique of western educated rich white feminists in the positions of power
...traditional?? We, um, come from different traditions, I guess X-/ I know that’s what feminists want, but that doesn’t make it a tradition.
So far, you are merely a fan of effective altruism.
Yep, true. Though, to be fair, EA isn’t about how much you give, it’s about what you give to.
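To make that concrete, here is a toy comparison; the cost-per-outcome numbers are made up for illustration, not real charity data. A smaller donation to a more effective charity can beat a larger donation to a less effective one:

```python
# Illustrative only: hypothetical cost-effectiveness figures, not real charity data.
donations = {
    "charity_a": {"donation": 500, "cost_per_life_saved": 50_000},  # less effective
    "charity_b": {"donation": 50,  "cost_per_life_saved": 3_000},   # more effective
}

for name, d in donations.items():
    impact = d["donation"] / d["cost_per_life_saved"]
    print(f"{name}: ${d['donation']} donated -> ~{impact:.4f} lives-saved equivalent")

# charity_a: $500 -> ~0.0100 ; charity_b: $50 -> ~0.0167
# The $50 donation does more good, which is the "what you give to" point.
```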
I am not sure of the point here. I read it as “I can imagine a perfect world and LW is not it”. Well, duh.
No. I think all the points indicate that a perfect world is difficult to achieve, as rationalist forums are in part self-defeating (maybe not impossible though; most also would not have expected Wikipedia to work out as well as it does). At the moment, Less Wrong may be the worst form of forum, except for all the others. My point in other words: I was fascinated by LW and thought it possible to make great leaps towards some form of truth. I now consider that unwarranted exuberance. I met a few people whom I highly respect and whom I consider aspiring rationalists. They were not interested in forums, congresses, etc. I now suspect that many of our fellow rationalists are, and have an advantage in being, something of lone wolves, and the ones we see are curious exceptions.
There are also a lot of words (like “wrong”) that the OP knows the meaning of, but I do not. For example, I have no idea what the “wrong opinions” are that rational discussions apparently have a tendency to support. Or what is that “high relevancy” of missing articles—relevancy to whom?
High relevancy to the reader who is an aspiring rationalist. The discussions of AI mostly end where they become interesting. Assuming that AI is an existential risk, shall we enforce a police state? Shall we invest in surveillance? Some may even suggest seeking a Terminator-like solution, trying to stop scientific research (which I did not say is feasible). Those are the kinds of questions that inevitably come up, and I have seen them discussed nowhere but in the last chapter of Superintelligence, in like 3 sentences, and somewhat in SSC’s Moloch (maybe you can find more sources, but it’s surely not mainstream). In summary: if Musk’s $10M constitutes a significant share of humanity’s effort to reduce the risk of AI, some may view that as evidence of progress and some as evidence of the necessity of other, and maybe more radical, approaches. The same goes for EA: if you truly think there is an Animal Holocaust (which Singer does), the answer may not be donating $50 to some animal charity.
Wrong opinions: If, as just argued, not all the relevant evidence and conclusions are discussed, it follows that opinions are more likely to be less than perfect. There are some examples in the article.
And, um, do you believe that your postings will be free from that laundry list of misfeatures you catalogued?
No. Nash probably wouldn’t cooperate, even though he understood game theory, and I wouldn’t blame him. I may simply stop posting (which sounds like a cop-out or a threat, but I just see it as one logical conclusion).
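For the game-theory aside, here is a minimal prisoner’s dilemma sketch (the payoffs are the standard textbook ones, chosen only for illustration). It shows why understanding game theory and choosing to cooperate can come apart: mutual defection is the only Nash equilibrium, even though mutual cooperation pays both players more.

```python
# Standard prisoner's dilemma payoffs (row player, column player); illustrative numbers.
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}
actions = ["C", "D"]

def best_responses(opponent_action, player):
    """Actions maximizing this player's payoff against a fixed opponent action."""
    def payoff(a):
        profile = (a, opponent_action) if player == 0 else (opponent_action, a)
        return payoffs[profile][player]
    best = max(payoff(a) for a in actions)
    return [a for a in actions if payoff(a) == best]

# A profile is a Nash equilibrium if each action is a best response to the other.
equilibria = [
    (a0, a1)
    for a0 in actions
    for a1 in actions
    if a0 in best_responses(a1, 0) and a1 in best_responses(a0, 1)
]
print(equilibria)  # [('D', 'D')] -- defection, even though (C, C) pays (3, 3) > (1, 1)
```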
a perfect world is difficult to achieve … most also would not have expected Wikipedia to work out as well as it does
A perfect world is, of course, impossible to achieve (not to mention that what’s perfect to you is probably not so for other people) and as to Wikipedia, there are longer lists than yours of its shortcomings and problems. Is it highly useful? Of course. Will it ever get close to perfect? Of course not.
I was fascinated by LW and thought it possible to make great leaps towards some form of truth. I now consider that unwarranted exuberance.
Sure. But this is an observation about your mind, not about LW.
High relevancy to the reader who is an aspiring rationalist.
“Aspiring rationalist” is a content-free expression. It tells me nothing about what you consider “wrong” or “relevant”.
The discussions of AI mostly end where they become interesting.
Heed the typical mind fallacy. Other people are not you. What you find interesting is not necessarily what others find interesting. Your dilemmas or existential issues are not their dilemmas or existential issues.
For example, I don’t find the question of “shall we enforce a police state” interesting. The answer is “No”, case closed, we’re done. Notice that I’m speaking about myself—you, being a different person, might well be highly interested in extended discussion of the topic.
if you truly think there is an Animal Holocaust (which Singer does), the answer may not be donating $50 to some animal charity.
Yeah, sure, you go join an Animal Liberation Front of some sort, but what’s particularly interesting or rational about it? It’s a straightforward consequence of the values you hold.
Heed the typical mind fallacy. Other people are not you. What you find interesting is not necessarily what others find interesting. Your dilemmas or existential issues are not their dilemmas or existential issues.
For example, I don’t find the question of “shall we enforce a police state” interesting. The answer is “No”, case closed, we’re done. Notice that I’m speaking about myself—you, being a different person, might well be highly interested in extended discussion of the topic.
I strongly disagree and think it is unrelated to the typical mind fallacy. OK, the word “interesting” was too imprecise. However, the argument deserves a deeper look in my opinion. Let me rephrase to: “Discussions of AI sometimes end where they have serious implications for real life.” Especially if you do not enjoy entertaining the thought of a police state and increased surveillance, you should be worried if respected rational essayists come to conclusions that include them as an option. Closing your case when confronted with possible results from a chain of argumentation won’t make them disappear. And a police state, to stay with the example, is either an issue for almost everybody (if it comes into existence) or for nobody. Hence, this is detached from, and not about, my personal values.
Let me rephrase to: “Discussions of AI sometimes end where they have serious implications for real life.”
I agree, that would be a bad thing.
Closing your case when confronted with possible results from a chain of argumentation won’t make them disappear.
Of course not, but given my values and my estimates of how likely certain future scenarios are, I already came to certain conclusions. For them to change, either the values or the probabilities have to change. I find it unlikely that my values will change as the result of eschatological discussions on the ’net, and the discussions about the probabilities of Skynet FOOMing can be had (and probably should be had) without throwing the police state into the mix.
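To spell that decomposition out, a toy expected-utility sketch (every probability and utility below is a placeholder, not my actual numbers): the ranking of options is fully determined by the scenario probabilities and the utilities over outcomes, so the conclusion moves only if one of those inputs moves.

```python
# Toy expected-utility comparison; all numbers are placeholders for illustration.
scenarios = {"ai_goes_badly": 0.10, "ai_goes_fine": 0.90}  # subjective probabilities

# Utility of each option under each scenario (made-up values).
utilities = {
    "push_for_police_state": {"ai_goes_badly": -50, "ai_goes_fine": -100},
    "fund_safety_research":  {"ai_goes_badly": -80, "ai_goes_fine":   10},
}

def expected_utility(option):
    return sum(p * utilities[option][s] for s, p in scenarios.items())

for option in utilities:
    print(option, expected_utility(option))
# Output ranks the options; the ranking changes only if the probabilities
# or the utilities (i.e. the values) change.
```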
In general, I don’t find talking about very specific scenarios in the presence of large Knightian uncertainty to be terribly useful.
None of us could “enforce a police state”. It’s barely possible even in principle, since it would need to include all industrialized nations (at a minimum) to have much payoff against AGI risk in particular. Worrying about “respected rational essayists” endorsing this plan also seems foolish.
“Surveillance” has similar problems, and your next sentence sounds like something we banned from the site for a reason. You do not seem competent for crime.
I’m trying to be charitable about your post as a whole to avoid anti-disjunction bias. While it’s common to reject conclusions if weak arguments are added in support of them, this isn’t actually fair. But I see nothing to justify your summary.
Everything Lumifer said, plus this: all this marketing/anti-marketing drama seems to be predicated upon the notion that there exists a perfect rational world / community / person. No such thing though: LW itself shows that even a rationalist attire is better than witch hunting (the presupposition of course is that LWers have rationality as their tribe flag and are not especially more rational than average people).
I do not think that there exists a perfect rational world. My next article will emphasize that. I do think that there is a rational attire which is on average more consistent than the average one presented on LW, and one should strive for it. I did not get the point of your presupposition, though it seems obvious to you: LWers are not more rational?