I have not a clue whether this sort of marketing is a good idea. Let me be clear what I mean: I think there’s maybe a 30-40% chance that Gleb is having a net positive impact through these outreach efforts. I also think there’s maybe a 10-20% chance that he’s having a horrific long-term negative impact through these outreach efforts. Thus the whole thing makes me uncomfortable.
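To make that discomfort concrete, here is a minimal expected-value sketch in Python. The probabilities are the midpoints of the ranges stated above; the impact magnitudes are purely illustrative assumptions of mine, chosen only to show how an asymmetric downside can dominate.

```python
# Back-of-the-envelope expected-value sketch of the estimates above.
# Probabilities: midpoints of the stated ranges. Impact magnitudes:
# purely illustrative assumptions, in arbitrary units of long-term impact.
p_net_positive = 0.35   # midpoint of the stated 30-40%
p_horrific = 0.15       # midpoint of the stated 10-20%
p_neutral = 1.0 - p_net_positive - p_horrific

impact_positive = +1.0   # hypothetical upside
impact_horrific = -10.0  # "horrific" assumed much larger in magnitude
impact_neutral = 0.0

expected_impact = (p_net_positive * impact_positive
                   + p_horrific * impact_horrific
                   + p_neutral * impact_neutral)
print(f"expected impact: {expected_impact:+.2f}")
# -> expected impact: -1.15 (negative whenever the downside is more
#    than ~2.3x the upside in magnitude, under these probabilities)
```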
So here are some of the concerns I see; I’ve gone to some effort to be fair to Gleb, and not to assume anything about his thoughts or motivations:
By presenting these ideas in weakened forms (either by giving short or invalid argumentation, or putting it in venues or contexts with negative associations), he may be memetically immunizing people against the stronger forms of the ideas.
By teaching people using arguments from authority, he may be worsening the primary “sanity waterline” issues rather than improving them. The articles, materials, and comments I’ve seen make heavy use of language like “science-based”, “research-based” and “expert”. The people reading these articles in general have little or no skill at evaluating such claims, so that they effectively become arguments from authority. By rhetorically convincing them to adopt the techniques or thoughts, he’s spreading quite possibly helpful ideas, but reinforcing bad habits around accepting ideas.
Gleb’s writing style strikes me as very inauthentic-feeling. Let me be clear: I don’t mean to accuse him of anything negative, but I intuitively feel a very negative reaction to his writing. It triggers emotional signals in me of attempted deception and rhetorical tricks (whether or not this is his intent!). His writing risks associating “rationality” with such signals (should other people share my reactions), again causing immunization, or even catalyzing opposition.
An illustration of the nightmare scenario from such an outreach effort would be that, 3 years from now, when I attempt to talk to someone about biases, they respond by saying “Oh god, don’t give me that ‘6 weird tips’ bullshit about ‘rational thinking’, and spare me your godawful rhetoric, gtfo.”
Like I said at the start, I don’t know which way it swings, but those are my thoughts and concerns. I imagine they’re not new concerns to Gleb. I still have these concerns after reading all of the mitigating argumentation he has offered so far, and I’m not sure of a good way to collect evidence about this besides running absurdly large long-term “consumer” studies.
I do imagine he plans to continue his efforts, and thus we’ll find out eventually how this turns out.
I really appreciate you sharing your concerns. It helps me and others involved in the project learn what to avoid going forward and how to optimize our methods. Thank you for laying them out so clearly! I think this comment will be something I come back to in the future as I and others create content.
I want to see if I can address some of the concerns you expressed.
In my writing for venues like Lifehack, I do not speak of rationality explicitly as something we are promoting. As in this post, I talk about growing mentally stronger or being intentional—euphemisms that do not associate rationality as such with what we’re doing. I only incidentally mention rationality, such as when I speak of Rationality Dojo as a noun. I also generally do not talk of cognitive biases, and use other euphemistic language, such as referring to thinking errors, as in this article for Salon. So this gets at the point of watering down rationality.
I would question the point about arguing from authority. One of the goals of Intentional Insights is to convey what “science-based” itself means. For example, in this article, I specifically discuss research studies as a key way of validating truth claims. Recall that we are all suffering from the curse of knowledge on this point. How can we expect to teach people who do not know what “science-based” means without first teaching it to them? Do you remember when you were at a stage where you did not know the value of scientific studies, and then came to learn about them as a useful way of validating evidence? This is what I’m doing in the article above. Hope this helps address some of the concerns about arguing from authority.
I hear you about the inauthentic-feeling writing style. As I told Lumifer in my comment below, I cringed at that when I was learning how to write that way, too. You can’t believe how weird that feels to an academic. My Elephant kicks and screams and tries to throw off my Rider whenever I do that. It’s very ughy. This writing style is much more natural for me. So is this.
However, this inauthentic-feeling writing style is the style needed to get into Lifehack. I have been trying to change my writing style to get into venues like that for the last year and a half, and only in the last couple of months did I succeed in changing it sufficiently to be published in Lifehack. Unfortunately, when trying to spread good ideas to the kind of people who read Lifehack, it’s necessary to use the language, genre, and format that they want to read, and that the editors publish. Believe me, I also had my struggles with editors there, who cut out more complex points and links to any scientific papers as too complex for their audience.
This gets at the broader point of who reads these articles. I want to quote a comment that Tem42 made in response to Lumifer:
Unless you mean simply the site that it is posted on smells of snake oil. In that case I agree, but at the same time, so what? The people that read articles on that site don’t smell snake oil, whether they should or not. If the site provides its own filter for its audience, that only makes it easier for us to present more highly targeted cognitive altruism.
Indeed, the site itself provides a filter. The people who read that site are not like you and me. Don’t fall for the typical mind fallacy here. They have complete cognitive ease with this content. They like to read it. They like to share it. This is the stuff they go for. My articles are meant to go higher than their average, such as this or this, conveying both research-based tactics applicable to daily life and frameworks of thinking conducive to moving toward rationality (without using the word, as I mentioned above). Hope this helps address the concerns about the writing style and the immunization of people to good ideas, since the readers of this content are specifically looking for this kind of writing style.
Does this cause any updating in decreasing the likelihood of nightmare scenarios like the one you described?
One idea is to try to teach your audience about overconfidence first, e.g. the way this game does with the calibration questions up front. See also.
Nice idea! Thanks for the suggestion. Maybe also a Caplan Test.
I’ll second the suggestion of introducing people to overconfidence early on, because (hopefully) it leads to a more questioning mindset.
I would note that the otherwise-awesome Adventures in Cognitive Biases’ calibration is heavily geared toward a particular geographic demographic, and that several of the peers I’ve introduced it to were a little put off by it, so consider encouraging them to stick through the calibration into the meatier subject matter of the Adventure itself.
Thanks!
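As a sketch of what calibration-first teaching looks like, here is a minimal Python illustration of the kind of feedback calibration questions give; the quiz data below are hypothetical placeholders, not results from the game mentioned above.

```python
# Minimal sketch of calibration feedback, with placeholder data.
# Each entry pairs a stated confidence with whether the answer was right.
answers = [
    (0.9, True), (0.9, False), (0.9, True), (0.9, False), (0.9, True),
    (0.6, True), (0.6, False), (0.6, True), (0.6, True), (0.6, False),
]

# Group answers by stated confidence and compare against the hit rate:
# someone "90% sure" who is right only 60% of the time is overconfident.
by_confidence = {}
for confidence, correct in answers:
    by_confidence.setdefault(confidence, []).append(correct)

for confidence, results in sorted(by_confidence.items()):
    hit_rate = sum(results) / len(results)
    verdict = "overconfident" if hit_rate < confidence else "calibrated"
    print(f"stated {confidence:.0%} -> actual {hit_rate:.0%} ({verdict})")
```

Seeing the gap between stated confidence and actual hit rate up front is what (hopefully) primes the more questioning mindset mentioned above.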
Does this cause any updating in decreasing the likelihood of nightmare scenarios like the one you described?
Effectively no. I understand that you’re aware of these risks and are able to list mitigating arguments, but the weight of those arguments does not resolve my worries. The things you’ve just said aren’t different in gestalt from what I’ve read from you.
To be potentially more helpful, here are a few ways the arguments you just made fall flat for me:
I only incidentally mention rationality, such as when I speak of Rationality Dojo as a noun. I also generally do not talk of cognitive biases, and use other euphemistic language, such as referring to thinking errors, as in this article for Salon. So this gets at the point of watering down rationality.
Connectivity to the rationalist movement or “rationality” keyword isn’t necessary to immunize people against the ideas. You’re right that if you literally never use the word “bias” then it’s unlikely my nightmare imaginary conversational partner will have a strong triggered response against the word “bias”, but if they respond the same way to the phrase “thinking errors” or realize at some point that’s the concept I’m talking about, it’s the same pitfall. And in terms of catalyzing opposition, there is enough connectivity for motivated antagonists to make such connections and use every deviation from perfection as ammunition against even fully correct forms of good ideas.
For example, in this article, I specifically discuss research studies as a key way of validating truth claims. Recall that we are all suffering from the curse of knowledge on this point. How can we expect to teach people who do not know what “science-based” means without first teaching it to them? Do you remember when you were at a stage where you did not know the value of scientific studies, and then came to learn about them as a useful way of validating evidence? This is what I’m doing in the article above. Hope this helps address some of the concerns about arguing from authority.
I can’t find any discussion in the linked article about why research is a key way of validating truth claims; did you link the correct article? I also don’t know if I understand what you’re trying to say; to reflect back, are you saying something like “People first need to be convinced that scientific studies are of value, before we can teach them why scientific studies are of value”? I … don’t know about that, but I won’t critique that position here, since I may not be understanding.
(...) Hope this helps address the concerns about the writing style and the immunization of people to good ideas, since the readers of this content are specifically looking for this kind of writing style.
You seem to be saying that since the writing is of the form needed to get on Lifehack, and since in fact people are reading it on Lifehack, that they will then not suffer from any memetic immunization via the ideas. First, not all immunization is via negative reactions; many people think science is great, but have no idea how to do science. Such people can be in a sense immunized from learning to understand the process; their curiosity is already sated, and their decisions made. Second, as someone mentioned somewhere else on this comment stream, it’s not obvious that the Lifehack readers who end up looking at your article will end up liking or agreeing with your article.
You’re clearly getting some engagement, which is suggestive of positive responses, but what if the distribution of responses is bimodal, with some readers liking it a little and some absolutely loathing it to the point of sharing their disgust with friends? Google searches reveal negative reactions to your materials as well. The net impact is not obviously positive.
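To make the bimodal worry concrete, here is a minimal Python simulation with made-up numbers; it shows how engagement counts and even average sentiment can look healthy while a sizable tail of readers loathes the material.

```python
import random

random.seed(0)

# Hypothetical reader responses: +1 = mildly likes, -2 = loathes it and
# shares their disgust. The 85/15 mix is invented for illustration.
responses = random.choices([1, -2], weights=[0.85, 0.15], k=10_000)

# Engagement metrics count actions, not valence: a disgusted share
# registers the same as an enthusiastic one.
engagement = sum(1 for r in responses if r != 0)
mean_sentiment = sum(responses) / len(responses)
loathing_share = responses.count(-2) / len(responses)

print(f"engagement events: {engagement}")          # looks uniformly healthy
print(f"mean sentiment:    {mean_sentiment:+.2f}") # still positive (~+0.55)
print(f"loathing tail:     {loathing_share:.0%}")  # the hidden downside (~15%)
```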
use every deviation from perfection as ammunition against even fully correct forms of good ideas.
As a professional educator and communicator, I have deep visceral experience with how “fully correct forms of good ideas” are inherently incompatible with bridging the inferential distance between the ordinary Lifehack reader and the kind of thinking space found on Less Wrong. Believe me, I have tried many times to explain more complex ideas from rationality to students. Moreover, I have tried many times to get more complex articles into Lifehack and elsewhere. They have all been rejected.
This is why it’s not possible for the lay audience to read scientific papers, or even the Sequences. This is why we have to digest the material for them, and present it in sugar-coated pills.
To be clear, I am not speaking of talking down to audiences. I like sugar-coated pills myself when I take medicine. To use an example related to knowledge, when I am offered information on a new subject, I first have to be motivated to want to engage with the topic, then learn the basic broad generalities, and only then go on to learn more complex things that represent the “fully correct forms of good ideas.”
This is the way education works in general. This is especially the case for audiences who are not trapped in the classroom like my college students. They have to be motivated to invest their valuable time into learning about a new topic. They have to really feel it’s worth their time and energy.
This is why the material has to be presented in an entertaining and engaging way, while also containing positive memes. Listicles are simply the most entertaining and engaging format that also deals with the inferential gap. The listicles offer breadcrumbs, in the form of links, for more interested readers to follow to get to the more complex things and develop their knowledge over time, slowly bridging that inferential gap. More on how we do this in my comment here.
I can’t find any discussion in the linked article about why research is a key way of validating truth claims
The article doesn’t discuss why research is a key way of validating truth claims. Instead of telling, it shows that research is a key way of validating truth claims. Here is a section from the article:
Smiling and other mood-lifting activities help improve willpower. In a recent study, scientists first drained the willpower of participants through having them resist temptation. Then, for one group, they took steps to lift people’s moods, such as giving them unexpected gifts or showing them a funny video. For another group, they just let them rest. Compared to people who just rested for a brief period, those whose moods were improved did significantly better in resisting temptation later! So next time you need to resist temptation, improve your mood!
This discussion of a study as validating the truth claim “improving mood = higher willpower” demonstrates—not tells, but shows—the value of scientific studies as a way to validate truth claims. This is the first point in the article. In the rest of the article, I link to studies, or to articles linking to studies, without going over each study, since I have already discussed one study and demonstrated to Lifehack readers that studies are a powerful form of evidence for determining truth claims.
Now, I hear you when you say that while some people may benefit from trying to think more like scientists and considering how to study the world in order to validate claims, others will simply be content to rely on science as a source of truth. While I certainly prefer the former, I’ll take the latter as well. How many global warming or evolution deniers are there, including among Lifehack readers? How many refuse to follow science-informed advice on not smoking and other matters? In general, if the lesson they learn is to follow the advice of scientists, instead of religious preachers or ideological politicians from any party, that will be a better outcome for the world, I would say.
what if the distribution of responses is bimodal, with some readers liking it a little and some absolutely loathing it to the point of sharing their disgust with friends
I have an easy solution for that one. Lifehack editors carefully monitor the sentiment of social-media reactions to their articles, and if there are negative reactions, they let writers know. They did not let me know of any significant negative reactions to my article above the baseline, which is an indication that the article has been highly positively received by their audience and by those they share it with.
I think I presented plenty of information in my two long comments in response to your concerns. So what is your probability of the worst-case scenario of horrific long-term impact now? Still at 20%? Is your impression of the chance that my activities are net positive still at 30%? If so, what information would it take to shift your thinking?
EDIT: added link to my other comment
EDIT: On reflection, I want to tap out of this conversation. Thanks for the responses.
I would argue that your first and third points are not very strong.
I think that it is not useful to protect an idea so that it is only presented in its ‘cool’ form. A lot of harm is done by people presenting good ideas badly, and we don’t want to do any active harm, but at the same time, the more ways and the more times an idea is adequately expressed, the more likely that idea is to be remembered and understood.
People who are not used to thinking in strict terms are more likely to be receptive to intuition pumps and frequent reminders of the framework (evidence-based everything). Getting people into the right mindset is half the battle.
I do, however, agree with your second point, strongly. It is very hard to get people to actually care about evidence, and most people would not click through to formal studies; even fewer would read them. Those who would read them are probably motivated enough to Google for information themselves. But actually checking the evidence is so central to rationality that we should always remind new potential rationalists that claims are based on strong research. If clickbait sites are prone to edit out that sort of reference, we should link to articles that are more reader-friendly but do cite (and if possible, link to) supporting studies. This sort of link is triple-plus good: it means that the reader can see the idea in another writer’s words; it introduces them to a new, less clickbaity site that is likely to be good for future reading; and, of course, it gives access to sources.
I think one function that future articles of this sort should treat as a central goal is subtly introducing readers to more and better sites for more and better reading. However, the primary goal should remain an intro-level introduction to useful concepts, and intro-level means, unfortunately, presenting these ideas in weakened forms.
Agreed on presenting them via intro-level means, so that there is less of an inferential gap.
Good idea on subtly introducing readers to more and better sites for further reading; I’m updating on this to do so more often in my articles. Thanks!
This comment captures my intuitions well. Thanks for writing this. It’s weird for me, because when I wear my effective altruism hat, I think what Gleb is doing is great: marketing effective altruism seems like it would only drive more donations to effective charities, while not depriving them of money or hurting their reputations if people become indifferent to the Intentional Insights project. This seems to be the consensus reaction to Gleb’s work on the Effective Altruism Forum. Of course, effective altruism is sometimes concerned only with the object-level impact that’s easy to measure, e.g., donations, rather than subtler effects down the pipe, like cumulatively changing how people think over the course of multiple years. Whether that’s a good or ill effect is a judgment I’ll leave to you.
On the other hand, when I put on my rationality community hat, I feel the same way about Gleb’s work as you do. It’s uncomfortable for me because I realize I have perhaps contradicting motivations in assessing Intentional Insights.
An important way I think about my work in the rationality sphere is as cognitive altruism.
In a way, it’s no different from effective altruism. When promoting effective giving, I encourage people to think rationally about their giving. I pose to them the question of how (and whether) they currently think about their goals in giving, the impact of their giving, and the quality of the charities to which they give, encouraging them to use research-based evaluations from GiveWell, TLYCS, etc. The result is that they give to effective charities.
Similarly, in my promotion of rationality, I encourage people to think rationally about their lives and goals. The result is that they make better decisions about their lives and are more capable of meeting their goals, including being more long-term oriented and thus fighting the Moloch problem. For example, here is what one person got out of my book on finding meaning and purpose by orienting toward one’s long-term goals. He is now dedicated to focusing his life on helping other people have a good life, in effect orienting toward altruism.
In both cases, I take the rational approach of using content-marketing methods that have been shown to work effectively for spreading complex information to broad audiences. It’s no different in principle.
I’m curious whether this information helps you update one way or another in your assessment of Intentional Insights.
By teaching people using arguments from authority, he may be worsening the primary “sanity waterline” issues rather than improving them. The articles, materials, and comments I’ve seen make heavy use of language like “science-based”, “research-based” and “expert”. The people reading these articles in general have little or no skill at evaluating such claims, so that they effectively become arguments from authority. By rhetorically convincing them to adopt the techniques or thoughts, he’s spreading quite possibly helpful ideas, but reinforcing bad habits around accepting ideas.
My immediate reaction was to disagree. I think most people don’t listen to arguments from authority often enough, not too often. So I decided to search “arguments from authority” on LessWrong, and the first thing I came to was this article by Anna Salamon:
Another candidate practice is the practice of only passing on ideas one has oneself verified from empirical evidence (as in the ethic of traditional rationality, where arguments from authority are banned, and one attains virtue by checking everything for oneself). This practice sounds plausibly useful against group failure modes where bad ideas are kept in play, and passed on, in large part because so many others believe the idea (e.g. religious beliefs, or the persistence of Aristotelian physics in medieval scholasticism; this is the motivation for the scholarly norm of citing primary literature such as historical documents or original published experiments). But limiting individuals’ sharing to the (tiny) set of beliefs they can themselves check sounds extremely costly.
She then suggests separating out knowledge you have personally verified from knowledge accepted on authority, to avoid groupthink, but this doesn’t seem to me to be a viable method for the majority of people. I’m not sure it matters whether non-experts engage in groupthink if they’re following the views of experts who don’t.
Skimming the comments, I find that the response to AnnaSalamon’s article was very positive, but the response to your opposite argument here also seems to be very positive. In particular, AnnaSalamon argues that the share of knowledge most people can or should personally verify is tiny relative to what they should learn. I agree with her view. While I recognize that the people responding to AnnaSalamon’s comments are not the same as the ones responding to yours, I fear this may be a case of many members of LessWrong judging arguments on presentation or circumstance rather than on their individual merits.