My wife and I are monthly donors, and here’s to CFAR having a great 2015! I’d also love to talk about potential collaborations between CFAR and Intentional Insights as we get our own infrastructure and internal operations set up well in the next month or two.
Great progress, and I just donated! As a nonprofit director myself, I am especially happy to see your progress on systematization going forward. That’s what will help pave the path to long-term success. Great job!
I really appreciate you sharing your concerns. It helps me and others involved in the project learn more about what to avoid going forward and how to optimize our methods. Thank you for laying them out so clearly! I think this comment will be something that I will come back to in the future as I and others create content.
I want to see if I can address some of the concerns you expressed.
In my writing for venues like Lifehack, I do not speak of rationality explicitly as something we are promoting. As in this post, I talk about growing mentally stronger or being intentional—euphemisms that do not associate rationality as such with what we’re doing. I only incidentally mention rationality, such as when I speak of Rationality Dojo as a noun. I also generally do not talk of cognitive biases, and use other euphemistic language, such as referring to thinking errors, as in this article for Salon. So this gets at the point of watering down rationality.
I would question the point about arguing from authority. One of the goals of Intentional Insights is to convey what "science-based" itself means. For example, in this article, I specifically discuss research studies as a key way of validating truth claims. Recall that we are all suffering from the curse of knowledge on this point. How can we expect to teach people who do not know what science-based means without teaching it to them in the first place? Do you remember when you were at a stage when you did not know the value of scientific studies, and then came to learn about them as a useful way of validating evidence? This is what I'm doing in the article above. Hope this helps address some of the concerns about arguing from authority.
I hear you about the inauthentic feeling writing style. As I told Lumifer in my comment below, I cringed at that when I was learning how to write that way, too. You can’t believe how weird that feels to an academic. My Elephant kicks and screams and tries to throw off my Rider whenever I do that. It’s very ughy. This writing style is much more natural for me. So is this.
However, this inauthentic-feeling writing style is the writing style needed to get into Lifehack. I have been trying to change my writing style to get into venues like that for the last year and a half, and only succeeded in changing my writing style in the last couple of months sufficiently to be published in Lifehack. Unfortunately, when trying to spread good ideas to the kind of people who read Lifehack, it’s necessary to use the language and genre and format that they want to read, and that the editors publish. Believe me, I also had my struggles with editors there who cut out more complex points and links to any scientific papers as too complex for their audience.
This gets at the broader point of who reads these articles. I want to quote a comment that Tem42 made in response to Lumifer:
Unless you mean simply the site that it is posted on smells of snake oil. In that case I agree, but at the same time, so what? The people that read articles on that site don’t smell snake oil, whether they should or not. If the site provides its own filter for its audience, that only makes it easier for us to present more highly targeted cognitive altruism.
Indeed, the site itself provides a filter. The people who read that site are not like you and me. Don’t fall for the typical mind fallacy here. They have complete cognitive ease with this content. They like to read it. They like to share it. This is the stuff they go for. My articles are meant to go higher than their average, such as this or this, conveying both research-based tactics applicable to daily life and frameworks of thinking conducive to moving toward rationality (without using the word, as I mentioned above). Hope this helps address the concerns about the writing style and the immunization of people to good ideas, since the readers of this content are specifically looking for this kind of writing style.
Does this cause any updating in decreasing the likelihood of nightmare scenarios like the one you described?
Thank you for bringing this up as a topic of discussion! I’m really interested to see what the Less Wrong community has to say about this.
Let me be clear that my goal, and that of Intentional Insights as a whole, is about raising the sanity waterline. We do not assume that all who engage with our content will get to the level of being aspiring rationalists who can participate actively on Less Wrong. This is not to say that it doesn't happen; in fact, some members of our audience have already started to do so, such as Ella. Others are right now reading the Sequences and passively lurking without actively engaging.
I want to add a bit more about the Intentional Insights approach to raising the sanity waterline broadly.
The social media channel of raising the sanity waterline is only one area of our work. The goal of that channel is to use the strategies of online marketing and the language of self-improvement to get rationality spread broadly through engaging articles. To be concrete and specific, here is an example of one such article: “6 Science-Based Hacks for Growing Mentally Stronger.” BTW, editors are usually the ones who write the headline, so I can’t “take the credit” for the click-baity nature of the title in most cases.
Another area of work is publishing op-eds in prominent venues that address recent political matters in a politically oriented manner. For example, here is an article of this type: “Get Donald Trump out of my brain: The neuroscience that explains why he’s running away with the GOP.”
Another area of work is collaborating with other organizations, especially secular ones, to get our content to their audience. For example, here is a workshop we did on helping secular people find purpose using science.
We also give interviews to prominent venues on rationality-informed topics: 1, 2.
Our model works as follows: once people check out our content on other websites and venues, some will then visit the Intentional Insights website to engage with its content. As an example, after the article on 6 Science-Based Hacks for Growing Mentally Stronger appeared, it was shared over 2K times on social media, so it probably had views in the tens of thousands if not hundreds of thousands. Then, over 1K people visited the Intentional Insights website directly from the Lifehack website. In other words, they were interested enough not only to skim the article, but also to follow the links to Intentional Insights, which was listed in my bio. Of those, some will want to engage with our content further. As an example, we had a large wave of new people follow us on Facebook and other social media and subscribe to our newsletter in the week after the article came out. I can’t say how many did so as a result of seeing the article versus other factors, but there was a large bump. So there is evidence of people wanting to get more thoroughly engaged.
The articles we put out on other media channels and on which we collaborate with other groups are more oriented toward entertainment and less oriented toward education in rationality, although they do convey some rationality ideas. For those who engage more thoroughly with our content, we then provide resources that are more educationally oriented, such as workshop videos, online classes, books, and apps, all described on the “About Us” page. Our content is peer reviewed by our Advisory Board members and others who have expertise in decision-making, social work, education, nonprofit work, and other areas.
Finally, I want to lay out our Theory of Change. This is a standard nonprofit document that describes our goals, our assumptions about the world, what steps we take to accomplish our goals, and how we evaluate our impact. The Executive Summary of our Theory of Change is below, and there is also a link to the draft version of our full ToC at the bottom.
Executive Summary

1) The goal of Intentional Insights is to create a world where all rely on research-based strategies to make wise decisions and lead to mutual flourishing.

2) To achieve this goal, we believe that people need to be motivated to learn and have broadly accessible information about such research-based strategies, and also to integrate these strategies into their daily lives through regular practice.

3) We assume that:

Some natural and intuitive human thinking, feeling, and behavior patterns are flawed in ways that undermine wise decisions.

Problematic decision making undermines mutual flourishing in a number of life areas.

These flawed thinking, feeling, and behavior patterns can be improved through effective interventions.

We can motivate and teach people to improve their thinking, feeling, and behavior patterns by presenting our content in ways that combine education and entertainment.

4) Our intervention is helping people improve their patterns of thinking, feeling, and behavior to enable them to make wise decisions and bring about mutual flourishing.

5) Our outputs, what we do, come in the form of online content such as blog entries, videos, etc., on our channels and in external publications, as well as collaborations with other organizations.

6) Our metrics of impact are in the form of anecdotal evidence, feedback forms from workshops, and studies we run on our content.
Here is the draft version of our Theory of Change.
Also, about Endless September. After people engage with our content for a while, we introduce them to more advanced things on ClearerThinking, and we are in fact discussing collaborating with Spencer Greenberg, as I discussed in this comment. After that, we introduce them to CFAR and Less Wrong. So those who go through this chain are not the kind who would contribute to Endless September.
The large majority, we expect, would not go through this chain. They instead engage with rational thinking in other venues, as Viliam mentioned above. This fits with the fact that my goal, and that of Intentional Insights as a whole, is about raising the sanity waterline, and only secondarily getting people to the level of being aspiring rationalists who can participate actively on Less Wrong.
Well, that’s all. Looking forward to your thoughts! I’m always looking for better ways to do things, so I'm very happy to update my beliefs about our methods and optimize them based on wise advice :-)
EDIT: Added link to comment where I discuss our collaboration with Spencer Greenberg’s ClearerThinking and also about our audience engaging with Less Wrong, such as Ella.
I published an article in The Huffington Post promoting GiveDirectly and effective giving, which was shared on social media over 2K times.
First, on a meta-note, since Anna was too humble to mention it herself, I want to highlight that the CFAR 2015 Winter Fundraiser will last through January 31, 2016, with every $2 donated matched by $1 from CFAR supporters. Just to be clear, for those who don’t know me, I’m not a staff person or Board member at CFAR; I am in fact the President of another organization spreading rationality and effective altruism to a broad audience, so its mission is somewhat distinct from that of CFAR, which targets, as Anna said, those elites who are in the strongest position to impact the world. However, I’m also a monthly donor to CFAR, I very much support the mission, and I encourage you to donate to CFAR during this fundraiser, since your dollars will do a lot of good there.
Second, let me come down from meta and speak from my CFAR donor hat. I’m curious to learn more about the target group of elites you talk about, Anna, namely those “who are most likely to actually usefully impact the world.” When I think of MIRI Summer Fellows, I totally get your point regarding AI research. But what about offering training to others, such as aspiring politicians and bureaucrats who are likely to be in a position to make AI-relevant policies, as well as policies that address short- and medium-term existential risks (cyberwarfare, nuclear war, climate change, etc.) in the next several decades before the possibility of FAI becomes more tangible? If we can get politicians to be more sane about short-, medium-, and long-term existential risk, it seems like that would be a win-win scenario. What are CFAR’s thoughts on that?
Nancy, thank you for the hard work you do and the tough calls you have to make. The admin’s job is a lonely one, and not sufficiently appreciated. As someone who has done and is currently doing lots of admin stuff, I know that from personal experience. So thank you!
Did anybody do any rationality-themed body modifications? I recently got a rationality-themed tattoo, and so have some other folks I know. I was curious about what other Less Wrongers do.
Thanks for clarifying the deletion history, much appreciated.
From my own perspective, I do feel attacked, by someone who has also engaged in ad hominem attacks against me and likely sock puppetry. It’s been a pretty negative experience, and I’m trying to treat it as a “comfort zone expansion” opportunity.
I’d welcome you rewriting the wiki article since it seems that your comment received a lot of upvotes, indicating community support for your perspective.
Nope, they actually influenced me—they both got theirs before I did.
They are members of my Less Wrong meetup, and some of them volunteer for InIn, while others don’t.
For instance, one person has a tattoo of an elephant and a rider, with the elephant breaking the chain of an anchor. Another has a Bayesian math-themed tattoo.
I want to note that Azathoth123, another name for Eugene_Nier, previously engaged negatively with Intentional Insights, and that my karma went from 1009 to 838 after VoiceOfRa began criticizing me several days ago.
My overall updating from this thread has been:
Learning a lot more about the diversity of opinions and concerns among Less Wrongers.
1) Learned that there are a lot more risk-averse people on LW who are opposed to experimenting with new things, learning from experience, improving going forward, and optimizing the world than I had previously thought.
2) Learned a lot about Less Wrongers’ “ew” experiences and flinching away from [modern marketing], despite some getting it.
3) Learned that many Less Wrongers are strongly oriented toward perfectionism and bulletproof arguments at the expense of clarity and bridging inference gaps.
4) Surprised to see positive updates on my character (1, 2) as a result of this discussion, and will pay more attention to issues of character in the future; I think I paid too much attention to content previously and insufficient attention to character.
Updated toward some different strategies with Intentional Insights:
1) Orienting Intentional Insights content more toward providing breadcrumbs of links toward higher-quality materials than the people on Lifehack and The Huffington Post are currently reading.
2) Teaching our audience about the dangers of overconfidence sooner.
3) Taking more concrete steps to minimize the risk of Endless September and tainting the term “rationality” by decreasing mentions of Less Wrong and rationality in our content.
4) Being more clear and specific in communicating scientific thinking to our audiences.
5) Learned more about The Virtue of Silence and need to keep this virtue in mind.
6) Learned to consider more carefully the trade-offs of using and simplifying certain terms and concepts.
7) Updating more toward taking well-considered action despite opposition, and avoiding falling into status-quo bias and information bias.
8) Stopping unproductive conversations sooner.
9) Overall, I need to focus more on striving to learn things even from highly negative feedback, and avoid the instinct to flinch away or swing back. This is my aspiration, and I did not always succeed in the course of this discussion. However, I believe this experience will help me grow stronger in this domain.
Thanks all for your participation. As you see, you all taught me something. I appreciate you revealing your mental maps to the extent you chose to do so, and now my territory is clearer. My gratitude to you.
EDIT: Edited for formatting, the bullet points did not come out right away.
use every deviation from perfection as ammunition against even fully correct forms of good ideas.
As a professional educator and communicator, I have deep visceral experience of how “fully correct forms of good ideas” are inherently incompatible with bridging the inferential distance between the ordinary Lifehack reader and the kind of thinking space on Less Wrong. Believe me, I have tried to explain more complex ideas from rationality to students many times. Moreover, I have tried to get more complex articles into Lifehack and elsewhere many times. They have all been rejected.
This is why it’s not possible for the lay audience to read scientific papers, or even the Sequences. This is why we have to digest the material for them, and present it in sugar-coated pills.
To be clear, I am not speaking of talking down to audiences. I like sugar-coated pills myself when I take medicine. To use an example related to knowledge, when I am offered information on a new subject, I first have to be motivated to want to engage with the topic, then learn the basic broad generalities, and only then go on to learn more complex things that represent the “fully correct forms of good ideas.”
This is the way education works in general. This is especially the case for audiences who are not trapped in the classroom like my college students. They have to be motivated to invest their valuable time into learning about a new topic. They have to really feel it’s worth their time and energy.
This is why the material has to be presented in an entertaining and engaging way, while also containing positive memes. Listicles are simply the most entertaining and engaging format that also deals with the inferential gap. The listicles offer breadcrumbs in the form of links that more interested readers can follow to get to the more complex material and develop their knowledge over time, slowly bridging that inferential gap. More on how we do this is in my comment here.
I can’t find any discussion in the linked article about why research is a key way of validating truth claims
The article doesn’t discuss why research is a key way of validating truth claims. Instead of telling, it shows that research is a key way of validating truth claims. Here is a section from the article:
Smiling and other mood-lifting activities help improve willpower. In a recent study, scientists first drained the willpower of participants through having them resist temptation. Then, for one group, they took steps to lift people’s moods, such as giving them unexpected gifts or showing them a funny video. For another group, they just let them rest. Compared to people who just rested for a brief period, those whose moods were improved did significantly better in resisting temptation later! So next time you need to resist temptation, improve your mood!
This discussion of a study validating the proposition “improving mood = higher willpower” shows, rather than tells, the value of scientific studies as a way to validate truth claims. This is the first point in the article. In the rest of the article, I link to studies, or to articles linking to studies, without going over each one, since I already discussed a study and demonstrated to Lifehack readers that studies are a powerful form of evidence for determining truth claims.
Now, I hear you when you say that while some people may benefit by trying to think like scientists more and consider how to study the world in order to validate claims, others will be simply content to rely on science as a source of truth. While I certainly prefer the former, I’ll take the latter as well. How many global warming or evolution deniers are there, including among Lifehack readers? How many refuse to follow science-informed advice on not smoking and other matters? In general, if the lesson they learn is to follow the advice of scientists, instead of religious preachers or ideological politicians from any party, this will be a better outcome for the world, I would say.
what if the distribution of response is bimodal, with some readers liking it a little bit and some readers absolutely loathing it to the point of sharing their disgust with friends
I have an easy solution for that one. Lifehack editors carefully monitor the sentiment of reactions to their articles on social media, and if there are negative reactions, they let writers know. They did not let me know of any significant negative reactions to my article above the baseline, which is an indication that the article has been highly positively received by their audience and those they share it with.
I think I presented plenty of information in my two long comments in response to your concerns. So what are your probabilities now of the worst-case scenario and horrific long-term impact? Still at 20%? Are your impressions of the net positive of my activities still at 30%? If so, what information would it take to shift your thinking?
EDIT: added link to my other comment
Also good to keep in mind this article by Danny Kahneman: “Why Moving to California Won’t Make You Happy”.
BTW, sad to see this post downvoted, pretty good post.
I tend to think portraying EA as “nascent” is appealing to more readers than not describing it that way. This is for two reasons.
First, many readers will be attracted by the possibility of becoming part of the cool new thing, especially the younger and intellectually-oriented readers, who are more likely to convert to EA causes, since the EA movement swings younger and more intellectual than the general population.
Second, describing the EA movement as “nascent” is accurate, in terms of the number of people who identify with EA (as a rough heuristic, the main EA FB group has under 10K members). So just describing it as a movement without identifying it as a small movement might be perceived by those researching the topic after reading the article as disingenuous.
How/whether to do rationality outreach
Updated on the benefits of signing up for cryonics, thanks!
For how many times the article itself was shared, Lifehack has that prominently displayed on their website. Then, we use Google Analytics, which gives us information on how many people visited our website from Lifehack itself. We can’t track them further than that. If you have ideas about how to track them further, especially using free software, I’d be interested in learning about that!
Glad to do the survey, and appreciate that LW takes the views of readers seriously, that’s great!