My wife and I are monthly donors, and here’s to CFAR having a great 2015! I’d also love to talk about potential collaborations between CFAR and Intentional Insights as we get our own infrastructure and internal operations set up well in the next month or two.
Gleb_Tsipursky
Optimizing the Twelve Virtues of Rationality
Celebrating All Who Are in Effective Altruism
Newsjacking for Rationality and Effective Altruism
Spreading rationality through engagement with secular groups
Collaborative Truth-Seeking
Review and Thoughts on Current Version of CFAR Workshop
The Value of Those in Effective Altruism
Why You Should Be Public About Your Good Deeds
Great progress, and I just donated! As a nonprofit director myself, I am especially happy to see your systematization efforts going forward. That's what will help pave the path to long-term success. Great job!
I really appreciate you sharing your concerns. It helps me and others involved in the project learn what to avoid going forward and how to optimize our methods. Thank you for laying them out so clearly! I think this comment will be something I come back to in the future as I and others create content.
I want to see if I can address some of the concerns you expressed.
In my writing for venues like Lifehack, I do not speak of rationality explicitly as something we are promoting. As in this post, I talk about growing mentally stronger or being intentional: euphemisms that do not associate rationality as such with what we're doing. I mention rationality only incidentally, such as when I speak of Rationality Dojo as a proper noun. I also generally do not talk of cognitive biases, and instead use other euphemistic language, such as referring to thinking errors, as in this article for Salon. This gets at the point about watering down rationality.
I would question the point about arguing from authority. One of the goals of Intentional Insights is to convey what "science-based" itself means. For example, in this article, I specifically discuss research studies as a key way of validating truth claims. Recall that we all suffer from the curse of knowledge on this point. How can we expect to teach people who do not know what "science-based" means without teaching it to them in the first place? Do you remember when you were at a stage when you did not know the value of scientific studies, and then came to learn about them as a useful way of validating evidence? That is what I'm doing in the article above. I hope this helps address some of the concerns about arguing from authority.
I hear you about the inauthentic-feeling writing style. As I told Lumifer in my comment below, I cringed at that when I was learning how to write that way, too. You can't believe how weird that feels to an academic. My Elephant kicks and screams and tries to throw off my Rider whenever I do it. It's very ughy. This writing style is much more natural for me. So is this.
However, this inauthentic-feeling style is the one needed to get into Lifehack. I have been trying to adapt my writing to get into venues like that for the last year and a half, and only in the last couple of months did I succeed sufficiently to be published in Lifehack. Unfortunately, when trying to spread good ideas to the kind of people who read Lifehack, it's necessary to use the language, genre, and format that they want to read and that the editors will publish. Believe me, I also had my struggles with editors there, who cut more complex points, and any links to scientific papers, as too complex for their audience.
This gets at the broader point of who reads these articles. I want to quote a comment that Tem42 made in response to Lumifer:
Unless you mean simply the site that it is posted on smells of snake oil. In that case I agree, but at the same time, so what? The people that read articles on that site don’t smell snake oil, whether they should or not. If the site provides its own filter for its audience, that only makes it easier for us to present more highly targeted cognitive altruism.
Indeed, the site itself provides a filter. The people who read that site are not like you and me. Don't fall for the typical mind fallacy here. They have complete cognitive ease with this content. They like to read it. They like to share it. This is the stuff they go for. My articles are meant to go higher than their average, such as this or this, conveying both research-based tactics applicable to daily life and frameworks of thinking conducive to moving toward rationality (without using the word, as I mentioned above). I hope this helps address the concerns about the writing style and about immunizing people against good ideas, since the readers of this content are specifically looking for this kind of writing style.
Does this cause any updating toward a decreased likelihood of nightmare scenarios like the one you described?
A Weird Trick To Manage Your Identity
[Link] Less Wrong and Agency in the Huffington Post
Promoting rationality to a broad audience—feedback on methods
Thank you for bringing this up as a topic of discussion! I’m really interested to see what the Less Wrong community has to say about this.
Let me be clear that my goal, and that of Intentional Insights as a whole, is raising the sanity waterline. We do not assume that all who engage with our content will get to the level of being aspiring rationalists who can participate actively on Less Wrong. This is not to say that it doesn't happen; in fact, some members of our audience, such as Ella, have already started to do so. Others are right now reading the Sequences and passively lurking without actively engaging.
I want to add a bit more about the Intentional Insights approach to raising the sanity waterline broadly.
Social media is only one channel of our work on raising the sanity waterline. The goal of that channel is to use online marketing strategies and the language of self-improvement to spread rationality broadly through engaging articles. To be concrete and specific, here is an example of one such article: "6 Science-Based Hacks for Growing Mentally Stronger." BTW, editors are usually the ones who write the headline, so in most cases I can't "take the credit" for the click-baity nature of the title.
Another area of work is publishing op-eds in prominent venues that address recent political matters in a politically oriented manner. For example, here is an article of this type: "Get Donald Trump out of my brain: The neuroscience that explains why he's running away with the GOP."
Another area of work is collaborating with other organizations, especially secular ones, to get our content to their audience. For example, here is a workshop we did on helping secular people find purpose using science.
We also give interviews to prominent venues on rationality-informed topics: 1, 2.
Our model works as follows: once people check out our content on other websites and venues, some will then visit the Intentional Insights website to engage with its content. As an example, after the article on 6 Science-Based Hacks for Growing Mentally Stronger appeared, it was shared over 2K times on social media, so it probably had views in the tens of thousands, if not hundreds of thousands. Then over 1K people visited the Intentional Insights website directly from the Lifehack website. In other words, they were interested enough not only to skim the article but also to follow the links to Intentional Insights, which were listed in my bio. Of those, some will want to engage with our content further. As an example, we had a large wave of new people follow us on Facebook and other social media and subscribe to our newsletter in the week after the article came out. I can't say how many did so as a result of seeing the article versus other factors, but there was a large bump. So there is evidence of people wanting to get more thoroughly engaged.
The articles we put out on other media channels, and on which we collaborate with other groups, are more oriented toward entertainment and less toward education in rationality, although they do convey some rationality ideas. For those who engage more thoroughly with our content, we then provide resources that are more educationally oriented, such as workshop videos, online classes, books, and apps, all described on the "About Us" page. Our content is peer reviewed by our Advisory Board members and others who have expertise in decision-making, social work, education, nonprofit work, and other areas.
Finally, I want to lay out our Theory of Change. This is a standard nonprofit document that describes our goals, our assumptions about the world, what steps we take to accomplish our goals, and how we evaluate our impact. The Executive Summary of our Theory of Change is below, and there is also a link to the draft version of our full ToC at the bottom.
Executive Summary

1) The goal of Intentional Insights is to create a world where all rely on research-based strategies to make wise decisions that lead to mutual flourishing.

2) To achieve this goal, we believe that people need to be motivated to learn about research-based strategies, need broadly accessible information about them, and need to integrate these strategies into their daily lives through regular practice.

3) We assume that:
- Some natural and intuitive human thinking, feeling, and behavior patterns are flawed in ways that undermine wise decisions.
- Problematic decision making undermines mutual flourishing in a number of life areas.
- These flawed thinking, feeling, and behavior patterns can be improved through effective interventions.
- We can motivate and teach people to improve their thinking, feeling, and behavior patterns by presenting our content in ways that combine education and entertainment.

4) Our intervention is helping people improve their patterns of thinking, feeling, and behavior so that they can make wise decisions and bring about mutual flourishing.

5) Our outputs, what we do, come in the form of online content such as blog entries and videos, both on our channels and in external publications, as well as collaborations with other organizations.

6) Our metrics of impact come in the form of anecdotal evidence, feedback forms from workshops, and studies we run on our content.
Here is the draft version of our Theory of Change.
Also, about Endless September: after people engage with our content for a while, we introduce them to more advanced material on ClearerThinking, and we are in fact discussing a collaboration with Spencer Greenberg, as I discussed in this comment. After that, we introduce them to CFAR and Less Wrong. So those who go through this chain are not the kind who would contribute to Endless September.
We expect that the large majority would not go through this chain. They would instead engage with rational thinking in other venues, as Viliam mentioned above. This fits with the fact that my goal, and that of Intentional Insights as a whole, is raising the sanity waterline, and only secondarily getting people to the level of being aspiring rationalists who can participate actively on Less Wrong.
Well, that's all. I look forward to your thoughts! I'm always looking for better ways to do things, so I'm very happy to update my beliefs about our methods and optimize them based on wise advice :-)
EDIT: Added a link to the comment where I discuss our collaboration with Spencer Greenberg's ClearerThinking, and also about members of our audience, such as Ella, engaging with Less Wrong.
Intentional Insights and the Effective Altruism Movement – Q & A
Rationality promoted by the American Humanist Association
I published an article in The Huffington Post promoting GiveDirectly and effective giving, which was shared on social media over 2K times.
First, on a meta note, since Anna was too humble to mention it herself, I want to highlight that the CFAR 2015 Winter Fundraiser will last through January 31, 2016, with every $2 donated matched by $1 from CFAR supporters. Just to be clear, for those who don't know me: I'm not a staff person or Board member at CFAR. I'm in fact the President of another organization spreading rationality and effective altruism to a broad audience, so my mission is somewhat distinct from that of CFAR, which targets, as Anna said, those elites who are in the strongest position to impact the world. However, I'm also a monthly donor to CFAR and very much support its mission, and I encourage you to donate to CFAR during this fundraiser, since your dollars will do a lot of good there.
Second, let me come down from meta and speak with my CFAR donor hat on. I'm curious to learn more about the target group of elites that you talk about, Anna, namely those "who are most likely to actually usefully impact the world." When I think of MIRI Summer Fellows, I totally get your point regarding AI research. But what about offering training to others, such as aspiring politicians and bureaucrats, who are likely to be in a position to make AI-relevant policies, as well as policies that address short- and medium-term existential risks (cyberwarfare, nuclear war, climate change, etc.) in the next several decades, before the possibility of FAI becomes more tangible? If we can get politicians to be more sane about short-, medium-, and long-term existential risk, that seems like a win-win scenario. What are CFAR's thoughts on that?
Glad to do the survey, and I appreciate that LW takes the views of readers seriously. That's great!