New: use The Nonlinear Library to listen to the top LessWrong posts of all time

Update #1: It’s a rite of passage to binge the top LessWrong posts of all time, and now you can do it on your podcast app.

We (Nonlinear) made “top of all time” playlists for LessWrong, the EA Forum, and the Alignment Forum. Each contains roughly 400 of the most upvoted posts.

Update #2: The original Nonlinear Library feed includes top posts from the EA Forum, LessWrong, and the Alignment Forum. Now, by popular demand, you can get forum-specific feeds.

Stay tuned for more features. We’ll soon be launching channels by tag, so you can listen to specific subjects, such as longtermism, rationality, animal welfare, or global health. Enter your email here to get notified as we add more channels.

Below is the original explanation of The Nonlinear Library and its theory of change.


We are excited to announce the launch of The Nonlinear Library, which allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs.

In the rest of this post, we’ll explain our reasoning for the audio library, why it’s useful, why it’s potentially high impact, its limitations, and our plans. You can read it here or listen to the post in podcast form here.

Listen here: Spotify, Google Podcasts, Pocket Casts, Apple, or elsewhere

Or, just search for it in your preferred podcasting app.

Goal: increase the number of people who read EA research

A koan: if your research is high quality, but nobody reads it, does it have an impact?

Generally speaking, the theory of change for research is that you investigate an area, come to better conclusions, people read those conclusions and make better decisions, and the world ends up better. So the answer is no. Barring some edge cases (1), if nobody reads your research, it usually won’t have any impact.

Research → Better conclusion → People learn about conclusion → People make better decisions → The world is better

Nonlinear is working on the third step of this pipeline: increasing the number of people engaging with the research. By increasing the total number of EA and rationalist articles read, we’re increasing the impact of all of that content.

This step is often relatively neglected because researchers typically prefer doing more research to promoting their existing output. Some EAs seem to think that if their article was promoted once, in one location such as the EA Forum, then surely most of the community saw and read it. In reality, it is rare for more than a small percentage of the community to read even the top posts. This is an expected-value tragedy: a researcher puts hundreds of hours into an important report, only a handful of people read it, and its potential impact is dramatically reduced.

Here are some purely hypothetical numbers just to illustrate this way of thinking:

Imagine that you, a researcher, have spent 100 hours producing outstanding research that is relevant to 1,000 out of a total of 10,000 EAs.

Each relevant EA who reads your research will generate $1,000 of positive impact. So, if all 1,000 relevant EAs read your research, you will generate $1 million of impact.

You post it to the EA Forum, where posts receive 500 views on average. Let’s say, because your report is long, only 20% read the whole thing: that’s 100 readers. So you’ve created 100 × $1,000 = $100,000 of impact. Since you spent 100 hours and created $100,000 of impact, that’s $1,000 per hour. Pretty good!

But if you were to spend, say, one extra hour promoting your report (for example, by posting links in EA-related Facebook groups) and generate another 100 readers, that would produce another $100,000 of impact. That’s $100,000 per marginal hour, or roughly $2,000 per hour once you account for the fixed cost of the original research.

Likewise, if another 100 EAs were to listen to your report while commuting, that would generate an incremental $100,000 of impact—at virtually no cost, since it’s fully automated.

In this illustrative example, one extra hour spent sharing your findings, plus a public system that turns them into audio for you, roughly triples both your impact and your cost-effectiveness.
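For the numerically inclined, here is that back-of-the-envelope arithmetic as a tiny script. All of the figures are the purely hypothetical numbers from the example above, not real data:

```python
# Purely illustrative numbers from the hypothetical example above.
VALUE_PER_READER = 1_000   # dollars of impact per relevant EA who reads the report
RESEARCH_HOURS = 100       # hours spent on the original research

def impact_and_rate(readers, extra_hours=0):
    """Return (total impact, impact per hour) for a given number of readers."""
    impact = readers * VALUE_PER_READER
    return impact, impact / (RESEARCH_HOURS + extra_hours)

print(impact_and_rate(100))                 # Forum readers only:       (100000, 1000.0)
print(impact_and_rate(200, extra_hours=1))  # plus 1 hour of promotion: (200000, ~1980)
print(impact_and_rate(300, extra_hours=1))  # plus audio listeners:     (300000, ~2970)
```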

Another way the audio library is high expected value is that instead of acting as a multiplier on just one researcher or one organization, it acts as a multiplier on nearly the entire output of the EA research community. This allows for two benefits: long-tail capture and the power of large numbers and multipliers.

Long-tail capture. The value of research is extremely long-tailed, with a small fraction of posts having far more impact than all the rest. Unfortunately, it’s not easy to do highly impactful research, or to predict in advance which topics will get the most traction. If you as a researcher want to do research that dramatically changes the landscape, your odds are low. However, if you increase the impact of most of the EA community’s research output, you also “capture” the impact of the long tails when they occur. Your probability of applying a multiplier to very impactful research is actually quite high.

Power of large numbers and multipliers. If you apply a multiplier to a bigger number, you have a proportionately larger impact. This means that even a small increase in the multiplier leads to outsized improvements in output. For example, if a single researcher toiled away to increase their readership by 50%, that would likely have a smaller impact than the Nonlinear Library increasing the readership of the EA Forum by even 1%. This is because 50% times a small number is still very small, whereas 1% times a large number is actually quite large. And there’s reason to believe that the library could have much larger effects on readership, which brings us to our next section.
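To make that last comparison concrete, here is the same point with made-up readership figures (these numbers are assumptions purely for illustration, not actual statistics):

```python
# Made-up readership figures, purely for illustration.
one_researchers_reads = 500      # yearly reads of a single researcher's posts
whole_forum_reads = 1_000_000    # yearly reads across the entire forum

print(0.50 * one_researchers_reads)  # +50% for one researcher: 250 extra reads
print(0.01 * whole_forum_reads)      # +1% for the whole forum: 10,000 extra reads
```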

Why it’s useful

EA needs more audio content

EA has a vibrant online community, and there is an amazing amount of well-researched, insightful, and high-impact content. Unfortunately, almost all of it is written, and very little is available in audio form.

There are a handful of great podcasts, such as the 80,000 Hours and FLI podcasts, and some books are available on Audible. However, new episodes come out relatively infrequently, and new audiobooks even less often. There are a few other EA-related podcasts, including one for the EA Forum, but a substantial percentage have gone dormant, as is all too common given the considerable effort required to put out episodes.

There are a lot of listeners

The limited availability of audio is a shame because many people love to listen to content. For example, ever since the 80,000 Hours podcast came out, a common way for people to become more fully engaged in EA has been to mainline all of its episodes. Many others got involved through bingeing the HPMOR audiobook, as Nick Lowry puts it in this meme. We are definitely a community of podcast listeners.

Why audio? Often you can’t read with your eyes but you can with your ears: when you’re working out, commuting, or doing chores, for example. Sometimes it’s just for a change of pace. In addition, some people find listening easier than reading. Because it feels easier, they end up spending time learning that they might otherwise have spent on lower-value things.

Regardless, if you like to listen to EA content, you’ll quickly run out of relevant podcasts—especially if you’re listening at 2-3x speed—and have to either use your own text-to-speech software or listen to topics that are less relevant to your interests.

Existing text-to-speech solutions are sub-optimal

We’ve experimented extensively with text-to-speech software over the years, and all of the dozens of programs we’ve tried have fairly substantial flaws. In fact, a huge inspiration for this project was our frustration with the existing solutions and thinking that there must be a better way. Here are some of the problems that often occur with these apps:

  • They are glitchy: they crash frequently, lose your spot, fail to handle formatting edge cases, etc.

  • Their playlists don’t work or don’t exist, so you have to pause every 2-7 minutes to pick a new article, which makes them awkward to use during commutes, workouts, or chores. Or you can’t change the order, as with Pocket, which makes it unusable for many.

  • They’re platform specific, forcing you to download yet another app, instead of, say, the podcast app you already use.

  • Pause buttons on headphones don’t work, making it exasperating to use when you’re being interrupted frequently.

  • Their UI is bad, requiring you to constantly fiddle around with the settings.

  • They don’t automatically add new posts. You have to add them manually, so you often miss important updates.

  • They use old, low-quality voices, instead of the newer, way better ones. Voices have improved a lot in the last year.

  • They cost money, creating yet another barrier to the content.

  • They limit you to 2x speed (at most), and their original voices are slower than most human speech, so it’s more like 1.75x. This is irritating if you’re used to faster speeds.

In the end, this leads to only the most motivated people using the services, leaving out a huge percentage of the potential audience. (2)

How The Nonlinear Library fixes these problems

To make it as seamless as possible for EAs to use, we decided to release it as a podcast, so you can use the podcast app you’re already familiar with. Additionally, podcast players tend to be reasonably well designed and give you good control over playlists and playback speed.

We’re paying for some of the best AI voices because the old voices suck. We also spent a bunch of time fixing weird formatting errors and mispronunciations, and we have a system for fixing other recurring ones. If you spot any frequent mispronunciations or bugs, please report them in this form so we can continue improving the service.
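For the curious, you can think of that recurring-fix system as a simple substitution pass run over each post before it goes to the text-to-speech engine. The sketch below is only an illustration of the idea; the specific replacements and function names are hypothetical, not our production pipeline:

```python
import re

# Hypothetical examples of recurring fixes; the real list grows out of bug reports.
PRONUNCIATION_FIXES = {
    r"\bHPMOR\b": "H P M O R",   # spell out initialisms the voice would mangle
    r"\bAGI\b": "A G I",
    r"&amp;": "and",             # undo a common formatting artifact
}

def clean_for_tts(text: str) -> str:
    """Apply recurring formatting and pronunciation fixes before text-to-speech."""
    for pattern, replacement in PRONUNCIATION_FIXES.items():
        text = re.sub(pattern, replacement, text)
    return text

print(clean_for_tts("HPMOR &amp; AGI timelines"))  # -> "H P M O R and A G I timelines"
```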

Initially, as an MVP, we’re just posting each day’s top upvoted articles from the EA Forum, Alignment Forum, and LessWrong. (3) We are planning on increasing the size and quality of the library over time to make it a more thorough and helpful resource.
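As a rough sketch of how that selection step works, here is the upvote filter in miniature. Only the threshold numbers (listed in footnote 3) come from us; the post structure and helper function are hypothetical stand-ins, since each forum has its own API:

```python
# Upvote thresholds from footnote 3; the post dicts and helper are hypothetical.
THRESHOLDS = {"EA Forum": 25, "LessWrong": 30, "Alignment Forum": 0}

def posts_to_convert(posts):
    """Keep only the posts that meet their forum's upvote threshold."""
    return [p for p in posts if p["karma"] >= THRESHOLDS[p["forum"]]]

sample = [
    {"forum": "EA Forum", "karma": 40, "title": "A"},
    {"forum": "LessWrong", "karma": 12, "title": "B"},
    {"forum": "Alignment Forum", "karma": 3, "title": "C"},
]
print([p["title"] for p in posts_to_convert(sample)])  # -> ['A', 'C']
```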

Why not have a human read the content?

The Astral Codex Ten podcast and other rationalist podcasts do this. We seriously considered it, but it’s just too time consuming, and there is a lot of written content. Given the value of EA time, both financially and counterfactually, this wasn’t a very appealing solution. We looked into hiring remote workers, but that would still have cost at least $30 an episode, compared to approximately $1 an episode with text-to-speech software.

Beyond the lower cost, text-to-speech also lets us build a far more complete library. If we did this with humans, even with a ton of time and management, we might be able to convert seven articles a week. At that rate, we’d never keep up with new posts, let alone the historical posts that are so valuable. With text-to-speech software, we can potentially keep up with all new posts and convert the old ones, creating a much more complete repository of EA content. Just imagine being able to listen to over 80% of the EA writing you’re interested in, compared to less than 1%.

Additionally, the automation of text-to-speech fits with Nonlinear’s general strategy of looking for interventions that have “passive impact”. Passive impact is the altruistic equivalent of passive income: you make an upfront investment and then keep generating value with little to no ongoing maintenance. If we used human readers, we’d have the constant ongoing cost of managing them and hiring replacements. With TTS, after setting it up, we can mostly let it run on its own, freeing up our time for other high-impact activities.

Finally, and least importantly, there is something delightfully ironic about having an AI talk to you about how to align future AI.

On a side note, if for whatever reason you would not like your content in The Nonlinear Library, just fill out this form. We can remove that particular article or add you to a list to never add your content to the library, whichever you prefer.

Future Playlists (“Bookshelves”)

There are a lot of sub-projects that we are considering doing or are currently working on. Here are some examples:

  • Top of all time playlists: a playlist of the top 300 upvoted posts of all time on the EA Forum, one for LessWrong, etc. This lets people binge all of the best content EA has put out over the years. Depending on their popularity, we will also consider setting up top playlists by year or by topic. As the library grows, we may add even larger lists as well.

  • Playlists by topic (or tag): a playlist for biosecurity, one for animal welfare, one for community building, etc.

  • Playlists by forum: one for the EA Forum, one for LessWrong, etc.

  • Archives. Our current model focuses on turning new content into audio. However, there is a substantial backlog of posts that would be great to convert.

  • Org specific podcasts. We’d be happy to help EA organizations set up their own podcast version of their content. Just reach out to us.

  • Other? Let us know in the comments if there are other sources or topics you’d like covered.

Who we are

We’re Nonlinear, a meta longtermist organization focused on reducing existential and suffering risks. More about us.

Footnotes

(1) Sometimes the researcher is the same person as the person who puts the results into action, such as Charity Entrepreneurship’s model. Sometimes it’s a longer causal chain, where the research improves the conclusions of another researcher, which improves the conclusions of another researcher, and so forth, but eventually it ends in real world actions. Finally, there is often the intrinsic happiness of doing good research felt by the researcher themselves.


(2) For those of you who want to use TTS for a wider variety of articles than what the Nonlinear Library will cover, the ones I use are listed below. Do bear in mind they each have at least one of the cons listed above. There are probably also better ones out there as the landscape is constantly changing.

(3) The current upvote thresholds for which articles are converted are:

  • 25 for the EA Forum
  • 30 for LessWrong
  • No threshold for the Alignment Forum, due to its low post volume

These thresholds are based on each forum’s post frequency, relevance to EA, and typical post quality at a given upvote level.