Prioritization Research for Advancing Wisdom and Intelligence

Link post

LessWrong note: I wrote this in a way slightly more optimized for the EA Forum than for LessWrong, because the post seemed slightly more appropriate there.

Summary

I think it makes sense for Effective Altruists to pursue prioritization research to figure out how best to improve the wisdom and intelligence[1] of humanity. I describe endeavors that would optimize for longtermism, though similar research efforts could make sense for other worldviews.

The Basic Argument

For those interested in increasing humanity’s long-term wisdom and intelligence[1], several wildly different types of intervention are on the table. For example, we could get better at teaching rationality, or we could make progress on online education. We could build forecasting systems and data platforms. We might even consider something more radical, like brain-computer interfaces or highly advanced pre-AGI AI systems.

These interventions share many of the same benefits. If we figured out ways to remove people’s cognitive biases so that they made better political decisions, the impact would be similar to that of forecasting systems on those same decisions. It seems natural to try to figure out how to compare these. We wouldn’t want to invest a lot of resources into one field, only to realize 10 years later that we could have spent them better in another. This prioritization is pressing because Effective Altruists are currently scaling up work in several relevant areas (rationality, forecasting, institutional decision making) while mostly ignoring others (brain-computer interfaces, fundamental internet improvements).

The point of this diagram is that all of the various interventions on the left could contribute to helping humanity gain wisdom and intelligence. Different interventions produce other specific benefits as well, but these are more idiosyncratic in comparison. The benefits that come via the intermediate node of wisdom and intelligence can be directly compared between interventions.

In addition to caring about prioritization between interventions, we should also care about estimating the importance of wisdom and intelligence work as a whole. The value of wisdom and intelligence gains is a crucial input for multiple interventions, so it doesn’t make much sense to ask each intervention’s research base to tackle the question independently. I’ve previously done a lot of thinking about this as part of estimating the value of my own forecasting work. It felt a bit silly to have to answer this bigger question about wisdom and intelligence, which lay far outside actual forecasting research.

I think we should consider doing serious prioritization research around wisdom and intelligence for longtermist reasons.[2] This work could both inform us of the cost-effectiveness of the available options as a whole and help us compare the different options directly (a toy sketch of such a comparison follows).
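To make that concrete, here is a minimal sketch of what comparing interventions through a shared intermediate metric could look like. Everything in it is hypothetical: the intervention names, the costs, and especially the hand-wavy unit of “wisdom points,” which stands in for whatever measure such research would eventually settle on.

```python
# Toy comparison of interventions through a shared intermediate outcome.
# All figures are invented placeholders; the point is the shape of the
# comparison, not the numbers.

interventions = {
    "rationality training": {"cost_usd": 5_000_000, "wisdom_points": 800},
    "forecasting platforms": {"cost_usd": 2_000_000, "wisdom_points": 500},
}

for name, d in interventions.items():
    per_million = d["wisdom_points"] / (d["cost_usd"] / 1_000_000)
    print(f"{name}: {per_million:.0f} wisdom points per $1M")
```

The hard part, of course, is not the division but defining and estimating the numerator; that is exactly what the proposed prioritization research would need to work out.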

Strong prioritization research between different wisdom and intelligence interventions might at first seem daunting. There are clearly many uncertainties and judgment calls involved. We don’t even have any good ways of measuring wisdom and intelligence at this point.

However, I think the Effective Altruist and Rationalist communities would prove up to the challenge. GiveWell’s early work drew skepticism for similar reasons. It took a long time for Quality-Adjusted Life Years to be accepted and adopted, but there has since been a lot of innovative and educational progress. Our communities now have hundreds of research person-years of prioritization experience, and at least a dozen domain-specific prioritization projects[3]. Maybe prioritization work in wisdom and intelligence isn’t far off.

List of Potential Interventions

I brainstormed an early list of potential interventions with examples of existing work. I think all of these could be viable candidates for substantial investment.

  • Human/organizational

    • Rationality-related research, marketing, and community building (CFAR, Astral Codex Ten, LessWrong, Julia Galef, Clearer Thinking)

    • Institutional decision making

    • Academic work in philosophy and cognitive science (GPI, FHI)

    • Cognitive bias research (Kahneman and Tversky)

    • Research management and research environments (for example, understanding what made Bell Labs work)

  • Cultural/political

    • Freedom of speech, protections for journalists

    • Liberalism (John Locke, Voltaire, many other intellectuals)

    • Epistemic Security (CSER)

    • Epistemic Institutions

  • Software/quantitative

    • Positive uses of AI for research, pre-AGI (Ought)

    • “Tools for thought” (note-taking, scientific software, collaboration)

    • Forecasting platforms (Metaculus, select Rethink Priorities research)

    • Data infrastructure & analysis (Faunalytics, IDInsight)

    • Fundamental improvements to the internet / cryptocurrency

    • Education innovations (MOOCs, YouTube, e-books)

  • Hardware/medical

    • Lifehacking/biomedical (nootropics, antidepressants, air quality improvements, light therapy, quantified self)

    • Genetic modifications (cloning, embryo selection)

    • Brain-computer interfaces (Kernel, Neuralink)

    • Digital people (FHI, Age of Em)

Key Claims

To summarize and clarify, here are a few claims that I believe. I’d appreciate insightful pushback from those who are skeptical of any of them.

  1. “Wisdom and intelligence” (or something very similar) is a meaningful and helpful category.

  2. Prioritization research can meaningfully compare different wisdom and intelligence interventions.

  3. Wisdom and intelligence prioritization research is likely tractable, though challenging. It’s not dramatically more difficult than global health or existential risk prioritization.

  4. Little of this prioritization work has been done so far, especially publicly.

  5. Wisdom and intelligence interventions are promising enough to justify significant work in prioritization.

Open Questions

This post is short and, of course, leaves open a bunch of questions. For example:

  1. Does “wisdom and intelligence” really represent a tractable idea to organize prioritization research around? What other options might be superior?

  2. Would wisdom and intelligence prioritization efforts face any unusual challenges or opportunities? (This would help us craft these efforts accordingly.)

  3. What specific research directions might wisdom and intelligence prioritization work investigate? For example, it could be vital to understand how to quantify group wisdom and intelligence (see the first sketch after this list).

  4. How might Effective Altruists prioritize this sort of research? Or, how would it rank under the ITN (importance, tractability, neglectedness) framework? (A toy scoring example follows this list.)

  5. How promising should we expect the best identifiable interventions in wisdom and intelligence to be? (This relates to the previous question.)
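On question 3, one narrow slice of “group wisdom” is already quantifiable: forecasting accuracy. Here is a minimal sketch, assuming we only care about resolved binary forecasts and using the standard Brier score (lower is better), that averages scores across a group’s predictions. The names and numbers are hypothetical.

```python
# Minimal sketch: scoring a group's forecast calibration with Brier scores.
# For a binary question, the Brier score is (p - outcome)^2, where p is the
# stated probability and outcome is 1 if the event happened, else 0.
# Lower is better; always answering 0.5 scores 0.25.

def brier_score(prob: float, outcome: int) -> float:
    """Squared error of a single probabilistic forecast."""
    return (prob - outcome) ** 2

def group_brier(forecasts: dict[str, list[tuple[float, int]]]) -> float:
    """Average Brier score over every resolved forecast by every member."""
    flat = [f for member in forecasts.values() for f in member]
    return sum(brier_score(p, o) for p, o in flat) / len(flat)

# Hypothetical data: (stated probability, resolved outcome) per member.
group = {
    "alice": [(0.9, 1), (0.7, 0)],
    "bob": [(0.6, 1), (0.2, 0)],
}
print(f"Group Brier score: {group_brier(group):.3f}")  # 0.175; lower = better
```

A real measure of group wisdom would need to cover far more than calibration on resolved questions, but the sketch shows that at least one component already has an established metric.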
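On question 4, here is what a toy ITN calculation might look like. Following 80,000 Hours’ convention, each factor is scored on a roughly logarithmic scale, so the factors can be summed rather than multiplied. Every score below is invented for illustration, not an actual assessment.

```python
# Toy ITN comparison. Cost-effectiveness is roughly proportional to
# importance x tractability x neglectedness; with log-scale scores,
# the three factors add. All scores are invented placeholders.

areas = {
    "wisdom & intelligence prioritization": {"I": 12, "T": 4, "N": 8},
    "some comparison area": {"I": 14, "T": 6, "N": 2},
}

for name, s in areas.items():
    total = s["I"] + s["T"] + s["N"]  # log-scale scores add
    print(f"{name}: I={s['I']}, T={s['T']}, N={s['N']} -> total {total}")
```

Producing defensible scores here is, again, exactly the open research problem this post is pointing at.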

I intend to write about some of these later. But for now, I’d like to let others think about them without anchoring.

There’s some existing work advocating for broad interventions in wisdom and intelligence, and there’s existing work on the effectiveness of particular interventions. But I’m not familiar with existing research that prioritizes between these interventions (please message me if you know of such work).

Select discussion includes, or can be found by searching for:

Thanks to Edo Arad, Miranda Dixon-Luinenburg, Nuño Sempere, Stefan Schubert, and Brendon Wong for comments and suggestions.


[1]: What do I mean by “wisdom and intelligence”? I expect this to be roughly intuitive to some readers, especially with the attached diagram and list of example interventions. The important cluster I’m going for is something like “the overlapping benefits that would come from the listed interventions.” I expect this to look like some combination of calibration, accuracy on key beliefs, the ability to do intellectual work efficiently and effectively, and knowledge about important things. It’s a cluster that’s arguably a subset of “optimization power” or “productivity.” I might spend more time addressing this definition in future posts, but such a discussion seemed too dry and technical for this one. All that said, I’m really not sure about this, and hope that further research will reveal better terminology.

[2]: Longtermists would likely have a lower discount rate than others, which would justify investigating more long-term wisdom and intelligence interventions. I think non-longtermist prioritization in these areas could be valuable but would be highly constrained by the discount rates involved. I don’t particularly care about the question of “should we have one prioritization project that tries to separately optimize for longtermist and non-longtermist theories, or should we have separate prioritization projects?”

[3]: GiveWell, Open Philanthropy (in particular, subgroups focused on specific cause areas), Animal Charity Evaluators, Giving Green, Organization for the Prevention of Intense Suffering (OPIS), Wild Animal Initiative, and more.