Start Under the Streetlight, then Push into the Shadows

See also: Hack Away at the Edges.

The streetlight effect

You’ve heard the joke before:

Late at night, a police officer finds a drunk man crawling around on his hands and knees under a streetlight. The drunk man tells the officer he’s looking for his wallet. When the officer asks if he’s sure this is where he dropped the wallet, the man replies that he thinks he more likely dropped it across the street. “Then why are you looking over here?” the befuddled officer asks. “Because the light’s better here,” explains the drunk man.

The joke illustrates the streetlight effect: we “tend to look for answers where the looking is good, rather than where the answers are likely to be hiding.”

Freedman (2010) documents at length some harms caused by the streetlight effect. For example:

A bolt of excitement ran through the field of cardiology in the early 1980s when anti-arrhythmia drugs burst onto the scene. Researchers knew that heart-attack victims with steady heartbeats had the best odds of survival, so a medication that could tamp down irregularities seemed like a no-brainer. The drugs became the standard of care for heart-attack patients and were soon smoothing out heartbeats in intensive care wards across the United States.

But in the early 1990s, cardiologists realized that the drugs were also doing something else: killing about 56,000 heart-attack patients a year. Yes, hearts were beating more regularly on the drugs than off, but their owners were, on average, one-third as likely to pull through. Cardiologists had been so focused on immediately measurable arrhythmias that they had overlooked the longer-term but far more important variable of death.

Start under the streetlight

Of course, there are good reasons to search under the streetlight:

It is often extremely difficult or even impossible to cleanly measure what is really important, so scientists instead cleanly measure what they can, hoping it turns out to be relevant.

In retrospect, we might wish cardiologists had done a decade-long longitudinal study measuring the long-term effects of the new anti-arrhythmia drugs of the 1980s. But it’s easy to understand why they didn’t. Decade-long longitudinal studies are expensive, and resources are limited. It was more efficient to rely on an easily measurable proxy variable like arrhythmias.

We must remember, however, that the analogy to the streetlight joke isn’t exact. Searching under the streetlight gives the drunkard virtually no information about where his wallet might be. But in science and other disciplines, searching under the streetlight can reveal helpful clues about the puzzle you’re investigating. Given limited resources, it’s often best to start searching under the streetlight and then, initial clues in hand, push into the shadows.1

The problem with streetlight science isn’t that it relies on easily measurable proxy variables. If you want to figure out how some psychological trait works, start with a small study using free undergraduates at your home university; that’s a good way to test hypotheses cheaply. The problem comes when researchers don’t appropriately flag the fact that their subjects were WEIRD (Western, educated, industrialized, rich, and democratic) and that a larger study needs to be done on a more representative population before we start drawing conclusions. (Another problem is that despite researchers’ cautions against overgeneralizing from a study of WEIRD subjects, the media will write splashy, universalizing headlines anyway.)

But money and time aren’t the only resources that might be limited. Another is human reasoning ability. Human brains were built for hunting and gathering on the savannah, not for unlocking the mysteries of fundamental physics or intelligence or consciousness. So even if time and money aren’t limiting factors, it’s often best to break a complex problem into pieces and think through the simplest pieces, or the pieces for which our data are most robust, before trying to answer the questions we most want to solve.

As Pólya advises in his hugely popular How to Solve It, “If you cannot solve the proposed problem, try to solve first some related [but easier] problem.” In physics, this related but easier problem is often called a toy model. In other fields, it is sometimes called a toy problem. Animal models are often used as toy models in biology and medicine.
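To see Pólya’s heuristic in action, consider a standard illustration in the spirit of his worked examples (the algebra below is mine, not a quotation from the book). Suppose you cannot immediately find the space diagonal $d$ of a rectangular box with sides $a$, $b$, and $c$. First solve the related but easier problem of the diagonal $f$ of one rectangular face, which is just the Pythagorean theorem, and then reuse the answer:

\[
f = \sqrt{a^2 + b^2}, \qquad d = \sqrt{f^2 + c^2} = \sqrt{a^2 + b^2 + c^2}.
\]

The easier problem isn’t a detour: its solution appears as a visible stepping stone inside the solution to the harder one.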

Or, as Scott Aaronson put it:

...I don’t spend my life thinking about P versus NP [because] there are vastly easier prerequisite questions that we already don’t know how to answer. In a field like [theoretical computer science], you very quickly get used to being able to state a problem with perfect clarity, knowing exactly what would constitute a solution, and still not having any clue how to solve it… And at least in my experience, being pounded with this situation again and again slowly reorients your worldview… Faced with a [very difficult question,] you learn to respond: “What’s another question that’s easier to answer, and that probably has to be answered anyway before we have any chance on the original one?”

I’ll close with two examples: GiveWell on effective altruism and MIRI on stability under self-modification.

GiveWell on effective altruism

GiveWell’s mission is “to find outstanding giving opportunities and publish the full details of our analysis to help donors decide where to give.”

But finding and verifying outstanding giving opportunities is hard. Consider the case of one straightforward-seeming intervention: deworming.

Nearly 2 billion people (mostly in poor countries) are infected by parasitic worms that hinder their cognitive development and overall health, which in turn creates barriers to economic development in the regions where the worms are common. Luckily, deworming pills are cheap, and early studies indicated that they improved educational outcomes. The DCP2 (the second edition of Disease Control Priorities in Developing Countries), produced by over 300 contributors in collaboration with the World Health Organization, estimated that a particular deworming treatment was one of the most cost-effective treatments in global health, at just $3.41 per DALY (disability-adjusted life year).

Unfortunately, things are not so simple. A careful 2008 review of the evidence by the Cochrane Collaboration concluded that, due to weaknesses in some studies’ designs and other factors, “No effect [of deworming drugs] on cognition or school performance has been demonstrated.” And in 2011, GiveWell found that a spreadsheet used to produce the DCP2’s estimates contained five separate errors that, when corrected, increased the cost estimate for deworming by roughly a factor of 100. In 2012, another Cochrane review was even more damning for the effectiveness of deworming, concluding that “Routine deworming drugs given to school children… has not shown benefit on weight in most studies… For haemoglobin and cognition, community deworming seems to have little or no effect, and the evidence in relation to school attendance, and school performance is generally poor, with no obvious or consistent effect.”
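To get a feel for the size of that correction, here is a back-of-the-envelope restatement of the figures already cited (the $341 figure is my arithmetic, not GiveWell’s published corrected estimate):

\[
\$3.41 \text{ per DALY} \times 100 \approx \$341 \text{ per DALY}.
\]

At hundreds of dollars per disability-adjusted life year rather than a few dollars, the treatment would no longer have ranked among the most cost-effective interventions in global health.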

On the other hand, Innovations for Poverty Action critiqued the 2012 Cochrane review, and GiveWell said the review did not fully undermine the case for its #3 recommended charity, which focuses on deworming.

What are we to make of this? Thousands of hours of data collection and synthesis went into producing the initial case for deworming as a cost-effective intervention, and thousands of additional hours were required to discover flaws in those initial analyses. In the end, GiveWell recommends one deworming charity, the Schistosomiasis Control Initiative, but their page on SCI is littered with qualifications, concerns, and “We don’t know”s.

GiveWell had to wrestle with these complications despite the fact that it chose to search under the streetlight. Global health interventions are among the easiest interventions to analyze, and have often been subjected to multiple randomized controlled trials and dozens of experimental studies. Such high-quality evidence usually isn’t available when trying to estimate the cost-effectiveness of, say, certain forms of political activism.

GiveWell co-founder Holden Karnofsky suspects that the best giving opportunities are not in the domain of global health, but GiveWell began its search in global health, under the streetlight, in part because the evidence was clearer there.2

It’s difficult to do counterfactual history, but I suspect GiveWell made the right choice. While investigating global health, GiveWell has learned many important lessons about effective altruism, lessons that would have been more difficult to learn with the same clarity had they begun with even-more-challenging domains like meta-research and political activism. But now that they’ve learned those lessons, they’re beginning to push into the shadows, where the evidence is less clear, via GiveWell Labs.

MIRI on stability under self-modification

MIRI’s mission is “to ensure that the creation of smarter-than-human intelligence has a positive impact.”

Many different interventions have been proposed as methods for increasing the odds that smarter-than-human intelligence has a positive impact, but for several reasons MIRI decided to focus its efforts on “Friendly AI research” during 2013.

The FAI research program decomposes into a wide variety of technical research questions. One of them is the question of stability under self-modification:

How can we ensure that an AI will serve its intended purpose even after repeated self-modification?

This is a challenging and ill-defined question. How might we make progress on such a puzzle?

For puzzles such as this one, Scott Aaronson recommends a strategy he calls “bait and switch”:

[Philosophical] progress has almost always involved a [kind of] “bait-and-switch.” In other words: one replaces an unanswerable philosophical riddle Q by a “merely” scientific or mathematical question Q′, which captures part of what people have wanted to know when they’ve asked Q. Then, with luck, one solves Q′… this process of “breaking off” answerable parts of unanswerable riddles, then trying to answer those parts, is the closest thing to philosophical progress that there is.

Successful examples of this breaking-off process fill intellectual history. The use of calculus to treat infinite series, the link between mental activity and nerve impulses, natural selection, set theory and first-order logic, special relativity, Gödel’s theorem, game theory, information theory, computability and complexity theory, the Bell inequality, the theory of common knowledge, Bayesian causal networks — each of these advances addressed questions that could rightly have been called “philosophical” before the advance was made.

The recent MIRI report on Tiling Agents performs one such “bait and switch.” It replaces the philosophical puzzle “How can we ensure that an AI will serve its intended purpose even after repeated self-modification?” (Q) with a better-specified formal puzzle on which it is possible to make measurable progress: “How can an agent perform perfectly tiling self-modifications despite Löb’s Theorem?” (Q′).

This allows us to state at least three crisp technical problems: Löb and coherent quantified belief (sec. 3 of ‘Tiling Agents’), nonmonotonicity of probabilistic reasoning (secs. 5.2 & 7), and maximizing/satisficing not being satisfactory for bounded agents (sec. 8). It also allows us to identify progress: formal results that mankind had not previously uncovered (sec. 4).
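For readers unfamiliar with the first of those problems, here is a minimal statement of the obstacle, using standard provability-logic notation rather than the report’s own formalism. Löb’s Theorem says that for a sufficiently strong formal theory $T$ and any sentence $P$:

\[
\text{if } T \vdash \Box_T P \rightarrow P \text{, then } T \vdash P,
\]

where $\Box_T P$ abbreviates “$P$ is provable in $T$.” A parent agent reasoning in $T$ would like to trust its successor by proving the soundness schema $\Box_T \phi \rightarrow \phi$ for all action-relevant sentences $\phi$. But if a consistent $T$ proved that schema for every $\phi$, Löb’s Theorem would let $T$ prove every such $\phi$, including $\bot$, making $T$ inconsistent. The naive route to self-trust is therefore blocked, and this is the obstacle the Tiling Agents report works to get around.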

Of course, even if Q′ is eventually solved, we’ll need to check whether there are other pieces of Q we need to solve. Or perhaps Q will have been dissolved by our efforts to solve Q′, similar to how the question “What force distinguishes living matter from non-living matter?” was dissolved by 20th-century biology.

Notes

1 Karnofsky (2011) suggests that it may often be best to start under the streetlight and stay there, at least in the context of effective altruism. Karnofsky asks, “What does it look like when we build knowledge only where we’re best at building knowledge, rather than building knowledge on the ‘most important problems?’” His reply is: “Researching topics we’re good at researching can have a lot of benefits, some unexpected, some pertaining to problems we never expected such research to address. Researching topics we’re bad at researching doesn’t seem like a good idea no matter how important the topics are. Of course I’m in favor of thinking about how to develop new research methods to make research good at what it was formerly bad at, but I’m against applying current problematic research methods to current projects just because they’re the best methods available.” Here’s one example: “what has done more for political engagement in the U.S.: studying how to improve political engagement, or studying the technology that led to the development of the Internet, the World Wide Web, and ultimately to sites like Change.org...?” I am sympathetic with Karnofsky’s view in many cases, but I will give two points of reply with respect to my post above. First, in the above post I wanted to focus on the question of how to tackle difficult questions, not the question of whether difficult questions should be tackled in the first place. And conditional on one’s choice to tackle a difficult question, I recommend one start under the streetlight and push into the shadows. Second, my guess is that I’m talking about a broader notion of the streetlight effect than Karnofsky is. For example, I doubt Karnofsky would object to the process of tackling a problem in theoretical computer science or math by trying to solve easier, related problems first.

2 In GiveWell’s January 24th, 2013 board meeting (starting at 6:35 in the MP3 recording), GiveWell co-founder Holden Karnofsky said that interventions outside global health are “where we would bet today that we’ll find… the best giving opportunities… that best fulfill GiveWell’s mission as originally [outlined] in the mission statement.” This doesn’t appear to be a recently acquired view of things, either. Starting at 22:47 in the same recording, Karnofsky says “There were reasons that we focused on [robustly evidence-backed] interventions for GiveWell initially, but… the [vision] I’ve been pointing to [of finding giving opportunities outside global health, where less evidence is available]… has [to me] been the vision all along.” In personal communication with me, Karnofsky wrote that “We sought to start ‘under the streetlight,’ as you say, and so focused on finding opportunities to fund things with strong documented evidence of being ‘proven, cost-effective and scalable.’ Initially we looked at both U.S. and global interventions, and within developing-world interventions we looked at health but also economic empowerment. We ended up focusing on global health because it performed best by these criteria.”