Value of the Long Tail

Suppose we have a self-driving car which works 99% of the time. A human driver only needs to intervene to prevent an accident on one trip out of 100. How much economic value does this generate, relative to a fully self-driving car that is human-level or better?

I would guess less than 10%, maybe even less than 1% of the value. Why? Because with a 1% error rate, we still need a human driver, and the vast majority of the value of self-driving cars comes from removing the human. Things like automated emergency braking, parallel parking, warning systems, or cruise control definitely have some value—but they’re nowhere near the value of giving every worker in the United States one extra hour every weekday (roughly the average round-trip commute time).
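To make the magnitude concrete, here’s a minimal back-of-envelope sketch in Python. Every figure in it (labor force size, commute length, dollar value of an hour) is my own rough illustrative assumption, not a number from any particular source:

```python
# Rough back-of-envelope: value of full self-driving, if it hands every
# worker their commute time back.  All numbers are illustrative assumptions.
workers = 160e6              # assumed US labor force, order of magnitude
commute_hours_per_day = 1.0  # assumed average round-trip commute
workdays_per_year = 250
value_per_hour = 30.0        # assumed dollar value of an hour of a worker's time

full_value = workers * commute_hours_per_day * workdays_per_year * value_per_hour
print(f"Full self-driving: roughly ${full_value / 1e12:.1f} trillion per year")

# A 99%-reliable car captures almost none of this, because the human still
# has to sit there watching the road; it mostly competes with driver-assist
# features like emergency braking and cruise control instead.
```

Tweak the assumptions however you like; the point is that the “remove the human” term dwarfs the driver-assist term.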

I think this sort of problem is far more common than most engineers intuitively expect. There are a lot of areas where it seems like it should be easy to automate, if not all of the work, then at least 90% of it. It looks like there are metaphorical hundred-dollar bills lying on the ground. But I think, in most of these cases, automating 90% just doesn’t generate all that much value—because you still need a human watching everything, waiting to jump in as soon as the other 10% comes up. The vast majority of the value comes from taking the human out of the loop.

Personally, I ran into this at a mortgage startup. We wanted to automate as much of the approval process as possible; we figured at least 90% of approval conditions (weighted by how often they’re needed) should be tractable. In retrospect, that was true: 90% of it was pretty tractable. But we realized that, even with the easy 90% automated, we would still need humans most of the time. The large majority of our loans had at least some “hair” on them—something which was weird and needed special handling. Sometimes it was FHA/VA subsidies (each requiring a bunch of extra legwork). Sometimes it was income from a side-gig or alimony. Sometimes it was a condition associated with the appraisal—e.g. a roof repair in-progress. Sometimes it was an ex-spouse on the title. No single issue was very common, but most loans had something weird on them. And as soon as a human needed to be in the loop, at all, most of the automation value was gone—we couldn’t offer substantive instant results.
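The underlying math is worth making explicit: no single issue has to be common for most loans to have at least one. A toy calculation, with made-up numbers rather than our actual issue rates:

```python
# Toy illustration with made-up numbers: many individually-rare issues
# still leave few "clean" loans.
p_issue = 0.05        # assume each issue type shows up on 5% of loans
n_issue_types = 30    # assume 30 independent issue types

p_clean = (1 - p_issue) ** n_issue_types   # chance a loan has no issue at all
print(f"Loans with no hair at all: {p_clean:.0%}")   # ~21%

# So even with the easy 90% of conditions automated, roughly 4 out of 5
# loans still need a human in the loop, and the instant-approval value is
# mostly gone.
```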

In general, when would we expect this problem to show up?

I see two main ways to circumvent the issue:

  • Tailor the environment to make it more predictable, e.g. factories.

  • Have one human oversee multiple tasks, and jump in when needed, e.g. attendants at automated checkouts.

The problem should show up mainly when circumvention fails. So, we’d expect the majority of automation-value to be in the long tail when:

  • The environment cannot be made more predictable—there are messy external conditions beyond the control of engineers.

  • Having one human oversee multiple tasks doesn’t help.

The first condition is both self-explanatory and common; the second condition is probably the limiting factor more often. When and why would having one human oversee multiple tasks not be helpful?

The self-driving car and mortgage examples offer some possible reasons. In both cases, reaction time is a key issue. For the car, we need a reaction fast enough to avert an accident; for the mortgage, we want to offer substantive instantaneous approval checks. In either case, drawing the attention of a human and asking them to solve the problem would take too long.

Another factor is necessary context. For some mortgages—especially those near the approval boundary—there’s a lot of interdependence between requirements. We need to document A or B or C (unless D), and B is only an option if E but not A, and so forth. The overall effect is that a human needs the whole context of the mortgage application in order to handle any one piece of it. When that happens, the value-add of automation is very low; even if some subset of the work is automated, the human still needs to figure out all the context, and that’s where most of the work is.
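To make that interdependence concrete, here’s a hypothetical toy rule in Python (not our actual underwriting logic); the letters match the A-through-E example above:

```python
# Hypothetical toy rule (not the startup's actual logic) showing the
# interdependence described above: deciding whether document B even counts
# requires knowing about A, D, and E as well.

def income_documented(app: dict) -> bool:
    """Need A or B or C, unless D waives the requirement entirely."""
    if app["D"]:                            # D: requirement waived
        return True
    b_usable = app["E"] and not app["A"]    # B only counts given E and no A
    return app["A"] or (b_usable and app["B"]) or app["C"]

# Even to check the single "is B on file?" step, a reviewer has to load the
# whole application (A, D, E, ...) into their head; that context-loading is
# where most of the manual work is, whether or not the easy pieces are automated.
example = {"A": False, "B": True, "C": False, "D": False, "E": True}
print(income_documented(example))   # True: B is usable here
```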

To summarize, the value of automation will mostly be in the long tail when:

  • There are messy external conditions beyond the control of engineers, and

  • At least one of:

    • Reaction time matters

    • Lots of context is necessary to handle any one issue

I’m curious to hear people’s thoughts on other relevant factors.