Necessary, But Not Sufficient

There seems to be something odd about how people reason about themselves, compared to the way they examine problems in other domains.

In mechanical domains, we seem to have little problem with the idea that things can be “necessary, but not sufficient”. For example, if your car fails to start, you will likely know that several things are necessary for the car to start, but not sufficient for it to do so. It has to have fuel, ignition, compression, and oxygen… each of which in turn has further necessary conditions, such as an operating fuel pump, electricity for the spark plugs, electricity for the starter, and so on.

And usually, we don’t go around claiming that “fuel” is a magic bullet for fixing the problem of car-not-startia, or argue that if we increase the amount of electricity in the system, the car will necessarily run faster or better.

For some reason, however, we don’t seem to apply this sort of necessary-but-not-sufficient thinking to systems above a certain level of complexity… such as ourselves.

When I wrote my previous post about the akrasia hypothesis, I mentioned that there was something bothering me about the way people seemed to be reasoning about akrasia and other complex problems. And recently, with taw’s post about blood sugar and akrasia, I’ve realized that the specific thing bothering me is the absence of causal-chain reasoning there.

When I was a kid, I remember reading about a scientist who said that the problem with locating brain functions by observing what’s impaired when a given area is damaged is that it’s like opening up a TV set and taking out a resistor. If the picture goes bad, you might then conclude that the resistor is the “source of pictureness”, when all you have really proved is that the resistor (or brain part) is necessary for pictureness.

Not that it’s sufficient.

And so, in every case where an akrasia technique works for you—whether it’s glucose or turning off your internet—all you have really done is the equivalent of putting the missing resistor back into the TV set.

This is why “different things work for different people” in different circumstances. And it’s why “magic bullets” are possible, like vitamin C as a cure for scurvy. When you fix a deficiency (as long as it’s the only deficiency present), it seems like a “magic” fix.

But, just because some specific deficiency creates scurvy, akrasia, or no-picture-on-the-TV-ia, this doesn’t mean the resistor you replaced is therefore the ultimate, one true source of “pictureness”!

Even if you’ve successfully removed and replaced that resistor repeatedly, in multiple televisions under laboratory conditions.

Unfortunately, it seems that thinking in terms of causal chains like this is not really a “natural” feature of human brains. And upon reflection, I realize that I only learned to think this way because I studied the Theory of Constraints (ToC) about 13 years ago, and I also had a mentor who drilled me in some aspects of its practice, even before I knew what it was called.

But, if you are going to reason about complex problems, it’s a very good tool to have in your rationalist toolkit.

Because what the Theory of Constraints teaches us about problem solving is that if you can reason well enough about a system to identify which necessary-but-not-sufficient conditions are currently deficient (or underpowered relative to the whole), then you will be able to systematically create your own “magic bullets”.
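To make the shape of that reasoning concrete, here is a minimal sketch (my own illustration, not anything from the ToC literature): treat the system as a set of named necessary conditions, evaluate each one, and see which are currently failing. The car example from earlier works fine for this; all the names and thresholds below are hypothetical.

```python
# A toy "necessary but not sufficient" diagnosis. Every condition is
# necessary; the system works only if all of them hold. Fixing one failed
# condition is a "magic bullet" only when it is the sole deficiency.

def diagnose(state, necessary_conditions):
    """Return the names of the conditions that currently fail."""
    return [name for name, check in necessary_conditions.items()
            if not check(state)]

# Hypothetical car with an empty tank *and* a weak battery.
car = {"fuel_level": 0.0, "battery_charge": 0.2, "compression_ok": True}

necessary = {
    "fuel":        lambda s: s["fuel_level"] > 0.05,
    "spark":       lambda s: s["battery_charge"] > 0.5,
    "compression": lambda s: s["compression_ok"],
}

print(diagnose(car, necessary))  # ['fuel', 'spark'] -- adding fuel alone won't start it
```

The point isn’t the code, of course; it’s that “find the deficient necessary condition” is a diagnosis step, and only when the list of failures has exactly one entry does fixing it look like magic.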

So, I encourage you to challenge any fuzzy thinking you see here (or anywhere) about “magic bullets”, because a magic bullet is only effective when it applies to the only insufficiency present in the system under consideration. And having found one magic bullet is not equivalent to actually understanding the problem, let alone understanding the system as a whole. (Which, by the way, is also why self-help advice is so divergent: it reflects a vast array of possible deficiencies in a very complex system.)