Adjectives from the Future: The Dangers of Result-based Descriptions

Less Skeptical

Suppose your friend tells you he’s on a weight-loss program. What do you think will happen in three months if he stays on the weight-loss program? Will he lose weight?

If you’re like me, you’re thinking, “Of course. He is on a weight-loss program, isn’t he? So, ipso facto, he is likely to lose weight.”

Does there seem to be anything fishy about that chain of reasoning?

We usually describe the current features of a thing and predict something about the future. For example, we might say “I’m running for half an hour each day” and predict that we will lose a certain number of pounds by the end of the month. But your friend above skipped the description and talked about the prediction as if it were visible right now: “I’m on a weight-loss program”.

You weren’t told the features of the activity (running for half an hour) or even a name (CrossFit program). If you had been told either, you could have judged it based on your past knowledge of those features or names. Running regularly does help you lose weight and so does CrossFit. But, here, you were told just the prediction itself. This means you can’t predict anything for sure. If his program involves running, he will lose weight; if it involves eating large cheese pizzas, he won’t. You don’t know which it is.

Yet, it sounded quite convincing! Even if you objected that your friend probably wouldn’t stick to the exercise regimen, you probably bought into the premise, like me, that the program was a weight-loss program.

Hypothesis: If you are given an adjective that describes a future event and are not given any currently-visible features, then you’re more likely to accept that the future event will occur than you would be if you could see some features.

In other words, result-based descriptions make you less skeptical.

A more serious example is when someone mentions a drug-prevention program. We might assume that it will prevent illegal drugs from being bought and sold. After all, it must have been designed for that purpose. But the result depends on what the program actually does. Running ads saying “Don’t do drugs!” may not achieve much, whereas inspecting trucks at border checkpoints may. To judge whether the program will be successful, you have to inspect its actual features. But “drug-prevention program” sounded convincing, right? Notice how the adjective “drug-prevention” describes a future event—it says that drugs will be prevented in the future. Now, since you can’t look into the future and tell whether drugs were in fact prevented, you shouldn’t accept such an adjective. And since you’re not told anything else about the program, you really can’t say anything either way. And yet it sounds so convincing!

Similarly, take environment-protection laws. Again, surely they must have been designed for the purpose of protecting the environment. Don’t you feel like they will protect the environment? Contrast that to saying “a law that raises the tax rate on fossil fuels”. Now this may or may not protect the environment in terms of air pollution, but at least you don’t jump to that conclusion right away.

If this hypothesis is true, it means that the person who chooses the adjective can mislead you (and himself) in the direction he desires by describing the thing in terms of the result and by omitting any features.[1] Suppose someone tells you this is an earthquake-resistant building. Do you believe that it will withstand earthquakes better than ordinary buildings? I do. He may have described the thing solely in terms of the result, but it still sounded convincing, right? Contrast that to “this building is made out of steel-reinforced concrete”. Now, you have one feature of the building. If you had to predict whether it would withstand earthquakes better than ordinary buildings, you would lean towards yes because reinforced concrete has worked in the past. But you wouldn’t always jump to the conclusion that it was “earthquake-resistant”. If I said “this building is made out of green-colored brick”, you would be skeptical about its ability to withstand earthquakes better because you haven’t heard anything about brick color being relevant.

The above illusion is compounded by the fact that you won’t get feedback from others about your mistaken ideas if you use result-based descriptions[2]. Suppose your weight-loss friend assumed that using a telemarketed ab machine would help him get abs (it’s right there in the name, I tell you!). Even then, he wouldn’t have been lost if he had told you his concrete plan. You would have corrected his belief as soon as you stopped laughing at him. But since he told you that he’s using a weight-loss program, you couldn’t really correct him. He might go on behaving as if that silly “ab machine” is going to get him six-pack abs by summer.

Why do we even accept descriptions that contain nothing except a claim about the future?

For one, it matters that no features are described. If I said that I was drinking lemonade, you wouldn’t really predict that I would lose weight. You would ask me what evidence I have for lemonade causing weight loss. But what if I said I was having a weight-loss drink? You might be less skeptical as long as you didn’t look at my glass. Who knows; maybe there are drinks out there that cause weight loss.

Another relevant factor is the speaker’s credibility: how often we think the speaker sees the underlying features along with the eventual result. We accept an expert’s result-based description because we trust that he knows the features that lead to the result and is just omitting them when talking to a layman. When a doctor says that these are “sleeping pills”, we are more likely to accept it than when a schoolboy does—the doctor knows that the pills contain benzodiazepine, which usually works. When a politician calls something a “drug-prevention program”, we are more likely to accept it than when a housewife says it—the politician knows that border-checks (or whatever) have worked in the past. However, this might be misleading when the expert is dealing with something novel, such as a brand-new pill formula or a brand-new approach to drug regulations, since he is unlikely to have seen the result of those features (or may not care very much about deceiving the voters).

Finally, such descriptions might be fine when talking about the past. Saying that “I went on a weight-loss program and lost 50 pounds” is a bit redundant, but harmless. You actually observe the result there, so you can decide based on the result how skeptical to be. You won’t blindly jump to the conclusion that it will work, the way you would when someone says “I’m on a weight-loss program right now”.

So, we should avoid describing something only in terms of the result and should describe it using features instead. And if anyone tries to bias our prediction by sneaking in an adjective from the future, we should stop and ask for the features.

Examples of Adjectives from the Future

Here are some result-based descriptions that I collected from news reports and books as I was testing the above hypothesis. All of them talk about future results, completely omit current features, and seem to make us less skeptical about the plan’s success. Did you fall for any of them?


Rehabilitation program—Don’t you feel like the drug addict is likely to get better after going to the rehab program? It’s right there in the name! Notice that there are no features mentioned, just a description of the future as though it were the present. Contrast that to “not having access to drugs for 30 days, listening to lectures, and talking about your experiences”. This doesn’t make us jump to the conclusion that the addict will get better. We might even be skeptical about the power of lectures to fight off the temptation of drugs. For a real-world contrast, think of “the 12-step program”. It too tries to overcome addiction, but it is described in terms of the features (12 steps), not the desired result (overcoming addiction). In fact, it sounds like work, which it probably is. A rehabilitation program doesn’t quite sound like that.

Peace process—Feels like it is likely to lead to peace. No features; only desired results. Contrast that to “shaking hands and signing agreements in front of the world press”. We may be more skeptical that those will prevent future wars. But in the former case, we would be insulated from feedback because we keep talking about the “peace process” instead of the “hand-shaking and agreement-signing”.

Wait. Aren’t there people who distrust the peace process and talk about its possible failure? I suspect that they do so after mentioning features of the process. They might say that this dictator has reneged on his promises in the past and thus should not be trusted right now. It would sound ludicrous if they expressed skepticism without any features. People would ask, “What do you mean this peace process may not bring about peace? It’s a peace process.”

Dangerous driving—Doesn’t it seem likely that the driver is going to get into trouble? No features; no feedback; only the future result—danger. Contrast that to: one-handed driving, texting while driving, or overtaking cars by switching lanes. We are a bit more skeptical that those will cause danger.

Cost-cutting measures—Need I say anything? Of course the cost-cutting measure is going to cut costs. Why else would they have called it a cost-cutting measure? Contrast that to “switching to online advertising” or “encouraging working from home a few days a week”, which we are more skeptical about, since they may or may not bring down ultimate costs.

Healthy morning drink—No features, but it sounds like it will lead to health. Even the “morning” part is not a description of a feature of the object. It just talks about the time when people will drink it. Contrast that to a drink containing 15g of protein and other stuff, which may or may not lead to more “health”.

Recidivism-reduction classes for ex-convicts, i.e., making sure they don’t go back to jail after getting out—Again, we feel like these classes will make them less likely to go back in. The classes reduce recidivism, after all. No features mentioned; description in terms of the future result (recidivism-reduction); insulated from feedback. Contrast that to “lectures and reading books and stuff”. We might be much more skeptical.

You can find any number of examples like these: a national-security bill vs a bill that increases the number of fighter jets; a sufficiently well-funded program vs the same budget as last year (which may not be enough this year); a Sudoku-solving program vs a program that solved a set of easy and medium Sudoku puzzles (see the sketch below).
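To make that last contrast concrete, here is what a feature-based description of a Sudoku program might look like in code. This is a minimal illustrative sketch of my own (a plain brute-force backtracking solver), not a description of any particular product.

```python
# Hypothetical sketch: a Sudoku program described by its features
# (brute-force backtracking), not by its result (“Sudoku-solving”).
# A grid is a 9x9 list of lists, with 0 marking an empty cell.

def is_valid(grid, r, c, digit):
    """Check whether placing digit at (r, c) breaks a row, column, or box."""
    if digit in grid[r]:
        return False
    if any(grid[i][c] == digit for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)  # top-left corner of the 3x3 box
    return all(grid[br + i][bc + j] != digit
               for i in range(3) for j in range(3))

def solve(grid):
    """Fill empty cells in place; return True if a full solution is found."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for digit in range(1, 10):
                    if is_valid(grid, r, c, digit):
                        grid[r][c] = digit
                        if solve(grid):
                            return True
                        grid[r][c] = 0  # undo and backtrack
                return False  # no digit fits this cell: dead end
    return True  # no empty cells left
```

Described by its features, the program invites the right questions: exhaustive trial and error should crack easy and medium puzzles quickly, but might it crawl on an adversarially constructed one? The result-based name never prompts those questions.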

How does this apply to LessWrong?

Now, let’s look at some descriptions that may be important to us as LessWrong readers.

Effective altruism—Doesn’t effective altruism feel like it will be effective? And altruistic? I feel inclined to believe so. But the name talks about the future results and doesn’t mention any current features. Contrast that to “cash transfers” or even “evidence-based donations” and “evidence-based job changes”, which talk about currently-available evidence, not future results. We may be more skeptical that such cash transfers or donations will be effective or even altruistic. “Cause prioritization” talks about a feature of the process right now. We can see a clear gap between the causes we prioritize and their eventual effectiveness. That gap doesn’t even seem to exist when we talk about effective altruism.

When I hear “Against Malaria Foundation”, I feel like it is likely to strike a blow against malaria. All it needs is the money. But if I were to hear “Mosquito Net Distributors”, I would ask quite a few questions about the effectiveness of mosquito nets. I may indeed get convinced that a dollar spent on nets will go farther than on other methods to fight malaria, but I won’t jump to that conclusion. I may even think of how it might backfire or how mosquitoes might adapt. Not so with “Against Malaria Foundation”.

Notice how adjectives from the future could make a cause immune to feedback. If you were to mention that you won’t donate to, say, AMF, people could raise their eyebrows and ask, “Are you seriously against fighting malaria?” But if you mention the means, you can safely say that you are in favour of fighting malaria, but against focusing on mosquito nets.

Finally, if “Mosquito Net Distributors” sounds a bit too sober because it doesn’t mention its purpose, perhaps we could combine the two as “Mosquito Nets to Fight Malaria”. [3]

Rationality techniques—When I see the term “rationality technique” or “rationality training” or “methods of rationality”, I feel like the technique will lead to good, if not optimal, results. It doesn’t describe any features, after all; it just promises that good things will happen in the future. Contrast that to experimentation techniques or logical deduction. Those talk about the features of the process, and I don’t assume that they will always get me the best results, since I know I might miss a confounding variable or apply rules incorrectly. I’m not quite as skeptical when I hear about the “methods of rationality”.

Even when I look at concrete technique names, hearing about the CFAR technique of “Comfort Zone Expansion (CoZE)” makes me feel like it will actually expand my “comfort zone”. But it doesn’t mention any features; just the desired future result. Contrast that to “doing for an hour, in public, a few things you avoided doing in the past”. Now, I pause when I ask myself if it will help me do what you or I may actually care about: ask a boss for a raise, tell an annoying colleague to shove it, or ask out a crush. I can tell that there is quite a gap between lying down on the pavement for 30 seconds and doing something that might jeopardize my work life. But when I hear “Comfort Zone Expansion”, I really do feel like my “comfort zone” will be expanded, meaning that I will do those kinds of things more frequently. Why not call it “uncomfortable-action practice” or the original “exposure therapy”?

Brain emulation or brain-emulating software—“How sure are you that brain emulations would be conscious?” (source)

My immediate response is that, of course, brain emulations would be conscious. If human brains are conscious (whatever that means) and if human brain emulations emulate human brains, then those would also be conscious. The very term seems to dispose me to a particular answer. It doesn’t describe any present features, just the desired future results—that the program will behave like a human brain in most respects.

Imagine if we used a term that talked only about observable tests, whichever ones you want: “How sure are you that, say, a DARPA Grand Challenge-winning program would be conscious?” Suddenly, we are given two separate variables and asked to bridge the gap between them. That gives us a lot more room for skepticism. We can see that there could be many a slip between its present features and its future results.

I reason just as naively about claims that whole brain emulation can be “an easy way to create intelligent computers” or that it will acquire the “information contained within a brain”, since human brains are already intelligent and already contain information. Given that this is a field where no one has succeeded, i.e., no one has emulated a human brain, we should take pains to avoid terms that make us less skeptical.

Optimization power—Lastly, take this description of a car design: “To hit such a tiny target in configuration space requires a powerful optimization process. The better the car you want, the more optimization pressure you have to exert—though you need a huge optimization pressure just to get a car at all.”

I find myself agreeing with that. A car that travels fast is highly optimized, so of course it would need a powerful optimization process.

Unfortunately, “optimization process” does not describe any present features of the process itself. It simply says that the future result will be optimized. So, if you want something highly optimized, you’d better find a powerful optimizer. Seems to make sense even though it’s a null statement! But if you describe any features, as in “the design of a car requires 1 teraflop of computing power for simulation”, I immediately ask, is that too little computing power? Too much? I become a lot more skeptical.

Again, this suggests that, in such a novel domain, we should be more careful about avoiding result-based descriptions like “optimization power”, “superintelligence”, and “self-improving AI”.

Should we always avoid result-based descriptions?

No. I don’t think it’s possible and I don’t think people would want it. Like I said above, when I go to the doctor, I may just want “sleeping pills”, not “benzodiazepine”. Speaking about the latter would be a waste of time for the doctor and for me, provided I trust him. But what if I don’t trust the person or if he’s deluded himself?

I would reserve this technique for occasions when you’re weighing an important pitch: a pitch that asks for a big investment, whether in business, politics, or social circumstances. People may try to convince us to accept a “career-defining opportunity” (instead of a shift to another department, which may not define your career) or a “jobs-for-the-poor program” (instead of a law that reserves X% of infrastructure jobs, which may not be filled and may not employ all the poor) or a “life-changing experience” (instead of skydiving for six minutes, which may or may not change your life much).

When it comes to our own usage, as people who want to portray an accurate map of reality, we should avoid result-based descriptions that might mislead others and, most importantly, ourselves. Marketing may demand a title that sounds catchy, but you have to decide whether you want to risk deceiving others, especially when you’re pitching an idea that will ask them to invest a lot.

What’s in a name? Isn’t it ok to have the name based on the result as long as the contents tell you the features? Well, that would be ok if people always mentioned the contents. But we usually omit the contents when referring to something and someone who is new or busy may not look at the contents. Thus they (and we) might get misled into predicting the result based on the title. A person donating to an organization or paying for a workshop may see only the title, perhaps a few testimonials from friends, and maybe some headings on the website. If all of these descriptions are result-based, he might think that the organization or workshop does, in fact, have a good chance of delivering those results. If he had been given the features, maybe he would have been much more skeptical.


Let me know your thoughts below. Does the basic hypothesis seem valid? What about some of its implications?

Edit: Made it clearer that I’m claiming result-based descriptions make you less skeptical, not that they convince you absolutely.


  1. ↩︎

    There is a similar phenomenon in goal-setting, where a distinction is drawn between outcome goals (such as losing 10 pounds) and process goals (such as going to the gym four times a week). However, the focus there is on which goal-setting style is more effective in getting results. My focus here is on which type of description makes you more gullible. The two may be related.

  2. ↩︎

    Isn’t “result-based description” itself a result-based description, an adjective from the future? I don’t think so. It’s something you can observe right now. Specifically, if the description isn’t fully determined by past features, then it’s a result-based description. (Contrast that to “misleading description”.)

  3. ↩︎

    And, yes, “mosquito net” is itself a result-based description, since you expect it to keep out mosquitoes, but at least it mentions one feature—the net.