The Assumed Intent Bias

Summary: when thinking about the behavior of others, people seem to have a tendency to assume clear purpose and intent behind it. In this post I argue that this assumption of intent is quite often incorrect, and that a lot of behavior exists in a gray area where it’s easily influenced by subconscious factors.

This consideration is not new at all and relates to many widely known effects, such as the typical mind fallacy, the false consensus effect, black-and-white thinking, and the concept of trivial inconveniences. It still seems valuable to me to clarify this particular bias with some graphs, and to have it available as a post one can link to.

Note that “assumed intent bias” is not an established name; as far as I can tell, there is no commonly used name for the bias I’m referring to.

The Assumed Intent Bias

Consider three scenarios:

  1. When I quit my previous job, I was allowed to buy my work laptop from the company for a low price, and did so. In principle the company’s admins should have wiped my laptop beforehand, but they left that to me, apparently reasoning that if I had had any intent whatsoever to do something shady with the company’s data, I could easily have made a copy earlier anyway. They further assumed that anyone without a clear intention of stealing the company’s data would surely do the right thing and wipe the device themselves.

  2. At a different job, we continuously A/B-tested changes to our software. One development team decided to change a popular feature so that using it required a double click instead of a single click. They reasoned that this shouldn’t affect how much users used the feature, because anyone who wants to use it can still easily do so, and nobody in their right mind would say “I will use this feature if I have to click once, but two clicks are too much for me!”. (The A/B test data later showed that usage of the feature dropped significantly after that change.)

  3. In debates about gun control, gun enthusiasts sometimes make an argument roughly like this: gun control doesn’t increase safety, because potential murderers who want to shoot somebody will find a way to get their hands on a gun anyway, whether guns are easily and legally available or not.[1]

These three scenarios all have a similar shape: some person or group (the admins; the development team; gun enthusiasts) makes a judgment about the potential behavior (stealing sensitive company data; using a feature; shooting someone) of somebody else (departing employees; users; potential murderers), and assumes that the behavior in question happens, or doesn’t happen, with full intentionality.

According to this view, if you plotted the number of people who have a particular level of intent with regard to some particular action, the graph might look somewhat like this:

[Figure: The assumed intent bias: assuming that every person out there has a clear intention to act in a certain way (value of 1 on the x axis), or not to act in that way (value of 0). E.g. a product designer may assume that every user who uses their product has a clear idea of what they want to achieve and how to use the software to do so, so the user will have a plan in mind that either involves using a particular feature or not.]

This graph would represent a situation where practically every person either has a strong intention to act in a particular way (the peak on the right) or a strong intention not to act that way (the peak on the left).

And indeed, in such a world, relatively weak interventions such as “triggering a feature on double click instead of single click” or “making it more difficult to buy a gun” may not end up being effective. Think of an action threshold: the level of intent above which a person actually carries out the action. A weak intervention moves that threshold slightly to the left or right, but in this world that wouldn’t actually change anybody’s behavior, as everyone stays on the same side of the threshold. Everybody would still act the same way they otherwise would.

[Figure: In the world of the assumed intent bias, weak interventions that slightly move the action threshold don’t really matter: people are far away from the middle area of indecision, so they remain unaffected by the intervention.]
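As a minimal sketch of this, here is a small Python simulation of such an all-or-nothing world. The distribution shape, the threshold of 0.5, and the size of the nudge are illustrative assumptions of mine, not data from the scenarios above:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical all-or-nothing world: everyone's intent sits near 0 or near 1.
intent = np.concatenate([
    rng.normal(0.05, 0.02, n // 2),  # clear intention not to act
    rng.normal(0.95, 0.02, n // 2),  # clear intention to act
]).clip(0, 1)

threshold = 0.5  # level of intent above which a person takes the action
shift = 0.02     # a weak intervention nudges the threshold slightly

acts_before = intent >= threshold
acts_after = intent >= threshold + shift
print(f"behavior changes for {np.mean(acts_before != acts_after):.2%} of people")
# Prints ~0.00%: in this world, the weak intervention changes nobody's behavior.
```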

However, I think that in many, if not most, real-life scenarios, the graph actually looks more like this:

[Figure: Sometimes most people have a somewhat clear intent, but a considerable number of people exist in between as well. An example may be vegetarianism: most people have a strong intention to eat meat (left peak), while vegetarians have a strong intention not to (right peak). But there are surely some people somewhere in between, “on the edge” so to speak; for those, weak interventions might make the difference between actually going vegetarian or not.]

Or even this:

[Figure: In some cases, the majority of people may not care much at all about some behavior, and hence have no clear intent in either direction. An example of this may be which brand of napkins people buy. While there are surely some people who either love or hate any given napkin brand, most people will just buy whichever is most easily available, so even small interventions, such as placement in the store, the color of the packaging, or small differences in pricing, can shift people’s behavior.]

In these cases, only a relatively small number of people have a clear and strong intention with regard to the behavior, and a lot of people are somewhere in between. Here it becomes apparent that even subtle interventions have an impact: moving the action threshold slightly in either direction means that some people now end up on the other side of the threshold. Their intention was just weak enough that a tiny additional inconvenience averts them from the behavior, or, vice versa, just strong enough that making the action a tiny bit easier causes them to follow it.

[Figure: If people without a clear intention are distributed over the whole spectrum of possible intent levels, then even small changes to the action threshold (such as making the behavior in question slightly easier/more difficult/more obvious/less appealing/…) will have real effects on people’s behavior. The people in the area marked red end up on the other side of the action threshold, so they are affected by the intervention, even if they are not consciously aware of it.]

The impact of such interventions then depends on the distribution of people’s intent levels: the more people are undecided or indifferent about the given behavior, the greater the number of individuals who will react to interventions that move the threshold.
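To make this concrete, here is the same kind of toy simulation extended to three hypothetical intent distributions, loosely matching the graphs above. Again, all shapes and parameters are illustrative assumptions rather than real data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Three hypothetical intent distributions, loosely matching the graphs above:
all_or_nothing = np.concatenate([
    rng.normal(0.05, 0.02, n // 2),        # clear intent not to act
    rng.normal(0.95, 0.02, n // 2),        # clear intent to act
]).clip(0, 1)
some_undecided = np.concatenate([
    rng.normal(0.10, 0.05, int(n * 0.4)),  # strong intent not to act
    rng.normal(0.90, 0.05, int(n * 0.4)),  # strong intent to act
    rng.uniform(0.0, 1.0, int(n * 0.2)),   # people "on the edge"
]).clip(0, 1)
mostly_indifferent = rng.normal(0.5, 0.15, n).clip(0, 1)  # most barely care

def fraction_flipped(intent, threshold=0.5, shift=0.02):
    """Fraction of people whose behavior changes when an intervention
    moves the action threshold by `shift`."""
    return np.mean((intent >= threshold) != (intent >= threshold + shift))

for name, dist in [("all-or-nothing", all_or_nothing),
                   ("some undecided", some_undecided),
                   ("mostly indifferent", mostly_indifferent)]:
    print(f"{name:>18}: {fraction_flipped(dist):.2%} affected")
# Roughly: ~0% in the all-or-nothing world, ~0.4% with an undecided
# middle, and ~5% when most people sit near the threshold.
```

The same small nudge to the threshold affects nobody in the first world, but a real fraction of people in the other two; the more mass near the threshold, the larger the effect.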

Of course, and I think this is partly where the bias comes from, such cases are less salient and more difficult to imagine. It’s easy to think of a person who clearly wants to use a given feature of my software, or of one who doesn’t need it. But what does a user look like who uses the feature when it requires a single click, but not when it requires a double click? The data clearly shows that these people exist, but I couldn’t easily point to a single person for whom that change is decisive in determining their behavior. What’s more, these people themselves are probably not even aware that the minor change affected them.

How to Avoid the Assumed Intent Bias

As with many biases, knowing about the assumed intent bias is half the battle. In my experience, the most common way people fall victim to this bias is by arguing that certain environmental changes, e.g. ones affecting how convenient, easy, or obvious it is to follow some particular behavior, will not have any impact on how people actually behave. This view only makes sense under the naive assumption that every person has a clear intent with regard to that action. It falls apart once you take into account that in most scenarios people are actually distributed over the whole spectrum of intention levels.

Software development and policy are domains in which this bias occurs rather frequently. But it basically affects any area that deals with influencing (or understanding) the behavior of others.

If you’re aware of the importance of trivial inconveniences and are used to nuanced thinking, then you’re probably on the safe side. But it still doesn’t hurt to stay on the lookout for people (including yourself) arguing about the intentions of unidentified others. And if this happens, be aware that it’s easy to assume intent where there is little.


A lack of intent – in this case meaning being close to 0.5 on the graphs above – can mean at least two things: that people are simply undecided about something, or that they are unaware of it and/or don’t care. Most people don’t care about the things you care about to the same degree you do. If I’ve been working for Generic Napkin Company for years and love the product, it’s easy for me to assume that most of our customers also care deeply about it. It might not occur to me that the vast majority of customers who buy the product don’t even know my company’s name, and would barely notice if their store decided to replace the product with that of a competing brand. They don’t spare a second thought for these napkins; they buy them because it’s the convenient and simple thing to do, and then forget about it. And while this is not necessarily representative of most other decisions humans make, it probably doesn’t hurt to realize, when thinking about how to change other people’s behavior, that most people care much less about the decision than we do.

(Update 2023-11-06: fixed a detail in the second graph)

  1. ^

    Of course this is not the only or even the main argument; I don’t mean to make any argument for or against gun control here, but just to point out that this one particular argument is flawed.