An overview of the mental model theory

There is dispute about what exactly a “mental model” is, and the concepts related to it often aren’t clarified well. One feature that is generally accepted is that “the structure of mental models ‘mirrors’ the perceived structure of the external system being modelled” (Doyle & Ford, 1998, p. 17). So, as a starting definition, we can say that mental models, in general, are representations in the mind of real or imaginary situations. A full definition won’t be attempted because there is too much contention about what features mental models do and do not have. The features that are accepted will be described in detail, which will hopefully give you an intuitive understanding of what mental models are probably like.

The mental model theory assumes that people do not innately rely on formal rules of inference, but instead rely on their mental models, which are based on their understanding of the premises and their general knowledge. A foundational principle of the mental model theory is the principle of truth, which states that “reasoners represent as little information as possible in explicit models and, in particular, that they represent only information about what is true” (Johnson-Laird & Savary, 1996, p. 69). Individuals do this to minimize the load on working memory.

Consider the following example: “There is not a king in the hand, or else there is an ace in the hand”. If we take ‘¬’ to indicate negation, then according to the principle of truth reasoners will construct only two separate models for this example: (¬ king) and (ace). Note that if a proposition is false, then its negation is true. In this example (¬ king) is true.

Looking at the mental models of this example, we can say that they, like most others, represent the literals in the premises only when those literals are true in the true possibilities, not when they are false. This means that the models do not include (king) and (¬ ace), which are false. To keep track of what is false, reasoners make mental “footnotes”. This can be problematic, as these footnotes are hard to remember, and people also tend to consider only what is represented in their mental models of a situation.
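As a concrete illustration (my own sketch in Python, not taken from the cited papers), the snippet below brute-forces the fully explicit models of the example premise; the mental models predicted by the principle of truth are simply these rows with the false literals left out.

```python
from itertools import product

# "There is not a king in the hand, or else there is an ace in the hand",
# read as an exclusive disjunction: exactly one of the two clauses holds.
def premise(king: bool, ace: bool) -> bool:
    return (not king) != ace   # XOR of "not king" and "ace"

# Fully explicit models: every assignment that makes the premise true,
# with both literals spelled out.
for king, ace in product([True, False], repeat=2):
    if premise(king, ace):
        print("king" if king else "¬king", "ace" if ace else "¬ace")

# Output:
#   king ace
#   ¬king ¬ace
# The principle of truth predicts that people keep only the true clause of each
# possibility -- the mental models (¬ king) and (ace) -- and "footnote" the rest.
```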

The mental model theory is not without its critics, and no one knows for sure how people reason, but the theory makes some predictions which have empirical support, and these predictions also lead to certain systematic fallacies which have likewise been found to have empirical support. The principal predictions of the mental model theory are that (Johnson-Laird, Girotto, & Legrenzi, 2005, pp. 11-12):

  • Reasoners normally build models of what is true, not what is false. This is known as the principle of truth and it leads to two main systematic fallacies: the illusion of possibility and the illusion of impossibility.

  • Rea­son­ers tend to fo­cus on one of the pos­si­ble mod­els of multi-model prob­lems, and are thereby led to er­ro­neous con­clu­sions and ir­ra­tional de­ci­sions. This means that they do not con­sider al­ter­na­tive mod­els and this leads to the fo­cus­ing effect which is similar to the fram­ing effect in psy­chol­ogy.

  • Reasoning is easier from one model than from multiple models. This is known as the disjunction effect.

A consequence of the principle of truth (people do not represent false possibilities in their models) is the illusion of possibility, which is demonstrated in the problem below (try to solve it):

Be­fore you stands a card-deal­ing robot. This robot has been pro­grammed to deal one hand of cards. You are go­ing to make a bet with an­other per­son on whether the dealt hand will con­tain an ace or whether it will con­tain a king. If the dealt hand is just a sin­gle queen, it’s a draw. Note that the robot is a black box. That is, you don’t know any­thing about how it works, for ex­am­ple the al­gorithm it uses. You do, how­ever, have two state­ments about what the pos­si­bil­ities of the dealt hand could be. Th­ese two state­ments are from two differ­ent de­sign­ers of the robot. The prob­lem is that you know that one of the de­sign­ers lied to you (their state­ment is always false) and the other de­signer told the truth. You don’t know which one is tel­ling the truth. This means that you know that only one of the fol­low­ing state­ments about the dealt hand is true.

  • The dealt hand will con­tain ei­ther a king or an ace (or both).

  • The dealt hand will con­tain ei­ther a queen or an ace (or both).

Based on what you know, should you bet that the dealt hand will contain an ace or that it will contain a king?

If you think that the ace is the better bet, then you would have made a losing bet, because it is impossible for an ace to be in the dealt hand. Only one statement about the dealt hand of cards is true. This fact precludes the possibility that an ace will be in the dealt hand: the ace appears in both statements, and both statements cannot be true, as per the requirement. A consequence of this is that it is impossible for the dealt hand to contain an ace, which means that it is more likely for a king to be in the dealt hand. Thus, you should have bet that the hand will contain a king.

If you still don’t be­lieve that the king is the bet­ter bet, then see this post where I go into the ex­plicit de­tails on this prob­lem.
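Alternatively, you can check it by brute force. The short Python sketch below (my own illustration; it only tracks whether each of the three relevant cards is present, which is all the bet depends on) enumerates the hands consistent with exactly one designer telling the truth:

```python
from itertools import product

def statement_1(king, queen, ace):
    return king or ace          # "a king or an ace (or both)"

def statement_2(king, queen, ace):
    return queen or ace         # "a queen or an ace (or both)"

consistent_hands = []
for king, queen, ace in product([True, False], repeat=3):
    # Exactly one designer told the truth, so exactly one statement holds.
    if [statement_1(king, queen, ace), statement_2(king, queen, ace)].count(True) == 1:
        consistent_hands.append({"king": king, "queen": queen, "ace": ace})

print(any(hand["ace"] for hand in consistent_hands))    # False: an ace is impossible
print(any(hand["king"] for hand in consistent_hands))   # True:  a king is possible
```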

Confidence in solving the problem has been ruled out as a contributing factor to the illusion. People’s confidence in their conclusions did not differ reliably between the control problems and the problems that were expected to induce the illusion of possibility (Goldvarg & Johnson-Laird, 2000, p. 289).

Another example (Yang & Johnson-Laird, 2000, p. 453):

Only one of the fol­low­ing premises is true about a par­tic­u­lar hand of cards:

  1. There is a king in the hand or there is an ace, or both

  2. There is a queen in the hand or there is an ace, or both

  3. There is a jack in the hand or there is a ten, or both

Is it pos­si­ble that there is an ace in the hand?

Nearly everyone responds “yes”, which is incorrect. The presence of an ace would render two of the premises true, and, as per the given rule, only one of the premises is true. This means that there cannot be an ace in the hand. The reason, in summary, for why people respond incorrectly is that they consider the first premise and conclude that an ace is possible. Then they consider the second premise and reach the same conclusion, but they fail to consider the falsity of the other premise, i.e. if premise 1 is true then premise 2 must be false to fulfil the requirement.

The illu­sion of im­pos­si­bil­ity is the op­po­site of the illu­sion of pos­si­bil­ity. It is demon­strated in the be­low prob­lem:

Only one of the fol­low­ing premises is true about a par­tic­u­lar hand of cards:

  1. If there is a king in the hand, then there is not an ace

  2. If there is a queen in the hand, then there is not an ace.

Is it possible that there is a king and an ace in the hand?

People commonly answer “no”, which is incorrect. As you can see in the fully explicit models below, rows three and four contain an ace, and the fourth contains both a king and an ace.

Mental Models:

  K   ¬A
  Q   ¬A

Fully Explicit Models:

  K   ¬A
  Q   ¬A
  ¬K   Q   A
  K   ¬Q   A
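The same kind of brute-force check works here. The Python sketch below (my own illustration) enumerates the hands in which exactly one of the two conditionals is true, and confirms that a hand with both a king and an ace is among them:

```python
from itertools import product

def premise_1(king, queen, ace):
    return (not ace) if king else True    # "If there is a king, then there is not an ace"

def premise_2(king, queen, ace):
    return (not ace) if queen else True   # "If there is a queen, then there is not an ace"

possible_hands = []
for king, queen, ace in product([True, False], repeat=3):
    # Only one of the two premises is true.
    if [premise_1(king, queen, ace), premise_2(king, queen, ace)].count(True) == 1:
        possible_hands.append((king, queen, ace))

print(possible_hands)
# [(True, False, True), (False, True, True)]  i.e. (K ¬Q A) and (¬K Q A)
print(any(king and ace for king, queen, ace in possible_hands))   # True
```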

It has been found that illusions of possibility are more compelling than illusions of impossibility (Goldvarg & Johnson-Laird, 2000, p. 291). The purported reason for this is that “It is easier to infer that a situation is possible as opposed to impossible.” (Johnson-Laird & Bell, 1998, p. 25) This is because possibility requires only that one model of the premises is true, whereas impossibility depends on all the models being false. So, reasoners respond that a situation is possible as soon as they find a model that satisfies it, but to find out whether a situation is impossible they often need to flesh out their models more explicitly, which reduces the tendency to exhibit the illusion of impossibility.

We have already dis­cussed that when peo­ple con­struct men­tal mod­els they make ex­plicit as lit­tle as pos­si­ble (prin­ci­ple of truth). Now, we will look at their propen­sity to fo­cus only on the in­for­ma­tion which is ex­plicit in their mod­els. This is called the fo­cus­ing effect.

Fo­cus­ing is the idea that peo­ple in gen­eral fail to make a thor­ough search for al­ter­na­tives when mak­ing de­ci­sions. When faced with the choice of whether or not to carry out a cer­tain ac­tion, they will con­struct a model of the ac­tion and an al­ter­na­tive model, which is of­ten im­plicit, in which it does not oc­cur, but they will of­ten ne­glect to search for in­for­ma­tion about al­ter­na­tive ac­tions. Fo­cus­ing has been found to be re­duced by ma­nipu­la­tions which make the al­ter­na­tives more available. (Le­grenzi, Girotto, & John­son-Laird, 1993)

Fo­cus­ing leads peo­ple to fail to con­sider pos­si­bil­ities that lie out­side of their mod­els. A con­se­quence of this is that they can over­look the cor­rect pos­si­bil­ity. If you know noth­ing about the al­ter­na­tives to a par­tic­u­lar course of ac­tion, then you can nei­ther as­sess their util­ities nor com­pare them with the util­ity of the ac­tion. Hence, one can­not make a ra­tio­nal de­ci­sion.

The con­text in which de­ci­sions are pre­sented can de­ter­mine the at­tributes that in­di­vi­d­u­als will en­quire about be­fore mak­ing a de­ci­sion. Con­sider the choice be­tween two re­sorts:

  • Re­sort A has good beaches, plenty of sun­sh­ine, and is easy to get to.

  • Re­sort B has good beaches, cheap food, and com­fortable ho­tels.

The focusing hypothesis implies that people will seek out information only about the attributes already mentioned in the options, fleshing out the missing values rather than enquiring about new attributes. For example, because they know about the weather at resort A, they will seek out information about the weather at resort B. The hypothesis also predicts that once these attributes have been fleshed out, people will generally believe that they can make a decision as to which is the best resort. In summary, the initial specification of the decision acts as a focus for both the information that individuals seek and their ultimate decision, and consequently they will tend to overlook other attributes. For example, they may not consider the resort’s hostility to tourists or any other factor not included in the original specification.

The last of the principal predictions of the mental model theory is the disjunction effect, which basically means that people find it easier to reason from one model than from multiple models. A disjunction effect occurs when a person will do an action if a specific event occurs, and will do the same action if the specific event does not occur, but will not do the same action while they are uncertain whether the specific event will occur (Shafir & Tversky, 1992). This is a violation of the sure-thing principle in decision theory: if you would choose an option both when an event occurs and when it does not, you should also choose it when the outcome of the event is unknown.

The disjunction effect is explained by mental model theory. “If the information available about a particular option is disjunctive in form, then the resulting conflict or load on working memory will make it harder to infer a reason for choosing this option in comparison to an option for which categorical information is available. The harder it is to infer a reason for a choice, the less attractive that choice is likely to be.” (Legrenzi, Girotto, & Johnson-Laird, 1993, p. 64) It has been found that “Problems requiring one mental model elicited more correct responses than problems requiring multiple models, which in turn elicited more correct answers than multiple model problems with no valid answers.” (Schaeken, Johnson-Laird, & d’Ydewalle, 1994, p. 205) An answer is valid if it is not invalidated by another model; if you have two models (A) and (¬A), then they invalidate each other.

The three prob­lems be­low will illus­trate the differ­ence be­tween one model, multi-model and multi-model with no valid an­swer prob­lems.

This first prob­lem can be solved by us­ing only one model.

Premises:

  1. The sus­pect ran away be­fore the bank man­ager was stabbed

  2. The bank man­ager was stabbed be­fore the clerk rang the alarm.

  3. The po­lice ar­rived at the bank while the bank man­ager was be­ing stabbed

  4. The re­porter ar­rived at the bank while the clerk rang the alarm.

What is the tem­po­ral re­la­tion be­tween the po­lice ar­riv­ing at the bank and the re­porter ar­riv­ing at the bank?

This problem yields the model below, with time running from left to right (simultaneous events are joined by ‘/’), which supports the answer that the police arrived before the reporter. The premises do not support any model that refutes this answer and so it is valid. That is, it must be true given that the premises are true.

Model: Suspect runs away → Bank manager was stabbed / The police arrived at the bank → Clerk rang the alarm / The reporter arrived at the bank

This sec­ond prob­lem in­volves the gen­er­a­tion of three pos­si­ble mod­els. This prob­lem differs from the first only in the or­der of events.

Premises:

  1. The sus­pect ran away be­fore the bank man­ager was stabbed

  2. The clerk rang the alarm be­fore the bank man­ager was stabbed

  3. The po­lice ar­rived at the bank while the clerk rang the alarm

  4. The re­porter ar­rived at the bank while the bank man­ager was be­ing stabbed

What is the tem­po­ral re­la­tion be­tween the po­lice ar­riv­ing at the bank and the re­porter ar­riv­ing at the bank?

This problem yields the three models below, with time running from left to right. All three models support the answer that the police arrived before the reporter, and the premises do not support any model that refutes this answer, so it is valid.

Model 1: Suspect runs away → Clerk rang the alarm / The police arrived at the bank → Bank manager was stabbed / The reporter arrived at the bank

Model 2: Clerk rang the alarm / The police arrived at the bank → Suspect runs away → Bank manager was stabbed / The reporter arrived at the bank

Model 3: Suspect runs away / Clerk rang the alarm / The police arrived at the bank → Bank manager was stabbed / The reporter arrived at the bank

This third prob­lem in­volves the gen­er­a­tion of three pos­si­ble mod­els and there is also no valid an­swer. This prob­lem is the hard­est out of the three and be­cause of this it had the most in­cor­rect an­swers and took the longest to solve.

Premises:

  1. The sus­pect ran away be­fore the bank man­ager was stabbed

  2. The clerk rang the alarm be­fore the bank man­ager was stabbed

  3. The po­lice ar­rived at the bank while the clerk rang the alarm

  4. The re­porter ar­rived at the bank while the sus­pect was run­ning away

What is the tem­po­ral re­la­tion be­tween the po­lice ar­riv­ing at the bank and the re­porter ar­riv­ing at the bank?

This problem yields the three models below, with time running from left to right. There is no valid answer: to be valid, the same temporal relation between the two arrivals would have to hold in all of the models.

Model 1: Suspect runs away / The reporter arrived at the bank → Clerk rang the alarm / The police arrived at the bank → Bank manager was stabbed

Model 2: Clerk rang the alarm / The police arrived at the bank → Suspect runs away / The reporter arrived at the bank → Bank manager was stabbed

Model 3: Suspect runs away / Clerk rang the alarm / The police arrived at the bank / The reporter arrived at the bank → Bank manager was stabbed
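To see mechanically why the first two problems have a valid answer and the third does not, here is a rough Python sketch (my own illustration; events are collapsed to single time points, which is enough for these premises). It enumerates the time-lines consistent with each premise set and collects every relation between the two arrivals that they allow:

```python
from itertools import product

def arrival_relations(before_pairs, while_pairs):
    """Return every temporal relation between the police's and the reporter's
    arrival that is consistent with the premises."""
    events = ["suspect", "alarm", "stab"]
    relations = set()
    for times in product(range(3), repeat=3):     # candidate time points, ties allowed
        t = dict(zip(events, times))
        if any(not (t[a] < t[b]) for a, b in before_pairs):
            continue                              # violates an "A before B" premise
        police, reporter = t[while_pairs["police"]], t[while_pairs["reporter"]]
        if police < reporter:
            relations.add("police first")
        elif police > reporter:
            relations.add("reporter first")
        else:
            relations.add("same time")
    return relations

# Problem 1: suspect < stab < alarm; police while stab, reporter while alarm.
print(arrival_relations([("suspect", "stab"), ("stab", "alarm")],
                        {"police": "stab", "reporter": "alarm"}))     # {'police first'}

# Problem 2: suspect < stab, alarm < stab; police while alarm, reporter while stab.
print(arrival_relations([("suspect", "stab"), ("alarm", "stab")],
                        {"police": "alarm", "reporter": "stab"}))     # {'police first'}

# Problem 3: suspect < stab, alarm < stab; police while alarm, reporter while suspect.
print(arrival_relations([("suspect", "stab"), ("alarm", "stab")],
                        {"police": "alarm", "reporter": "suspect"}))  # all three relations: no valid answer
```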

A common example of problems that display the disjunction effect is problems involving meta-reasoning, specifically reasoning about what others are reasoning. Consider the following problem:

Three wise men who can only tell the truth are told to stand in a straight line, one in front of the other. A hat is put on each of their heads. They are told that each of these hats was selected from a group of five hats: two black hats and three white hats. The first man, standing at the rear of the line, can see both of the men in front of him and their hats. The second man, in the middle, can see only the one man in front of him and his hat. The last man, at the front of the line, cannot see either of the men behind him or their hats.

Each wise man must say what colour his hat is if he knows it. If he doesn’t, he must say: “I don’t know”. The wise men cannot interact with each other in any other way. The first wise man, who could see the two hats in front of him, said, “I don’t know”. The second wise man heard this and then said, “I don’t know”. What did the last wise man, who had heard the two previous answers, then say?

This problem can be solved by considering the deductions and models of each of the wise men. The first wise man can deduce the colour of his own hat only if the two wise men in front of him both have black hats. Knowing this, and that the first wise man said he did not know, the second wise man constructs the three models below, which represent all the possibilities that remain given the first wise man’s answer (the first wise man’s own hat, marked ‘?’, stays open).

Model 1: First wise man: ?   Second wise man: White   Third wise man: White

Model 2: First wise man: ?   Second wise man: Black   Third wise man: White

Model 3: First wise man: ?   Second wise man: White   Third wise man: Black

The third wise man deduces that if the second wise man had seen a black hat he would have known the colour of his own hat, i.e. only the third model would have been possible. Since the second wise man said that he did not know the colour of his hat, the first and second models are the only possibilities that could be true. In both of these possibilities the third wise man’s hat colour is white. Therefore, the third wise man knows that the colour of his hat is white.
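A compact Python sketch (my own illustration, using the same numbering, where the first wise man stands at the rear and the third at the front) shows the same reasoning as successive eliminations over the possible hat assignments:

```python
from itertools import product

HAT_POOL = {"black": 2, "white": 3}    # the five hats the three worn hats were drawn from

def drawable(hats):
    """A tuple of colours is possible only if it doesn't exceed the pool."""
    return all(hats.count(colour) <= n for colour, n in HAT_POOL.items())

# Worlds are (first, second, third) = (rear, middle, front) hat colours.
worlds = [w for w in product(HAT_POOL, repeat=3) if drawable(w)]

# The first wise man sees the two hats in front of him; he would know his own
# colour only if both were black (the two black hats would be used up).
# His "I don't know" removes every world where that is the case.
worlds = [w for w in worlds if not (w[1] == "black" and w[2] == "black")]

# The second wise man sees only the front hat.  Given the first answer, he
# would know his own colour only if the front hat were black, so his
# "I don't know" removes those worlds too.
worlds = [w for w in worlds if w[2] != "black"]

# The colours still possible for the third wise man's own hat:
print({w[2] for w in worlds})    # {'white'}  -> he answers "white"
```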

The type of problem above can be generalised to any number of wise men (n), as long as there are n white hats and n-1 black hats. People often consider these problems to be hard. This is due to three main factors:

  • The problems place a considerable load on working memory because a reasoner has to construct a model of one person’s model of another person’s model of the situation. This gets much harder with problems involving more participants, as each participant would need to hold a model of the models of all previous participants. In the above example, the last wise man needs to know the model of the first wise man and use this to infer the model of the second wise man.

  • Th­ese types of prob­lems of­ten can only be solved by con­struct­ing and re­tain­ing dis­junc­tive sets of mod­els. We have already cov­ered how dis­junc­tive mod­els are a source of difficulty be­cause of their load on work­ing mem­ory. In this ex­am­ple, the last wise man’s rea­son­ing de­pends on the three mod­els that the sec­ond wise man would have cre­ated due to the first wise man’s re­sponse.

  • The re­cur­sive strat­egy re­quired to solve this prob­lem is not one that peo­ple are likely to use with­out prior train­ing. This is be­cause they have to re­flect upon the prob­lem and dis­cover for them­selves the in­for­ma­tion that is la­tent in each of the wise man’s an­swers.

The above should suffice as an overview of the prin­ci­ples be­hind the men­tal model the­ory and the limi­ta­tions to hu­man rea­son­ing that they pre­dict.

There are a few other assumptions and principles of mental models. (The assumptions have been taken from the Mental Models and Reasoning website.) Three of these have already been covered and they are:

  1. The prin­ci­ple of truth (men­tal mod­els rep­re­sent only what is true)

  2. Fo­cus­ing effect (peo­ple in gen­eral fail to make a thor­ough search for al­ter­na­tives when mak­ing de­ci­sions)

  3. The greater the num­ber of al­ter­na­tive mod­els needed, the harder the prob­lem is.

The other as­sump­tions in­clude:

  1. “Each men­tal model rep­re­sents a pos­si­bil­ity, and its struc­ture and con­tent cap­ture what is com­mon to the differ­ent ways in which the pos­si­bil­ity might oc­cur.” (John­son-Laird P. , 1999, p. 116). For ex­am­ple, the premise “There is not a king in the hand, or else there is an ace in the hand” leads to the cre­ation of two mod­els. One for each pos­si­bil­ity (¬ king) and (ace).

  2. “Mental models are iconic insofar as that is possible, i.e., the structure of a model corresponds to the structure of what it represents” (Khemlani, Lotstein, & Johnson-Laird, 2014, p. 2). A visual image is iconic, but icons can also represent states of affairs that cannot be visualized, for example negation (Johnson-Laird, 2010). Although mental models are iconic, visual imagery is not the same as building a model, so visual imagery is not a prerequisite for reasoning. In fact, it can even be a burden: “If the content yields visual images that are irrelevant to an inference, as it does with visual relations, reasoning is impeded and reliably takes longer. The vivid details in the mental image interfere with thinking.” (Knauff & Johnson-Laird, 2002, p. 370) Four sets of relations have been distinguished in this research (Knauff, Fangmeier, Ruff, & Johnson-Laird, 2003, p. 560):

    • Vi­su­ospa­tial re­la­tions that are easy to en­visage vi­su­ally and spa­tially, such as ‘‘above’’ and ‘‘be­low’’

    • Vi­sual re­la­tions that are easy to en­visage vi­su­ally but hard to en­visage spa­tially, such as ‘‘cleaner’’ and ‘‘dirt­ier’’

    • Spa­tial re­la­tions that are difficult to en­visage vi­su­ally but easy to en­visage spa­tially, such as ‘‘fur­ther north’’ and ‘‘fur­ther south’’

    • Con­trol re­la­tions that are hard to en­visage both vi­su­ally and spa­tially, such as ‘‘bet­ter’’ and ‘‘worse’’

  3. The mental model theory gives a ‘dual process’ account of reasoning. The deliberative system (system 2) has access to working memory, so it can carry out recursive processes, such as a search for alternative models and the assignment of numbers to intuitive probabilities. The intuitive system (system 1) cannot, as it does not have access to working memory and can cope with only one model at a time (Johnson-Laird, 2013, pp. 132-133).

  4. The mental model theory explains induction, deduction and abduction. Deductions are inferences that either maintain or throw away information. Inductions are inferences that increase information, and abduction is a special case of induction that also introduces new ideas to explain something.

  5. The meanings of terms such as ‘if’ and ‘or’ can be modulated by context and knowledge. There are a couple more principles that apply to conditionals; see (Johnson-Laird & Byrne, 2002). An important one is the principle of pragmatic modulation, which holds that “the context of a conditional depends on general knowledge in long-term memory and knowledge of the specific circumstances of its utterance. This context is normally represented in explicit models. These models can modulate the core interpretation of a conditional, taking precedence over contradictory models. They can add information to models, prevent the construction of otherwise feasible models, and aid the process of constructing fully explicit models.” (Johnson-Laird & Byrne, 2002, p. 659) An example of this principle in action is that people will often use their geographical knowledge (Stockholm is a city in Sweden) to modulate the disjunction: Steve is in Stockholm or he is in Sweden. Unlike most other disjunctions, this one yields the definite conclusion that Steve is in Sweden.

We have previously been dealing with simple mental models. It is important to note that not all mental models are simple; in fact, they can be quite complicated. Below are some general features of mental models.

Po­ten­tially enduring

There is debate about whether mental models are located in working memory or long-term memory. One possibility is that humans reason by constructing mental models of situations, events and processes in working memory. These models are iconic, but this leaves open the question of the nature of the representation in long-term memory (Nersessian, 2002, p. 143). Describing mental models as potentially enduring is meant to capture the idea that, while the minutiae or even large parts of a mental model may be altered, deleted or added to, the overall mental model or gist can endure in memory in some form over years or decades. So, the mental model is vague, variable and hazy, but can be fleshed out using constructs in long-term memory using system 2 processes. “The mental model is fuzzy. It is incomplete. It is imprecisely stated. Furthermore, within one individual, a mental model changes with time and even during the flow of a single conversation.” (Forrester, 1961, p. 213)

Vari­able explicitness

Mental models are described as being of variable explicitness because “the nature of model manipulation can range from implicit to explicit” (Rouse & Morris, 1985, p. 21). Mental models that are described as implicit are the mental-model-like structures that are outside of conscious awareness, whereas explicit ones are the opposite: explicit mental models are the ones of which we have full awareness. They are also often the models that consist of the full set of possibilities. Implicit models are normally generated with system 1 processes without awareness and then later fleshed out into explicit models using system 2 processes. So, implicit and explicit refer to the method of model manipulation rather than a static characteristic of the model. This definition of a mental model captures the essence of what an implicit mental model is: “Mental models are deeply ingrained assumptions, generalizations, or even pictures or images that influence how we understand the world and how we take action. Very often, we are not consciously aware of our mental models or the effects they have on our behaviour.” (Senge, 1990, p. 11)

Limited

Just as real systems vary in size and complexity, mental models do as well. However, due to the underlying cognitive structures of the human brain and the expected uses of mental models, we can put general upper and lower bounds on this variation. These bounds are driven by factors such as:

  1. The requirement for mental models to be pragmatic. This requirement leads to mental models being simplifications of the systems that they represent. If they were 1:1 models, then they would not be useful, as you could just interact with the system itself. This means that a mental model must, by necessity, be missing some information about the system.

  2. Bounded rationality. “The number of variables [people] can in fact properly relate to one another is very limited. The intuitive judgment of even a skilled investigator is quite unreliable in anticipating the dynamic behavior of a simple information-feedback system of perhaps five or six variables.” (Forrester, 1994, p. 60) A mental model needs to be small enough to be held in short-term memory, which is generally considered to have a capacity of about seven, plus or minus two, “chunks” of information (Shiffrin & Nosofsky, 1994, p. 360). This limit is flexible in the sense that the amount of information that can be organized meaningfully into a chunk can increase with experience and expertise. A reasonable lower bound is two variables and two causal relationships. Anything less than this, e.g. a single causation such as “If X increases, Y increases”, should be called an assumption or belief, depending on confidence level, rather than a model.

  3. People are generally poor at developing mental models that handle unintuitive concepts. “People generally adopt an event-based, ‘open-loop’ view of causality, ignore feedback processes, fail to appreciate time delays between action and response and in the reporting of information, do not understand stocks and flows, and are insensitive to nonlinearities which may alter the strengths of different feedback loops as a system evolves.” (Paich & Sterman, 1993, p. 3)

The term limited is not meant to im­ply that men­tal mod­els can’t be com­plex or have sig­nifi­cant effects on our rea­son­ing and be­havi­our.

“Mental models can be simple generalizations such as ‘people are untrustworthy,’ or they can be complex theories, such as my assumptions about why members of my family interact as they do. But what is most important to grasp is that mental models are active—they shape how we act. If we believe people are untrustworthy, we act differently from the way we would if we believed they were trustworthy. [...] Two people with different mental models can observe the same event and describe it differently, because they’ve looked at different details.” (Senge, 1990, p. 160)

They de­ter­mine how effec­tively we in­ter­act with the sys­tems they model

It has been purported that faulty mental models were one of the main factors leading to the delay in evacuating the inhabitants of the town near the Chernobyl explosion (Johnson-Laird, 1994, pp. 199-200). The engineers in charge at Chernobyl initially inferred that the explosion had not destroyed the reactor. Such an event was unthinkable from their previous experience, and they had no evidence to suppose that it had occurred. The following problems with how people use their mental models have been found (Norman, 1983, pp. 8-11):

  • Peo­ple have limited abil­ities to “run” their men­tal mod­els. They have limited men­tal op­er­a­tions that they can com­plete. There­fore, in gen­eral the harder some­thing is to use the less it will be used if there are eas­ier al­ter­na­tives available.

  • Men­tal mod­els are un­sta­ble. Peo­ple for­get the de­tails of the sys­tem. This hap­pens fre­quently when those de­tails or the whole sys­tem has not been used for some time. Men­tal mod­els are also dy­namic. They are always be­ing up­dated.

  • Men­tal mod­els do not have firm bound­aries: similar de­vices and op­er­a­tions can get con­fused with each other.

  • Mental models are “unscientific” in the sense that people often maintain superstitions: behaviour patterns which, even though not needed, are continued because they cost little in physical effort but save mental effort or provide comfort. The reason for this is that people often have uncertainty about the mechanism, but experience with actions and outcomes. For example, when using a calculator people may press the clear button more than is necessary because, in the past, not clearing has resulted in problems.

  • Mental models are parsimonious: often people do extra physical operations rather than the mental planning that would allow them to avoid those actions; they are willing to trade off extra physical actions for reduced mental complexity. This is especially true where the extra actions allow one simplified rule to be applied across multiple systems, thus minimizing the chance of confusion.

Internal

The term men­tal model is meant to make men­tal mod­els dis­tinct from con­cep­tual mod­els. Men­tal mod­els are cog­ni­tive phe­nom­ena that ex­ist only in the mind. Con­cep­tual mod­els are rep­re­sen­ta­tions or tools used to de­scribe sys­tems.

When con­sid­er­ing men­tal mod­els four differ­ent com­po­nents need to be un­der­stood (Nor­man, 1983, pp. 7-8) :

  • Tar­get sys­tem—the sys­tem that the per­son is learn­ing or using

  • Conceptual model of the target system - invented to provide an appropriate representation of the target system. Appropriate in this sense means accurate, consistent and complete. Conceptual models are invented by teachers, designers, scientists and engineers. An example would be a design document which users could use to update their mental models so that they are accurate. Conceptual models are tools devised for the purpose of understanding or teaching systems. Mental models are what people actually have in their heads and what guides them in their interaction with the target system.

  • User’s mental model of the target system - mental models are naturally evolving and will often be updated after interactions with the system. That is, as you interact with the system you learn how it works. The mental model is not necessarily technically accurate, and often it cannot be, because a model is only useful insofar as it is a simplification of the system it represents. Mental models are constrained or built using information from the user’s technical background, previous experience with similar systems, and the structure of the human information processing system.

  • Scien­tist’s con­cep­tu­al­iza­tion of a men­tal model—is a model of the sup­posed men­tal model

Conceptualising another person’s mental model can be difficult, both in explaining what is involved and in doing it well. The following symbols will be used:

  • t - represents the target system

  • C(t) - represents the conceptual model of the target system

  • M(t) - the user’s mental model of the target system

  • C(M(t)) - our conceptualization of the mental model that we think the user has

Three func­tional fac­tors that ap­ply to M(t) and C(M(t)) are (Nor­man, 1983, p. 12):

  • Belief Sys­tem: A per­son’s men­tal model re­flects his or her be­liefs about the phys­i­cal sys­tem ac­quired ei­ther through ob­ser­va­tion, in­struc­tion or in­fer­ence. C(M(t)) should con­tain the rele­vant parts of a per­son’s be­lief sys­tem. This means that C(M(t)) should cor­re­spond with how the per­son be­lieves t to be, which may not nec­es­sar­ily be how it ac­tu­ally is.

  • Observability: C(M(t)) should correspond with the parameters and states of t that the person’s M(t) can observe or infer. If a person presumably does not know about some aspect of t, then it should not be in C(M(t)).

  • Pre­dic­tive Power: the pur­pose of a men­tal model is to al­low the per­son to un­der­stand and to an­ti­ci­pate the be­havi­our of a phys­i­cal sys­tem. It is for this rea­son that we can say that men­tal mod­els have pre­dic­tive power. Peo­ple can “run” their mod­els men­tally. There­fore, C(M(t)) should also in­clude the rele­vant in­for­ma­tion pro­cess­ing and knowl­edge struc­tures that make it pos­si­ble for the per­son to use M(t) to pre­dict and un­der­stand t. Pre­dic­tion is one of the ma­jor as­pects of one’s men­tal mod­els and this must be cap­tured in any de­scrip­tion of them.

References

  • Doyle, J., & Ford, D. (1998). Men­tal Models Con­cepts for Sys­tem Dy­nam­ics Re­search.

  • For­rester, J. (1961). In­dus­trial Dy­nam­ics. New York: Wiley.

  • For­rester, J. (1994). Poli­cies, de­ci­sions, and in­for­ma­tion sources for mod­el­ing. In J. More­croft, & J. Ster­man, Model­ing for Learn­ing. Port­land: Pro­duc­tivity Press.

  • Gold­varg, Y., & John­son-Laird, P. (2000). Illu­sions in modal rea­son­ing. Me­mory & Cog­ni­tion, 282-294.

  • Johnson-Laird, P. (2010). Mental models and human reasoning. PNAS.

  • John­son-Laird, P. (1994). Men­tal mod­els and prob­a­bil­is­tic think­ing. Cog­ni­tion, 189-209.

  • John­son-Laird, P. (1999). De­duc­tive Rea­son­ing. An­nual Re­views, 109-135.

  • John­son-Laird, P. (2013). Men­tal mod­els and cog­ni­tive change. Cog­ni­tive Psy­chol­ogy, 131-138.

  • John­son-Laird, P., & Bell, V. (1998). A Model The­ory of Mo­dal Rea­son­ing. Cog­ni­tive Science, 25-51.

  • Johnson-Laird, P., & Byrne, R. (2002). Conditionals: A Theory of Meaning, Pragmatics, and Inference. Psychological Review, 646-678.

  • John­son-Laird, P., & Savary, F. (1996). Illu­sory in­fer­ences about prob­a­bil­ities. Acta Psy­cholog­ica, 69-90.

  • Johnson-Laird, P., Girotto, V., & Legrenzi, P. (2005, March 18). Retrieved from http://www.si.umich.edu/ICOS/gentleintro.html. Second link, as the first no longer works: http://musicweb.ucsd.edu/~sdubnov/Mu206/MentalModels.pdf

  • Khem­lani, S., Lot­stein, M., & John­son-Laird, P. (2014). A men­tal model the­ory of set mem­ber­ship. Wash­ing­ton: US Naval Re­search Lab­o­ra­tory.

  • Knauff, M., & John­son-Laird, P. (2002). Vi­sual imagery can im­pede rea­son­ing. Me­mory & Cog­ni­tion, 363-371.

  • Knauff, M., Fang­meier, T., Ruff, C., & John­son-Laird, P. (2003). Rea­son­ing, Models, and Images: Be­hav­ioral Mea­sures and Cor­ti­cal Ac­tivity. Cog­ni­tive Neu­ro­science, 559–573.

  • Le­grenzi, P., Girotto, V., & John­son-Laird, P. (1993). Fo­cussing in rea­son­ing and de­ci­sion mak­ing. Cog­ni­tion, 37-66.

  • Ners­es­sian, N. (2002). The cog­ni­tive ba­sis of model-based rea­son­ing in sci­ence. In P. Car­ruthers, S. Stich, & M. Sie­gal, The Cog­ni­tive Ba­sis of Science (pp. 133-153). Cam­bridge Univer­sity Press.

  • Nor­man, D. (1983). Men­tal Models. In D. Gent­ner, & A. Stevens, Men­tal Models (pp. 7-14). LEA.

  • Paich, M., & Ster­man, J. (1993). Mar­ket Growth, Col­lapse and Failures to Learn from In­ter­ac­tive Si­mu­la­tion Games. Sys­tem Dy­nam­ics, 1439-1458.

  • Rouse, W., & Morris, N. (1985). On looking into the black box: Prospects and limits in the search for mental models. Norcross.

  • Schaeken, W., John­son-Laird, P., & d’Yde­walle, G. (1994). Men­tal mod­els and tem­po­ral rea­son­ing. Cog­ni­tion, 205-334.

  • Senge, P. (1990). The Fifth Dis­ci­pline: The Art and Prac­tice of the Learn­ing Or­ga­ni­za­tion. New York: Dou­ble­day.

  • Shafir, E., & Tver­sky, A. (1992). Think­ing through un­cer­tainty: non­con­se­quen­tial rea­son­ing and choice. Cog­ni­tive Psy­chol­ogy, 449-474.

  • Shiffrin, R., & Nosofsky, R. (1994). Seven Plus or Minus Two: A com­men­tary On Ca­pac­ity Limi­ta­tions. Psy­cholog­i­cal Re­view, 357-361.

  • Yang, Y., & John­son-Laird, P. (2000). Illu­sions in quan­tified rea­son­ing: How to make the im­pos­si­ble seem pos­si­ble, and vice versa. Me­mory & Cog­ni­tion, 452-465.