An Introduction to Decision Modeling

(Cross-posted from Medium.)

Decision-making is life. Over time, our decisions carve an identity for ourselves and our organizations, and it is our decisions, more than anything else, that determine how we are remembered after we’re gone. Despite their importance, though, we barely pay attention to most of the decisions we make. Biology has programmed in us a powerful instinct to make decisions using our intuitions rather than our conscious selves whenever possible. There are good reasons for this; if we had to think about every little decision we made, we’d never get anything done. But for all its advantages, the worst thing about intuition is that it’s almost impossible for us to ignore — even when it’s clearly leading us astray.

Scientists have demonstrated that intuition is best suited to situations that we’ve seen hundreds or even thousands of times before — contexts where we’ve had a lot of practice and clear and accurate feedback on how well our previous decisions worked out. That’s great for decisions like how much to press the brake pedal when you see a stop sign coming up. The most important decisions in our lives, though, almost never fit this pattern. Their importance and high stakes almost by definition make them rare and unfamiliar, which is why many of us feel flummoxed in situations like these. Generally, we’ll respond in one of two ways. The more cautious among us are acutely aware of the stakes. Our anxiety levels go up, we turn to friends and colleagues for advice, and in organizational contexts, we schedule meeting after meeting in hopes of resolving the dilemma (or better yet, getting someone else to resolve it for us). Others of us confidently choose a path forward, but with a false certainty rooted in the fantasy that we understand our world better than we actually do. We avoid analysis paralysis, but greatly increase the chance of leading ourselves and others down the road to disaster.

Neither of these responses is much help in making better decisions, because neither addresses the core issue. Complex decisions require us to compare the likelihood and desirability of many possible futures on multiple, disparate, and often conflicting criteria. That’s something our intuitions just aren’t naturally equipped to do. So long as our decision-making strategies don’t address this core problem, they are doomed to fail us more often than we’d like.

Thankfully, there is a better way. The secret to resolving complex, risky dilemmas with justified ease and confidence is to model your decisions explicitly. Our intuitions aren’t able to do this on their own, but fortunately, modern computing technology is more than up to the task. That’s why I like to think of decision modeling as a kind of technology-enhanced decision-making. Unlike with full-on artificial intelligence, we are not asking computers to make our decisions for us. Rather, we are leveraging the power of computers to do what we humans can’t do well, freeing our minds to concentrate on what we’re actually good at. At its best, modeling our decisions can help us make the very human exercise of decision-making not only more likely to lead to the outcomes we want, but more instinctively satisfying as well.

Vax to the Max: A Grantmaking Case Study

So how does it work? Let’s say you run a grant program and you’re deciding whether or not to approve a grant proposal. To keep things simple for this example (don’t worry, I’ll get to more complicated applications later), we’ll assume that there’s only one goal of your program at this particular moment: to deliver life-saving vaccines. Most of the organizations currently in your grant portfolio focus on direct service delivery, doing good work but at modest scale. But the prospective applicant in front of you — let’s call them Vax to the Max — is proposing an intriguing new strategy, one that offers tremendous upside: advocacy. By getting the government involved to provide appropriate incentives and funding, the theory goes, the project could usher in a new wave of vaccinations that no current grantee is able to promise under the existing system.

Vax to the Max’s grant proposal claims that this new strategy will result in 50,000 new vaccinations. Should you take that number at face value? The answer is probably not. For one thing, of course, the applicant has a strong incentive to provide you with an optimistic picture of its projected impact. But even assuming that estimate isn’t biased at all, there’s another problem, which is that it’s just one number. To really do modeling right, we need to think in terms of the probabilities of different outcomes. Sure, there could be 50,000 vaccinations…but one could easily imagine 25,000 or 40,000 or maybe even 60,000 instead. It’s impossible to know for sure in advance, so we have no choice but to do some guesswork.

Specifically, to get a handle on all these possibilities, we want to estimate a confidence interval for the number of new vaccinations. For this example, we’ll use a 90% confidence interval — i.e., you think it’s 95% likely that the true number of new vaccinations will be above some amount and 95% likely that it will be below some other amount. You can (and should) train yourself to get good at these kinds of estimates via a fun mental exercise called calibrated probability assessment, or calibration for short. But for a first approximation, try asking yourself this question: what is the biggest (or smallest) number I could imagine that’s still technically possible?

Let’s say you’ve done that exercise and determined that you’re 90% sure the number of new vaccinations made possible by the policy changes, if enacted, is between 100 and 60,000. That’s a huge range! But this is the sort of thing that’s genuinely hard to predict, so we want to be careful not to be overconfident.
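For a wide, positive-only range like this, a skewed distribution such as a lognormal is the natural fit (that Guesstimate uses exactly this fit is an assumption on my part, though the arithmetic below does reproduce the 16K mean you’ll see in a moment). Here is a sketch of how the two bounds translate into a distribution:

```python
import math

# The 90% confidence interval from the exercise above
low, high = 100, 60_000

# If vaccinations are lognormally distributed, their logarithm is normally
# distributed, and log(low) and log(high) are the 5th and 95th percentiles.
# 1.645 is the z-score of the 95th percentile of a standard normal.
z = 1.645
mu = (math.log(low) + math.log(high)) / 2           # mean of the underlying normal
sigma = (math.log(high) - math.log(low)) / (2 * z)  # std dev of the underlying normal

# The mean of the lognormal itself is exp(mu + sigma^2 / 2)
mean = math.exp(mu + sigma**2 / 2)
print(f"{mean:,.0f}")  # roughly 16,000
```

Notice how heavily skewed this is: the mean is far above the geometric midpoint of the range, because the long right tail (futures with tens of thousands of vaccinations) pulls the average up.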

You’ll notice in the screenshot that there’s an image of something that looks like a lopsided bell curve on the bottom right. That’s because the software I’m using (Guesstimate) calculates a Monte Carlo simulation for this estimate right there in the model. Monte Carlo simulation is a statistical technique that randomly generates thousands of scenarios from the information you feed the model. Originally developed by nuclear physicists, it’s now used to aid decision-making in everything from politics to sports and beyond. For our purposes, you can think of a Monte Carlo simulation as a sampling of the possible future lives that might unfold for you and your organization as a result of your decision. The number in large font (16K) is the average of the values across all of the simulations.
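To make the idea concrete, here’s a minimal Monte Carlo simulation in Python. It assumes the lognormal shape implied by the 100-to-60,000 interval (again, an assumption about what Guesstimate does under the hood), draws tens of thousands of possible futures, and averages them:

```python
import math
import random

random.seed(0)  # fixed seed so the run is reproducible

# Lognormal parameters implied by a 90% CI of 100 to 60,000
mu = (math.log(100) + math.log(60_000)) / 2
sigma = (math.log(60_000) - math.log(100)) / (2 * 1.645)

# Each draw is one possible future: a number of new vaccinations
scenarios = [random.lognormvariate(mu, sigma) for _ in range(50_000)]

average = sum(scenarios) / len(scenarios)
print(f"{average:,.0f}")  # on the order of 16,000, like the model's "16K"
```

Because each run draws fresh random scenarios, the average wobbles a little from run to run — which is also why, as noted later, the live model’s numbers won’t exactly match any one screenshot.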

Woohoo, 16,000 new vaccinations! But hold up — there are some other things we need to take into account here. For one thing, you’ve never worked with this organization before, and let’s just say you have less than complete confidence that its leaders can follow through on their commitments. Perhaps more importantly, this is a complex space you’re all working in. Even if Vax to the Max does a brilliant job executing on its strategy, there’s no guarantee that it will actually result in any policy changes. And if the changes are enacted, it might not be because of anything Vax to the Max did — perhaps another organization’s work or broader cultural shifts will have been more decisive factors.

Let’s put all of this into the model. To capture the contribution Vax to the Max would make to the advocacy effort, we can estimate the likelihood of the new policies being enacted with a faithful execution of the proposed strategy and without that execution. Thus, we are defining the impact of Vax to the Max’s work as the increase in the odds of those policies coming to fruition if it follows through on its commitments — in this case, a doubling of those odds from 5% to 10%. We can further estimate the probability that Vax to the Max will follow through on its strategy as described. (We’ll assume for now that they’ll only attempt the project if you fund their proposal in full.)

Putting all of this together results in an estimate of 470 new vaccinations, on average, as a direct result of funding the proposal. That’s a lot less than 16,000, but at least it’s more than zero!
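The arithmetic behind that discount can be sketched directly: the ~16,000 expected vaccinations only materialize if the policies pass, and only the *increase* in that probability is attributable to the grant. The follow-through probability below is a placeholder of my own, since the article doesn’t state the value used in the model:

```python
# Mean of the vaccination estimate from the Monte Carlo simulation above
expected_vaccinations = 16_000

p_policy_with_effort = 0.10   # odds the policies pass if the strategy is executed
p_policy_without = 0.05       # odds the policies pass anyway
p_follow_through = 0.60       # placeholder: the article doesn't give this value

# Impact = chance they execute, times the *increase* in policy odds their
# work buys, times the vaccinations those policies would unlock
impact = (p_follow_through
          * (p_policy_with_effort - p_policy_without)
          * expected_vaccinations)
print(round(impact))  # 480 with this placeholder -- the same ballpark as the model's 470
```

The striking thing is how fast the multiplication shrinks the headline number: two modest probabilities turn 16,000 into a few hundred.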

We’re not quite done, though, because if you don’t fund this proposal, it’s not like the money you would have spent on it goes away. You’ll still have it available to you and you could do something else with it instead. So what would that be?

Here’s where it’s a really good idea to have a sense of what your “default” option is. In this case, perhaps that means offering another round of funding to one of your current grantees that’s up for renewal. Let’s call these folks Maxine’s Vaccines. They’re not one of your star performers — you wouldn’t be thinking about dropping them from the portfolio if they were — but they do solid, reliable work that contributes in an incremental way to the goals of your program. You are one of their biggest funders, so failing to renew the grant could well force the organization to cut back its activities, though it’s possible its leaders could find a way to replace the funding.

Okay, so we need a variable for the vaccinations that Maxine’s Vaccines would be able to deliver with the help of a renewal grant. We should also estimate the chance that they might be able to persuade another donor to fill the gap if the grant is not renewed. Finally, similar to the last example, we should also estimate what would happen if Maxine’s Vaccines does not get the grant and cannot fill the gap. Would they shut down the organization or the vaccination program entirely? Maybe not. Lots of organizations, when faced with financial difficulties, will choose to scale down rather than close up shop entirely, especially when there are still committed sources of funding. So that uncertainty should be reflected in our estimates as well.

Which grant opportunity is likely to result in the most vaccinations? It’s not immediately obvious, and if you were trying to make this call intuitively it would have to involve a lot of guesswork. Fortunately, this is the sort of situation where modeling the problem can make things a lot easier.
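As a sketch of how the comparison works, here’s the structure of the two calculations side by side. Every number for Maxine’s Vaccines below is a placeholder I’ve invented for illustration (the article doesn’t publish them), and the Vax to the Max figure reuses the placeholder follow-through probability from earlier:

```python
# --- Option A: fund Vax to the Max (placeholder follow-through of 0.60;
#     the article's own unstated estimates yield roughly 470) ---
vax_to_the_max = 0.60 * (0.10 - 0.05) * 16_000

# --- Option B: renew Maxine's Vaccines (all numbers are invented placeholders) ---
vaccinations_with_grant = 1_500   # delivered if you renew the grant
p_gap_filled = 0.30               # chance another donor fills the gap if you don't
vaccinations_scaled_down = 600    # delivered if they must scale down instead

# Your counterfactual impact is the difference between the world where you
# renew and the probability-weighted world where you don't
without_grant = (p_gap_filled * vaccinations_with_grant
                 + (1 - p_gap_filled) * vaccinations_scaled_down)
maxines = vaccinations_with_grant - without_grant

print(round(vax_to_the_max), round(maxines))  # prints "480 630"
```

Note that Option B is also discounted by a counterfactual — the chance another funder steps in — which is exactly the kind of adjustment intuition tends to skip.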

With the information we’ve put into the model so far, we now have an estimate of the number of new vaccinations from the two options to compare side by side — the modeling moment of truth. Maybe it’s just because I’m a huge nerd, but for me this is the most magical part of building a decision model. There’s a visceral, “that’s so fucking cool!” excitement in seeing the big reveal, because unlike with many research and analysis projects, this technique actually gives you a direct and straightforward answer to the question foremost on a decision-maker’s mind: what should I do next?

As it turns out, with the assumptions we’ve given it, the model thinks your next move should be to call up Maxine’s Vaccines to tell them you’re renewing their grant. Vax to the Max has a compelling story to offer, but the cumulative impact of the question marks means that funding them most likely means fewer people will be vaccinated overall.

Here’s the full, live version of the model if you’d like to play with it further. Note that the model is re-run with new simulations each time you open it, so the numbers may be slightly different from the screenshots above.

Now, is this the end of the story? It depends. If you feel comfortable making the decision with the information you have available, that’s fine. Just breaking down the situation concretely like this is already a big improvement over trying to eyeball your way through it. But the real potential of this method lies in the fact that, if the stakes are high enough, you can use the model to help you come up with targeted research strategies to try to narrow your range of uncertainty for some of these variables so that you can move forward even more confidently. We’ll talk about how to do that in another installment.

So there you have it! I should note that I intentionally kept this decision model pretty basic for the sake of clarity, so if you noticed things about it that seem incomplete or not totally true-to-life, that’s probably why. For instance, we could have contemplated a multi-year time span, optimized for more than one goal, worked with objectives that are harder to quantify and measure, looked at different types of probability distributions, and more. I’ll try to cover some of these ideas in future articles, but in general a good rule of thumb is that if your model isn’t sophisticated enough to do the job, there’s probably a lot you can do to improve it that you may not have thought about. It may well be the case that you’ll get more mileage from keeping at it than just giving up and making the decision the old way.

In the meantime, hopefully this gives you a taste of what’s possible with this kind of methodology, and why it can be so helpful in situations where our intuitions aren’t giving us a clear answer. For complex dilemmas, decision modeling allows for much more accurate estimates of how all the different factors are likely to interact with one another, enabling you to transcend the limitations of your intuition. And it also reminds us that decision-making is an exercise in navigating uncertainty, and while we’ll never be able to rid ourselves of that uncertainty altogether, there are tools available to us to smooth the journey.