Chief Probability Officer

Stanford Professor Sam Savage (also of Probability Management) proposes that large firms appoint a “Chief Probability Officer.” Here is a description from Douglas Hubbard’s How to Measure Anything, ch. 6:

Sam Savage… has some ideas about how to institutionalize the entire process of creating Monte Carlo simulations [for estimating risk].

...His idea is to appoint a chief probability officer (CPO) for the firm. The CPO would be in charge of managing a common library of probability distributions for use by anyone running Monte Carlo simulations. Savage invokes concepts like the Stochastic Information Packet (SIP), a pregenerated set of 100,000 random numbers for a particular value. Sometimes different SIPs would be related. For example, the company’s revenue might be related to national economic growth. Sets of SIPs that are generated so they preserve these correlations are called “SLURPs” (Stochastic Library Units with Relationships Preserved). The CPO would manage SIPs and SLURPs so that users of probability distributions don’t have to reinvent the wheel every time they need to simulate inflation or healthcare costs.
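
To make the SIP/SLURP idea concrete, here is a minimal sketch of such a shared library in Python (assuming numpy; the function make_slurp, the two example variables, and all of the numbers are illustrative, not part of Savage’s actual specification):

```python
import numpy as np

N_TRIALS = 100_000  # a SIP is a pregenerated set of 100,000 trial values

# A fixed seed means every analyst replays exactly the same trials, which
# is what lets results from different models be combined later.
rng = np.random.default_rng(seed=42)

def make_slurp(means, cov):
    """Generate a set of correlated SIPs (a SLURP) in one joint draw.

    Returns a dict mapping variable name -> SIP (array of N_TRIALS values).
    Trial i of every SIP belongs to the same simulated world, so any model
    that indexes the SIPs row by row inherits the correlations for free.
    """
    names = list(means)
    draws = rng.multivariate_normal([means[n] for n in names], cov, size=N_TRIALS)
    return {name: draws[:, j] for j, name in enumerate(names)}

# Example: company revenue growth is correlated with national economic growth.
slurp = make_slurp(
    means={"gdp_growth": 0.02, "revenue_growth": 0.05},
    cov=[[0.0004, 0.0003],   # std devs of 2% and 5%,
         [0.0003, 0.0025]],  # implying a correlation of 0.3
)

# A downstream model never generates its own random numbers; it just
# indexes the shared trials.
profit = 1_000_000 * (1 + slurp["revenue_growth"]) - 950_000
print(f"P(loss) = {(profit < 0).mean():.1%}")
```

Because the trials are pregenerated and shared by index, models built by different analysts can be combined trial by trial and the correlations between their inputs still hold.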

Hubbard adds some of his own ideas to the proposal:

  • Certification of analysts. Right now, there is not a lot of quality control for decision analysis experts. Only actuaries, in their particular specialty of decision analysis, have extensive certification requirements. As with actuaries, certification in decision analysis should eventually be an independent not-for-profit program run by a professional association. Some other professional certifications now partly cover these topics but fall far short in substance. Because there was an immediate need for people to be able to prove their skills to potential employers, I began certifying individuals in Applied Information Economics.

  • Certification for calibrated estimators. As we discussed earlier, an uncalibrated estimator has a strong tendency to be overconfident, so any calculation of risk based on his or her estimates will likely be significantly understated. However, a survey I once conducted showed that calibration is almost unheard of among those who build Monte Carlo models professionally, even though a majority of them used at least some subjective estimates. (About a third of those surveyed used mostly subjective estimates.) Calibration training is one of the simplest improvements an organization can make to its risk analysis; a sketch of a basic calibration check appears after this list.

  • Well-documented procedures and templates for how models are built from the input of various calibrated estimators. It takes some time to smooth out the wrinkles in the process, but most organizations don’t need to start from scratch for every new investment they analyze; they can base their work on that of others or at least reuse their own prior models. I’ve executed nearly the same analysis procedure, following similar project plans, for a wide variety of decision analysis problems, from IT security and military logistics to entertainment industry investments. Moreover, when I applied the same method in the same organization on different problems, I often found that certain parts of the model were similar to parts of earlier models. An insurance company will have several investments that involve estimating the impact on “customer retention” and “claims payout ratio”; manufacturing-related investments will have calculations for “marginal labor costs per unit” or “average order fulfillment time.” These components don’t have to be modeled anew for each new investment problem; they can become reusable modules in spreadsheets (see the module sketch after this list).

  • Adoption of a single automated tool set. [In this book I show] a few of the many tool sets available. You can get as sophisticated as you like, but starting out doesn’t require any more than some good spreadsheet-based tools. I recommend starting simple and adopting more extensive tool sets as the situation demands.
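
The calibration point lends itself to a simple check. The sketch below (plain Python; the coverage function and the interval data are invented for illustration, not taken from Hubbard’s survey) scores a batch of stated 90% confidence intervals against the values that were eventually realized:

```python
def coverage(intervals, actuals):
    """Fraction of (low, high) intervals that contain the realized value.

    A calibrated estimator's 90% intervals should contain the truth about
    90% of the time; a much lower hit rate signals overconfidence, which
    makes any risk model built on those estimates too narrow.
    """
    hits = sum(low <= x <= high for (low, high), x in zip(intervals, actuals))
    return hits / len(actuals)

# Made-up track record: an analyst's stated 90% CIs and the actual outcomes.
stated_90pct_cis = [(10, 20), (100, 150), (0.5, 0.8), (3, 4), (40, 45)]
realized_values = [12, 170, 0.9, 3.5, 50]

print(f"hit rate: {coverage(stated_90pct_cis, realized_values):.0%}")
# 40% here, far below 90%: these intervals are badly overconfident.
```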
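
For the reusable modules, here is a minimal sketch of the idea, in Python rather than a spreadsheet for brevity. The module names echo the examples in the text, but the functions, distributions, and numbers are all invented:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # trials per simulation

def customer_retention_impact(base_customers, revenue_per_customer):
    """Reusable module: revenue impact of an uncertain change in retention."""
    retention_change = rng.normal(0.02, 0.01, N)  # stand-in for a calibrated estimate
    return base_customers * retention_change * revenue_per_customer

def claims_payout_impact(premiums):
    """Reusable module: uncertain claims paid out of a book of premiums."""
    payout_ratio = rng.beta(8, 4, N)  # stand-in for a calibrated estimate
    return premiums * payout_ratio

# Two different investment analyses reuse the same module with new inputs
# instead of rebuilding the retention model from scratch each time:
benefit_a = customer_retention_impact(base_customers=50_000, revenue_per_customer=300.0)
benefit_b = customer_retention_impact(base_customers=80_000, revenue_per_customer=120.0)
print(f"P(project A benefit < its $250k cost) = {(benefit_a < 250_000).mean():.1%}")
print(f"P(project B benefit < its $100k cost) = {(benefit_b < 100_000).mean():.1%}")
```

An insurance analysis would pull in claims_payout_impact the same way; the point is that each recurring quantity is modeled once, documented once, and then parameterized per investment.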