As someone who spent over a decade as an analyst at a boutique firm and who published many forecasts, I agree this is an extremely common and frustrating problem. It’s also not a universal one, but the exceptions are hard to find. Oftentimes competent individuals doing forecasting know this but are pressured by their employers to publish technically-valid-but-misleading reports and headlines—ask me how I know. Other times smart people try to overload their models with data but then get the final result extremely wrong because of one incorrect assumption that they didn’t properly sanity check—ask me how I know.
In my experience, savvy clients already know that whatever numbers a forecaster has published are not, in themselves, all that useful. What is useful, if the forecaster is worth paying attention to, is the opportunity to pick their brain, dig into their assumptions, and ask how a change in assumptions affects the outcome. Unsavvy clients fail to appreciate this even when you, the forecaster, repeatedly and directly tell them and try to engage them in exactly that discussion. Ask me how I know.
For some definitions of some fields, an “accurate” forecast is approximately useless, because aggregate results are strongly predicted by history and constrained by things that are outside the supposed domain under consideration. “This field has continued growing 20-30% a year for 40 years despite multiple paradigm changes” implies your forecast probably should just add up to that, even if all the gears in it are wrong. You could do that on the back of a napkin in five minutes with basic background knowledge, then spend months formalizing it enough to publish. Ask me how I know.
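To make the napkin math concrete, here’s a minimal sketch of that kind of trend extrapolation (the base size, horizon, and growth rates are made-up numbers for illustration, not taken from any real report):

```python
# Back-of-napkin trend extrapolation: if the field has compounded at
# 20-30% a year for decades, a defensible headline number is just the
# current size carried forward at that rate. All inputs are hypothetical.

current_size_usd = 10e9   # assumed current market size ($10B, made up)
horizon_years = 10        # assumed forecast horizon (made up)

for cagr in (0.20, 0.25, 0.30):
    projected = current_size_usd * (1 + cagr) ** horizon_years
    print(f"{cagr:.0%} CAGR -> ${projected / 1e9:,.0f}B in {horizon_years} years")
```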
For other domains, an “accurate” forecast is approximately impossible, because small changes in assumptions about the course of future events have wild effects on the outcome, and all the wildness fails to cancel. If you make a forecast of any kind, people will keep telling you that you are or were wrong, no matter what caveats you put after the headline. And if you do have the integrity to say, “Sorry boss, I’m not going to make a numerical headline prediction, because the reasonable range of market sizes spans 4 OOMs over the next 15 years,” clients lose interest. Ask me how I know.
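A toy illustration of why the spread blows out instead of cancelling: treat the headline number as a product of a few uncertain factors and take a low and a high case for each. Every figure below is invented for the example.

```python
# Toy sensitivity check: a market-size forecast is a product of several
# uncertain factors, so low/high cases on each factor multiply into a
# huge overall spread. All numbers are invented for illustration.
import math

factors = {                      # (low case, high case)
    "addressable users":  (1e6, 5e7),
    "adoption rate":      (0.02, 0.5),
    "revenue per user":   (20.0, 500.0),
}

low = math.prod(lo for lo, _ in factors.values())
high = math.prod(hi for _, hi in factors.values())

print(f"low case:  ${low:,.0f}")
print(f"high case: ${high:,.0f}")
print(f"spread: ~{math.log10(high / low):.1f} orders of magnitude")
```

With only three uncertain factors the low and high cases already differ by more than four orders of magnitude, and a real model has many more than three.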
In both of these cases, the value is, as you say, in identifying the worthwhile questions to ask, the people to talk to, and the kinds of assumptions to make, and in being willing to put a stake in the ground and make a bet based on that. The client still needs a clear enough sense of what their own goals and metrics even are to get that far, and many don’t.
I definitely agree that better institutional decision making is a bottleneck for solving a lot of problems. I’m generally of the opinion that once you invest on the order of $100k in making a forecast, especially of a technology or market, you hit severe diminishing returns in actual quality (if you’re doing it well), while raising the institutional or psychological need to believe more strongly in the forecast’s output. And frankly I don’t see how most organizations could effectively integrate the insights from the number of people a $100M forecasting effort would involve; there would be no single forecaster capable of discussing the full results, unless the work were highly redundant. Or, in some cases, the intended result of a big expensive forecast is mostly CYA for a pre-ordained conclusion.
Happy to discuss more, here or in DMs or otherwise.