What’s the difference between “based on computation of the odds” and “based on some model”?
Taleb is doing some handwaving here.
“Some model” in this context is just the assumption of a specific probability distribution. If, for example, you believe that the observed values are normally distributed with mean 0 and standard deviation 1, the chance of seeing a value greater than 3 (a “three-sigma value”) is about 0.13%, and the chance of seeing a value greater than 6 (a “six-sigma value”) is about 9.87e-10. So if your observations are financial daily returns, you should effectively never see a six-sigma value. The issue is that in practice you do see such values, and fairly often at that.
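For concreteness, these tail probabilities can be computed from the standard normal survival function; a minimal sketch using only the Python standard library (the function name is mine):

```python
import math

def normal_tail(x):
    """P(X > x) for a standard normal variable (mean 0, sd 1),
    computed via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

print(normal_tail(3))  # ~1.35e-3, the 0.13% figure above
print(normal_tail(6))  # ~9.87e-10
```

This is the “computation of the odds” in its simplest form: the numbers follow directly from the assumed distribution, not from the data.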
The problem with Taleb’s statement is that estimating the probabilities of seeing certain values in the future necessarily requires some model, even if an implicit one. Without one you cannot do the “computation of the odds”, unless you are happy with the conclusion that the probability of seeing a value you’ve never seen before is zero.
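To illustrate that last point: the purely “model-free” estimate of a tail probability is just the fraction of past observations beyond the threshold, and it assigns probability exactly zero to anything more extreme than the sample maximum. A hypothetical sketch (the simulated sample and function names are mine, for illustration only):

```python
import random

random.seed(0)
# Simulate roughly ten years of daily observations from a standard
# normal -- the choice of distribution here is purely illustrative.
sample = [random.gauss(0.0, 1.0) for _ in range(2520)]

def empirical_tail(sample, x):
    """Model-free estimate of P(X > x): the fraction of
    observations in the sample that exceed x."""
    return sum(1 for v in sample if v > x) / len(sample)

print(empirical_tail(sample, 3.0))  # a handful per few thousand, typically
print(empirical_tail(sample, 6.0))  # almost certainly 0.0: no six-sigma
                                    # value ever appears in such a sample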
Taleb’s criticism of the default assumption of normality in much of financial analysis is well-founded. But when he starts to rail against models and assumptions in general, he’s being silly.
Well, yeah, sure. Yvain wrote it up nicely, but the main point—that what the model says and how much you trust the model itself are quite different things—is not complicated.
To get back to Taleb, he is correct in pointing out that estimating what the tails of an empirical distribution look like is very hard, because you don’t see a lot of (or, sometimes, any) data from those tails. But if you need an estimate, you need an estimate, and saying “no model is good enough” isn’t very useful.
But surely Taleb isn’t saying “no model is good enough.” He explicitly advocates greater care in model-building and greater awareness of the risks of error, not people throwing up their hands and giving up. He says at the end:
We cannot escape it unfortunately in finance, ever since we left the stone-age, our random variables became more and more complex. We cannot escape it. We can become more robust.
Actually, yes, he is. He is not terribly consistent, but when he goes into his “philosopher” mode he rants against all models.
In fact, his trademark concept of a black swan is precisely what no model can predict.