Well, yeah, sure. Yvain wrote it up nicely, but the main point—that what the model says and how much you trust the model itself are quite different things—is not complicated.
To get back to Taleb, he is correct in pointing out that estimating what the tails of an empirical distribution look like is very hard, because you don’t see a lot of (or, sometimes, any) data from those tails. But if you need an estimate, you need an estimate, and saying “no model is good enough” isn’t very useful.
But surely Taleb isn’t saying “no model is good enough.” He explicitly advocates greater care in model-building and greater awareness of the risks of error, not people throwing up their hands and giving up. He says at the end:
We cannot escape it unfortunately in finance, ever since we left the stone-age, our random variables
became more and more complex. We cannot escape it. We can become more robust.
Actually, yes, he is. He is not terribly consistent, but when he goes into his “philosopher” mode he rants against all models.
In fact, his trademark concept of a black swan is precisely an event that no model can predict.