This was so dumb. Gelman, author of the Bayesian statistics textbook Less Wrong recommends, says this:
http://andrewgelman.com/2007/04/09/nassim_talebs_t/
“That said, I admit that my two books on statistical methods are almost entirely devoted to modeling “white swans.” My only defense here is that Bayesian methods allow us to fully explore the implications of a model, the better to improve it when we find discrepancies with data. Just as a chicken is an egg’s way of making another egg, Bayesian inference is just a theory’s way of uncovering problems which can lead to a better theory. I firmly believe that what makes Bayesian inference really work is a willingness (if not eagerness) to check fit with data and abandon and improve models often.”
That told me you are making such a crippled interpretation that you have not read any of the works and have no idea what you are saying. If all you got was that statement, you are severely behind. Please tell me why you think antifragility is just a packaged name, or whatever it is you are saying.
I’m seriously cringing at why you even mentioned Bayesian methods in this. It’s so funny: “Bayesians” almost never know probability theory, large deviations, Cramér conditions, non-ergodicity, etc. They know absolutely no extreme value theory.
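To make the extreme-value point concrete, here is a minimal sketch (stdlib Python only; the distributions and parameters are my illustrative choices, not from the original discussion) of the behavior extreme value theory deals with: under fat tails, a single observation can carry a visible share of an entire sample, whereas under thin tails the maximum becomes negligible.

```python
import random

random.seed(1)

def pareto(alpha):
    # Inverse-transform sampling from a Pareto(alpha) distribution on [1, inf).
    return random.random() ** (-1.0 / alpha)

n = 100_000
fat = [pareto(1.2) for _ in range(n)]               # fat-tailed: infinite variance
thin = [random.expovariate(1.0) for _ in range(n)]  # thin-tailed benchmark

# Share of the total carried by the single largest observation.
fat_share = max(fat) / sum(fat)
thin_share = max(thin) / sum(thin)

# Under fat tails one draw can dominate the sum; under thin tails the
# maximum grows only like log(n), so its share of the sum vanishes.
print(fat_share, thin_share)
```

The gap between the two shares is the kind of thing that averages-based intuition misses entirely.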
By the way, the assumption that a probability distribution estimated via Monte Carlo sampling will be robust over time has been demonstrated false over and over again.
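A quick sketch of why (stdlib Python; the Pareto tail index 1.1 and the sample sizes are my illustrative assumptions): when variance is infinite, independent Monte Carlo runs of the same size disagree wildly, so a distribution fitted to one sample is not stable across samples, let alone across time.

```python
import random

random.seed(42)

def pareto(alpha):
    # Inverse-transform sampling from a Pareto(alpha) distribution on [1, inf).
    return random.random() ** (-1.0 / alpha)

def sample_mean(alpha, n):
    return sum(pareto(alpha) for _ in range(n)) / n

# With alpha = 1.1 the true mean exists (alpha / (alpha - 1) = 11) but the
# variance is infinite, so equally sized runs scatter widely around it.
estimates = [sample_mean(1.1, 10_000) for _ in range(20)]
print(min(estimates), max(estimates))
```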
Antifragility is related to Jensen’s inequality, which belongs to a larger class of functional inequalities and is connected to information theory.
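The Jensen connection can be sketched in a few lines (stdlib Python; the quadratic payoff and Gaussian shocks are my illustrative choices): for a convex payoff f, Jensen’s inequality gives E[f(X)] ≥ f(E[X]), so a convex exposure gains from volatility itself, which is the mathematical signature of antifragility.

```python
import random

random.seed(0)

def convex_payoff(x):
    # A convex payoff: the response to a shock x accelerates with |x|.
    return x ** 2

shocks = [random.gauss(0.0, 1.0) for _ in range(100_000)]

mean_of_payoff = sum(convex_payoff(x) for x in shocks) / len(shocks)
payoff_of_mean = convex_payoff(sum(shocks) / len(shocks))

# Jensen's inequality for convex f: E[f(X)] >= f(E[X]).
# Here E[f(X)] is close to Var(X) = 1 while f(E[X]) is close to 0.
print(mean_of_payoff, payoff_of_mean)
```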
I’m willing to talk this over with you and have a discussion to see what you mean, but at first glance you either have not read any of his work or have completely misrepresented it. Statements like that are why people don’t know what they’re saying.
“Bayesians,” especially people who do statistics, have such a malnourished view of the convergence of limit theorems that I wonder what they’re even talking about. It takes far too long for convergence to set in. Even Jaynes gets that wrong in his book.
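The slow-convergence point can be demonstrated directly (stdlib Python; the lognormal(0, 2) choice, the sample sizes, and the function name `standardized_mean` are my illustrative assumptions): if the central limit theorem had already “kicked in” at n = 1000, standardized sample means would look like standard normals, with skewness near zero. For a skewed, heavy-ish distribution they remain strongly right-skewed.

```python
import math
import random

random.seed(7)

def standardized_mean(n, sigma=2.0):
    # Mean of n lognormal(0, sigma) draws, centered and scaled by the
    # exact lognormal mean and standard deviation.
    xs = [math.exp(random.gauss(0.0, sigma)) for _ in range(n)]
    mu = math.exp(sigma ** 2 / 2)
    sd = math.sqrt((math.exp(sigma ** 2) - 1) * math.exp(sigma ** 2))
    return (sum(xs) / n - mu) / (sd / math.sqrt(n))

# Sample skewness of the standardized means: ~0 if the CLT applied,
# clearly positive here even at n = 1000.
zs = [standardized_mean(1000) for _ in range(500)]
m = sum(zs) / len(zs)
var = sum((z - m) ** 2 for z in zs) / len(zs)
skew = sum((z - m) ** 3 for z in zs) / len(zs) / var ** 1.5
print(skew)
```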