I am a great admirer of Taleb. I've read most of his books, and I consider him one of the most important intellectuals of our time. That said, Antifragile (AF) is very uneven: it feels more like a rant than a detached, objective meditation on statistical philosophy.
I'll mention two ideas that I think are relevant to LW. The first is the concept of robustness in the face of theory failure. Taleb believes that history is dominated by Black Swans: events that shatter our best theories of the world. Therefore the naively plausible rationalist strategy of "Figure out the best theory, and act to optimize utility based on this theory" is a recipe for disaster. Systems, people, and organizations achieve a superficial form of efficiency: they seem to do well in the short run, but go bust (i.e., die, collapse, or fail) when a Black Swan hits.
Taleb proposes a different strategy: “Compose an ensemble of theories, and pick an action that will do well under every theory in the ensemble.” This action will probably seem to underperform in the short run, but it is much more likely to survive in the long run.
This concept actually has implications for XRisk prevention. Instead of using argumentation and theorizing to pick the most serious XRisk, and then acting to reduce the likelihood of that risk, you should devise a strategy that simultaneously protects against multiple forms of XRisk (the most obvious candidate in my mind is the construction of lunar or Mars colonies).
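The ensemble strategy described above can be read as maximin decision-making: choose the action whose worst-case payoff across all theories is highest. A minimal sketch, with entirely hypothetical theories, actions, and payoffs chosen for illustration:

```python
# Hypothetical theories about how the world might unfold.
theories = ["steady_growth", "market_crash", "tech_disruption"]

# Hypothetical actions, each with a payoff under every theory.
# The "optimized" action does best under its favored theory but
# fails catastrophically under the others.
actions = {
    "all_in_on_best_theory": {
        "steady_growth": 10, "market_crash": -100, "tech_disruption": -50,
    },
    "diversified_hedge": {
        "steady_growth": 4, "market_crash": 2, "tech_disruption": 3,
    },
}

def robust_choice(actions, theories):
    # Maximin: pick the action whose minimum payoff over the
    # ensemble of theories is largest.
    return max(actions, key=lambda a: min(actions[a][t] for t in theories))

print(robust_choice(actions, theories))  # diversified_hedge
```

The hedge "underperforms" under steady growth (4 vs. 10), but its worst case (2) beats the optimizer's worst case (-100), which is the whole point of the strategy.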
The second idea is about the ethics of iconoclasm. Taleb believes that in order to thrive, collectives (e.g. societies) must encourage their members to take risks. If many individuals take on risks, many will fail, but those who succeed will contribute to the health and vitality of the collective, thereby enabling it to become antifragile. The ethical tension comes from the fact that risk-taking often seems quite unappealing from the perspective of the individual, compared to the option of staying safe, thinking the same way everyone else thinks, and so on (risk-taking has lower expected utility for the individual). So the Talebian hero is the entrepreneur, the artist, the real philosopher; the person who takes a risk by stepping outside the normal ways of thinking and living, and who, if successful, shares his success with the collective.
Furthermore, in some cases individuals can do the opposite of risk-taking: they can actually secure themselves against risk at the expense of adding risk to the collective—they robustify themselves by fragilizing the collective. Taleb believes that people who do this have a special place in Hell, and he indicts a wide-ranging group of professional archetypes for this crime: academics, journalists, bankers, policy wonks, pundits, and so on. These are people who have no “skin in the game”—they sell their ideas with slick marketing and prestigious credentials, but at the end of the day they have nothing to lose if it turns out the ideas were wrong.
“Compose an ensemble of theories, and pick an action that will do well under every theory in the ensemble.” This action will probably seem to underperform in the short run, but it is much more likely to survive in the long run.
The problem with this strategy is that it may not only underperform in the short term, it may not survive in the short term. If you have competing strategies, the optimized-for-short-term ones might destroy you before you get to demonstrate your robustness.
Taleb believes that in order to thrive, collectives (e.g. societies) must encourage their members to take risks.
I am not sure there is much historical support for this idea.
Obviously we are talking about degrees of risk—both a completely stagnant society and a wildly risky one will fail. I don't see a pronounced historical trend of iconoclast-friendly societies triumphing over conformist ones. Certainly, some risk-taking is needed, but "more" is not always the right answer.