I would say it should be the one with best expected returns. But I guess Taleb thinks the possibility of a very bad black swan overrides everything else—or at least that’s what I gathered from his recent crusade against GMOs.
If you measure “returns” in utility (rather than dollars, root mean squared error, lives, whatever) then the definition of utility (and in particular the typical pattern of decreasing marginal utility) takes care of risk aversion. But since nobody measures returns in utility your advice is good.
the definition of utility … takes care of risk aversion
I am not sure about that. If you’re risk-neutral in utility, you should be indifferent between two fair-coin bets: (1) heads 9 utils, tails 11 utils; (2) heads −90 utils, tails 110 utils. Are you?
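For concreteness, the two bets can be checked with a few lines of arithmetic (a sketch, with each bet written as an equal-probability payoff list): they have the same expected value in utils and differ only in spread.

```python
import statistics

# Two fair-coin bets, payoffs in utils.
bet1 = [9, 11]      # heads 9, tails 11
bet2 = [-90, 110]   # heads -90, tails 110

# Both bets have an expected value of 10 utils, so a
# risk-neutral agent is indifferent between them...
assert statistics.mean(bet1) == statistics.mean(bet2) == 10

# ...even though their spreads differ by a factor of 100.
print(statistics.pstdev(bet1), statistics.pstdev(bet2))
```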
I’m pretty strongly cribbing off the end of So8res’s MMEU rejection. Part of what I got from that chunk is that precisely quantifying utilons may be noncomputable, and even if not is currently intractable, but that doesn’t matter. We know that we almost certainly will not and possibly cannot actually be offered a precise bet in utilons, but in principle that doesn’t change the appropriate response, if we were to be offered one.
So there is definitely higher potential for regret with the second bet, since I could lose a bunch when I could otherwise have gained a bunch, and that would reduce my utility for that case. But for the statement ‘you will receive −90 utilons’ to be true, it would have to include the consideration of my regret. So I should not add additional compensation for the regret; it’s factored into the problem statement.
Which boils down to me being unintuitively indifferent, with even the slight uncomfortable feeling of being indifferent when intuition says I shouldn’t be, itself factored into the calculations.
We know that we almost certainly will not and possibly cannot actually be offered a precise bet in utilons
That makes it somewhat of an angels-on-the-head-of-a-pin issue, doesn’t it?
I am not convinced that utilons automagically include everything—it seems to me they wouldn’t be consistent between different bets in that case (and, of course, each person has his own personal utilons which are not directly comparable to anyone else’s).
If utilons don’t automagically include everything, I don’t think they’re a useful concept. The concept of a quantified reward which includes everything is useful because it removes room for debate; a quantified reward that included mostly everything doesn’t have that property, and doesn’t seem any more useful than denominating things in $.
That makes it somewhat of an angels-on-the-head-of-a-pin issue, doesn’t it?
Maybe, but the point is to remove object-level concerns about the precise degree of merits of the rewards and put it in a situation where you are arguing purely about the abstract issue. It is a convenient way to say ‘All things being equal, and ignoring all outside factors’, encapsulated as a fictional substance.
If utilons don’t automagically include everything, I don’t think they’re a useful concept.
Utilons are the output of the utility function. Will you, then, say that a utility function which doesn’t include everything is not a useful concept?
And I’m still uncertain about the properties of utilons. What operations are defined for them? Comparison, probably, but what about addition? Multiplication by a probability? Under which transformations are they invariant?
It all feels very hand-wavy.
a situation where you are arguing purely about the abstract issue
Which, of course, often has the advantage of clarity and the disadvantage of irrelevance...
And I’m still uncertain about the properties of utilons. What operations are defined for them? Comparison, probably, but what about addition? Multiplication by a probability? Under which transformations are they invariant?
The same properties as of utility functions, I would assume. Which is to say, you can compare them, and take a weighted average over any probability measure, and also take a positive global affine transformation (ax+b where a>0). Generally speaking, any operation that’s covariant under a positive affine transformation should be permitted.
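That claim is easy to check numerically. A minimal sketch (all names and numbers made up): expected-utility comparisons between lotteries are unchanged by any positive affine transform v(u) = a·u + b.

```python
# A lottery is a list of (probability, utility) pairs.
def expected_utility(lottery, transform=lambda u: u):
    return sum(p * transform(u) for p, u in lottery)

lottery_a = [(0.5, 9), (0.5, 11)]
lottery_b = [(0.5, -90), (0.5, 110)]

# Any positive affine transform of the utility scale...
a, b = 3.0, 7.0
affine = lambda u: a * u + b

# ...preserves the comparison between lotteries, because
# E[a*u + b] = a*E[u] + b, and a > 0 preserves order.
base = expected_utility(lottery_a) - expected_utility(lottery_b)
shifted = expected_utility(lottery_a, affine) - expected_utility(lottery_b, affine)
assert shifted == a * base
```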
Will you, then, say that a utility function which doesn’t include everything is not a useful concept?
Yes, I think I agree. However, this is another implausible counterfactual, because the utility function is, as a concept, defined to include everything; it is the function that takes world-states and determines how much you value that world. And yes, it’s very hand-wavy, because understanding what any individual human values is not meaningfully simpler than understanding human values overall, which is one of the Big Hard Problems. When we understand the latter, the former can become less hand-wavy.
It’s no more abstract than Bayes’ Theorem; both are in principle easy to use and incredibly useful, and in practice require implausibly thorough information about the world, or else heavy approximation.
The utility function is generally considered to map to the real numbers, so utilons are real-valued and all appropriate transformations and operations are defined on them.
the utility function is, as a concept, defined to include everything; it is the function that takes world-states and determines how much you value that world.
Some utility functions value world-states. But it’s also quite common to call a “utility function” something that shows/tells/calculates how much you value something specific.
The utility function is generally considered to map to the real numbers
I am not sure of that. Utility functions often map to ranks, for example.
But it’s also quite common to call a “utility function” something that shows/tells/calculates how much you value something specific.
I’m not familiar with that usage. Could you point me to a case in which the term was used that way? Naively, if I saw that phrasing I would most likely consider it akin to a mathematical “abuse of notation”, where it actually referred to “the utility of the world in which exists over the otherwise-identical world in which did not exist”, but where the subtleties are not relevant to the example at hand and are taken as understood.
I am not sure of that. Utility functions often map to ranks, for example.
Could you provide an example of this also? In the cases where someone specifies the output of a utility function, I’ve always seen it be real or rational numbers. (Intuitively world-states should be finite, like the universe, and therefore map to the rationals rather than reals, but this isn’t important.)
What? He’s crusading against GMOs? Can you give me some references?
I like his writing a lot, but I remember noting the snide way he dismissed doctors who “couldn’t imagine” that there could be medicinal benefit to mother’s milk, as if they were arrogant fools.
My sources were his tweets. Sorry I can’t give anything concrete right now, but “Taleb GMO” apparently gets a lot of hits on Google. I didn’t really dive into it, but as I understood it he takes the precautionary principle (the burden of proof of safety is on GMOs, not of danger on opponents) and adds that nobody can ever really know the risks, so the burden of proof hasn’t and can’t be met.
“They’re arrogant fools” seems to be Taleb’s charming way of saying “they don’t agree with me”.
I like him too. I loved The Black Swan and Fooled by Randomness back when I read them. But I realized I didn’t quite grok his epistemology a while back, when I found him debating religion with Dennett, Harris and Hitchens. Or rather, debating against them, for religion, as a Christian, as far as I can tell based on a version of “science can’t know everything”. (www.youtube.com/watch?v=-hnqo4_X7PE)
I’ve been meaning to ask Less Wrong about Taleb for a while, because this just seems kookish to me, but it’s entirely possible that I just don’t get something.
Or rather, debating against them, for religion, as a Christian, as far as I can tell based on a version of “science can’t know everything”. (www.youtube.com/watch?v=-hnqo4_X7PE)
“Can’t know” misses the point. “Doesn’t know” is much closer to what Taleb talks about.
Robin Hanson recently wrote a post against being a rationalist. The core of Nassim’s argument is to focus your skepticism where it matters.
The cost of mistakenly being a Christian is low. The cost of mistakenly believing that your retirement portfolio is secure is high. According to Taleb, people like the New Atheists should spend more of their time on those beliefs that actually matter.
It’s also worth noting that the new atheists aren’t skeptics in the sense that they believe it’s hard to know things. Their books are full of statements of certainty. Taleb, on the other hand, is a skeptic in that sense.
For him religion also isn’t primarily about believing in God but about following certain rituals. He doesn’t believe in cutting Chelstrons fence with Ockham’s razor.
It’s not self-evident, but the new atheists don’t make a good argument that it has a high cost. Atheist scientists in good standing like Roy Baumeister say that being religious helps with willpower.
Being a Mormon correlates with certain characteristics, and therefore Mormons sometimes recognize other Mormons. Scientific investigation found that they use markers of being healthy for doing so, and those markers can’t otherwise be used for identifying Mormons.
There’s some data that being religious correlates with longevity.
Of course those things aren’t strong evidence that being religious is beneficial, but that’s where Chesterton’s fence comes into play for Taleb. He was born Christian so he stays Christian.
While my given name is Christian, I wasn’t raised a Christian and haven’t believed in God at any point in my life, and the evidence doesn’t get me to start being a Christian, but I do understand Taleb’s position. Taleb doesn’t argue that atheists should become Christians either.
(If there is something called “Chelston’s Fence” (which my searches did not turn up), apologies.)
Chesterton’s Fence isn’t about inertia specifically, but about suspecting that other people had reasons for their past actions even though you currently can’t see any, and finding out those reasons before countering their actions. In Christianity’s case the reasons seem obvious enough (one of the main ones: trust in a line of authority figures going back to antiquity + antiquity’s incompetence at understanding the universe) that Chesterton’s Fence is not very applicable. Willpower and other putative psychological benefits of Christianity are nowhere in the top 100 reasons Taleb was born Christian.
Willpower and other putative psychological benefits of Christianity are nowhere in the top 100 reasons Taleb was born Christian.
If Christianity lowered the willpower of its members, then it would be at a disadvantage in memetic competition against other worldviews that increase willpower.
In Christianity’s case the reasons seem obvious enough
Predicting complex systems like memetic competition between different memes over the span of centuries is very hard. In cognitive psychology, experiments frequently invalidate basic intuitions about the human mind.
Trust bootstrapping is certainly one of the functions of religion, but it’s not clear that’s bad. Bootstrapping trust is generally a hard problem. Trust makes people cooperate. If I remember right, Taleb makes the point somewhere that the word “believe” derives from a word that means trust.
As far as “antiquity’s incompetence at understanding the universe” goes, understanding the universe is very important to people like the New Atheists, but for Taleb it’s not the main thing religion is about. For him it’s about practically following a bunch of rituals, such as being at church every Sunday.
If I remember right, Taleb makes the point somewhere that the word “believe” derives from a word that means trust.
I often see this argument from religions themselves or similar sources, not from those opposed to religion. Not this specific argument, but this type of argument—the idea of using the etymology of a word to prove something about the concept represented by the word. As we know or should know, a word’s etymology may not necessarily have much of a connection to what it means or how it is used today. (“malaria” means “bad air” because of the belief that it was caused by that. “terrific” means something that terrifies.)
Also consider that by conservation of expected evidence if the etymology of the word is evidence for your point, if that etymology were to turn out to be false, that would be evidence against your point. Would you consider it to be evidence against your point if somehow that etymology were to be shown false?
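The conservation here is just the law of total probability: P(H) = P(H|E)P(E) + P(H|¬E)P(¬E). A sketch with made-up numbers, showing that if observing E would raise your credence in H, then failing to observe E must lower it:

```python
p_e = 0.3           # probability of observing the evidence E
p_h = 0.6           # prior credence in the hypothesis H
p_h_given_e = 0.9   # credence in H if E is observed (E supports H)

# Solving P(H) = P(H|E)P(E) + P(H|~E)P(~E) for P(H|~E):
p_h_given_not_e = (p_h - p_h_given_e * p_e) / (1 - p_e)

# Since P(H|E) > P(H), conservation forces P(H|~E) < P(H).
assert p_h_given_not_e < p_h < p_h_given_e
print(round(p_h_given_not_e, 3))  # roughly 0.471
```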
Not this specific argument, but this type of argument—the idea of using the etymology of a word to prove something about the concept represented by the word.
In this case the debate is about how people in the past thought about religion. Looking at etymology helps for that purpose. But that’s not the most important part of my argument.
It can also help to illustrate ideas. Taleb basically says that religion1 is a very useful concept. New atheists spend energy arguing that religion2 is a bad concept. That’s pointless if they want to convince someone who believes in religion1. If they don’t want to argue against a strawman they actually have to switch to talking about religion1.
In general, when someone says “We should do A,” that person has freedom to define what he means by A. It’s not a matter of searching for Bayesian evidence. It’s a matter of defining a concept. If you want to define A, saying “A is a bit like B in regard X and like C in regard Y” is quite useful. Looking at etymology can help with that quest.
Overestimating the ability to understand what the other person means is a common failure mode. If you aren’t clear about concepts, then looking at evidence to validate concepts isn’t productive.
It can also help to illustrate ideas. Taleb basically says that religion1 is a very useful concept. New atheists spend energy arguing that religion2 is a bad concept. That’s pointless if they want to convince someone who believes in religion1. If they don’t want to argue against a strawman they actually have to switch to talking about religion1.
But you could say that the new atheists do want to argue against what Taleb might call a strawman, because what they’re trying to do really is to argue against religion2. They’re speaking to the public at large, to the audience. Does the audience also not care about the factual claims of religion? If that distinction about the word “religion” is being made, I don’t see why Taleb isn’t the one being accused of trying to redefine it mid-discussion.
Does the audience also not care about the factual claims of religion?
If you look at the priorities most people show through their actions, truth isn’t at the top of the list. Most people lie quite frequently and optimize for other ends.
Just take any political discussion and see how many people are happy to be correctly informed that their tribal beliefs are wrong. That probably even goes for this discussion, and you have a lot of motivated cognition going on that makes you want to believe that people really care about truth.
If that distinction about the word “religion” is being made, I don’t see why Taleb isn’t the one being accused of trying to redefine it mid-discussion.
When speaking on the subject of religion, Taleb generally simply speaks about his own motivation for believing what he believes. He doesn’t argue that other people should start believing in religion. Taleb might chide people for not being skeptical where it matters, but generally not for being atheists.
Nearly any religious person will grant you that some religions are bad. As long as the new atheists argue against a religion that isn’t really his religion, he has no reason to change.
I would also add that it’s quite okay when different people hold different beliefs.
I agree with the apparent LW consensus that much of religion is attire, habit, community/socializing, or “belief in belief”, if that’s what you mean. But then again, people actually do care about the big things, like whether God exists, and also about what is or isn’t morally required of them.
I bet they will also take Taleb’s defense as an endorsement of God’s existence and the other factual claims of Christianity. I don’t recall him saying that he’s only a cultural Christian and doesn’t care whether any of it is actually true.
I would also add that it’s quite okay when different people hold different beliefs.
Well, I won’t force anyone to change, but there’s good and bad epistemology.
Also, the kind of Chesterton’s fences that the new atheists are most interested in bringing down aren’t just sitting there, but are actively harmful (and they may be there as a result of people practicing what you called religion1, but their removal is opposed with appeals to religion2).
I don’t recall him saying that he’s only a cultural Christian and doesn’t care whether any of it is actually true.
You take a certain epistemology for granted that Taleb doesn’t share.
Taleb follows heuristics of not wanting to be wrong on issues where being wrong is costly and putting less energy into updating beliefs on issues where being wrong is not costly.
He doesn’t care whether Christianity is true in the sense of caring about analysing evidence about whether Christianity is true. He might care in the sense that he has an emotional attachment to it being true. If I lend you a book, I care about whether you give it back to me because I trust you to give it back. That’s a different kind of caring than I have about pure matters of fact.
One of Taleb’s examples is how, in the 19th century, someone who went to a doctor who would treat him based on intellectual reasoning would probably have done worse than someone who went to a priest.
Taleb is skeptical that you get very far with intellectual reasoning and thinks that only empiricism has made medicine better than doing nothing.
We might have made some progress, but Taleb still thinks that there are choices where the Christian ritual will be useful even if it is built on bad assumptions, because following the ritual keeps people from acting based on hubris. It keeps people from thinking they understand enough to act based on understanding.
That’s also the issue with the new atheists. They are too confident in their own knowledge and not skeptical enough. That lack of skepticism is in turn dangerous, because they believe that just because no study has shown genetically modified plants to be harmful, they are safe.
(thank you for helping me try to understand him on this point, by the way)
This seems coherent. But, to be honest, weak (which could mean I still don’t get it).
We also seem to have gotten back to the beginning, and the quote. Leaving aside for now the motivated stopping regarding religion, we have a combination of the Precautionary Principle, the logic of Chesterton’s Fence, and the difficulty of assessing risks on account of Black Swans.
… which would prescribe inaction in any question I can think of. It looks as if we’re not even allowed to calculate the probability of outcomes, because no matter how much information we think we have, there can always be black swans just outside our models.
Should we have ever started mass vaccination campaigns? Smallpox was costly, but it was a known, bounded cost that we had been living with for thousands of years, and, although for all we knew the risks looked obviously worth it, relying on all we know to make decisions is a manifestation of hubris. I have no reason to expect being violently assaulted when I go out tonight, but of course I can’t possibly have taken all factors into consideration, so I should stay home, as it will be safer if I’m wrong. There’s no reason to think pursuing GMOs will be dangerous, but that’s only considering all we know, which can’t be enough to meet the burden of proof under the strong precautionary principle. There’s not close to enough evidence to even locate Christianity in hypothesis space, but that’s just intellectual reasoning… We see no reason not to bring down laws and customs against homosexuality, but how can we know there isn’t a catastrophic black swan hiding behind that Fence?
Note that probably all crops are “genetically modified” by less technologically advanced methods. I’m not sure if that disproves the criticism or shows that we should be cautious about eating anything.
You changed your demand. If GM crops have fewer mutations than conventional crops, which are genetically modified by irradiation + selection (and have a track record of being safe), this establishes that GM crops are safe, if you accept the claim that, say, the antifreeze we already eat in fish is safe. Requiring GM crops themselves to have a track record is a bigger requirement.
No, I’m saying we need some track record for each new crop including the GMO ones, roughly proportionate to how different they are from existing crops.
But then we look, and this turns into “we haven’t looked enough”. Which can be true, so maybe we go “can anyone think of something concrete that can go wrong with this?”, and ideally we will look into that, and try to calculate the expected utility.
But then it becomes “we can’t look enough—no matter how hard we try, it will always be possible that there’s something we missed”.
Which is also true. But if, just in case, we decide to act as if unknown unknowns are both certain and significant enough to override the known variables, then we start vetoing the development of things like antibiotics or the internet, and we stay Christians because “it can’t be proven wrong”.
We see no reason not to bring down laws and customs against homosexuality, but how can we know there isn’t a catastrophic black swan hiding behind that Fence?
The history here says the African epidemic was spread primarily heterosexually. There is also the confounder of differing levels of medical facilities in different countries.
That aside, which is not to say that Africa does not matter, in the US and Europe the impact was primarily in the gay community.
I recognise that this is a contentious area though, and would rather avoid a lengthy thread.
The point was just that we should be allowed to weight expected positives against expected negatives. Yes, there can be invisible items in the “cons” column (also on the “pros”), and it may make sense to require extra weight on the “pros” column to account for this, but we shouldn’t be required to act as if the invisible “cons” definitely outweigh all “pros”.
Nassim N. Taleb
Opportunity costs?
I would say it should be the one with best expected returns. But I guess Taleb thinks the possibility of a very bad black swan overrides everything else—or at least that’s what I gathered from his recent crusade against GMOs.
True, but not as easy to follow as Taleb’s advice. In the extreme we could replace every piece of advice with “maximize your utility”.
Not quite, as most people are risk-averse and care about the width of the distribution of returns, not only about its mean.
If you measure “returns” in utility (rather than dollars, root mean squared error, lives, whatever) then the definition of utility (and in particular the typical pattern of decreasing marginal utility) takes care of risk aversion. But since nobody measures returns in utility your advice is good.
I am not sure about that. If you’re risk-neutral in utility, you should be indifferent between two fair-coin bets: (1) heads 9 utils, tails 11 utils; (2) heads −90 utils, tails 110 utils. Are you?
Yes, I am, by definition, because the util rewards, being in utilons, must factor in everything I care about, including the potential regret.
Unless your bets don’t cash out as
and
If it means something else, then the precise wording could make the decision different.
It’s not quite the potential regret that is the issue; it is the degree of uncertainty, aka risk.
Do you happen to have any links to a coherent theory of utilons?
I’m not familiar with that usage. Could you point me to a case in which the term was used that way? Naively, if I saw that phrasing I would most likely consider it akin to a mathematical “abuse of notation”, where it actually referred to “the utility of the world in which exists over the otherwise-identical world in which did not exist”, but where the subtleties are not relevant to the example at hand and are taken as understood.
Could you provide an example of this also? In the cases where someone specifies the output of a utility function, I’ve always seen it be real or rational numbers. (Intuitively world-states should be finite, like the universe, and therefore map to the rationals rather than reals, but this isn’t important.)
Um, Wikipedia?
That’s an example of the rank ordering, but not of the first thing I asked for.
The entire concept of utility in Wikipedia is the utility of specific goods, not of world-states.
Hmmm… bet 1, expected utils gained = 10. Bet 2, expected utils gained = 10.
I am not risk-neutral, and so I prefer bet 1; I don’t like the high odds of losing utils in bet 2.
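This preference can be reproduced by running the quoted payoffs through any concave valuation curve (which amounts to treating the stated “utils” as a raw resource, not as already-risk-adjusted utility). A sketch using an exponential, CARA-style curve with an arbitrarily chosen curvature constant:

```python
import math

def u(x, risk_aversion=1 / 50):
    # Concave (exponential/CARA-style) valuation of a payoff x.
    return 1 - math.exp(-risk_aversion * x)

bet1 = [9, 11]
bet2 = [-90, 110]

eu1 = sum(u(x) for x in bet1) / 2   # expected valuation of the narrow bet
eu2 = sum(u(x) for x in bet2) / 2   # expected valuation of the wide bet

# Under concavity the -90 branch dominates, so the narrow bet wins.
assert eu1 > eu2
```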
His point is that the upside is bounded much more than the downside.
Yes, but my point is that this is also true for, say, leaving the house to have fun.
This is not always true (as Taleb himself points out in The Black Swan): in investing the worst that can happen is you lose all of your principal; the best that can happen is unbounded.
I’ve been meaning to ask Less Wrong about Taleb for a while, because this just seems kookish to me, but it’s entirely possible that I just don’t get something.
I feel like it should be pointed out that being kookish and being a source of valuable insight are not incompatible.
Robin Hanson recently wrote a post against being a rationalist. The core of Nassim’s argument is to focus your skepticism where it matters. The cost of mistakenly being a Christian is low. The cost of mistakenly believing that your retirement portfolio is secure is high. According to Taleb, people like the New Atheists should spend more of their time on those beliefs that actually matter.
That’s not self-evident to me at all.
It’s not self-evident, but the new atheists don’t make a good argument that it has a high cost. Atheist scientists in good standing, like Roy Baumeister, say that being religious helps with willpower.
Being a Mormon correlates with certain characteristics, and Mormons can therefore sometimes recognize other Mormons. Scientific investigation found that the markers they use for doing so are markers of being healthy.
There’s some data that being religious correlates with longevity.
Of course those things aren’t strong evidence that being religious is beneficial, but that’s where Chesterton’s fence comes into play for Taleb. He was born Christian so he stays Christian.
While my given name is Christian, I wasn’t raised a Christian and have never believed in God at any point in my life, and the evidence doesn’t move me to start being a Christian. But I do understand Taleb’s position. Taleb doesn’t argue that atheists should become Christians either.
(If there is something called “Chelston’s Fence” (which my searches did not turn up), apologies.)
Chesterton’s Fence isn’t about inertia specifically, but about suspecting that other people had reasons for their past actions even though you currently can’t see any, and finding out those reasons before countering their actions. In Christianity’s case the reasons seem obvious enough (one of the main ones: trust in a line of authority figures going back to antiquity + antiquity’s incompetence at understanding the universe) that Chesterton’s Fence is not very applicable. Willpower and other putative psychological benefits of Christianity are nowhere in the top 100 reasons Taleb was born Christian.
If Christianity lowered the willpower of its members, then it would be at a disadvantage in memetic competition against other worldviews that increase willpower.
Predicting complex systems like memetic competition between different memes over the span of centuries is very hard. In cognitive psychology, experiments frequently invalidate basic intuitions about the human mind.
Trust bootstrapping is certainly one of the functions of religion, but it’s not clear that’s bad. Bootstrapping trust is generally a hard problem. Trust makes people cooperate. If I remember right, Taleb makes the point somewhere that the word “believe” derives from a word that means trust.
As far as “antiquity’s incompetence at understanding the universe” goes, understanding the universe is very important to people like the New Atheists, but for Taleb it’s not the main thing religion is about. For him it’s about practically following a bunch of rituals, such as being at church every Sunday.
I often see this argument from religions themselves or similar sources, not from those opposed to religion. Not this specific argument, but this type of argument—the idea of using the etymology of a word to prove something about the concept represented by the word. As we know or should know, a word’s etymology may not necessarily have much of a connection to what it means or how it is used today. (“malaria” means “bad air” because of the belief that it was caused by that. “terrific” means something that terrifies.)
Also consider that by conservation of expected evidence if the etymology of the word is evidence for your point, if that etymology were to turn out to be false, that would be evidence against your point. Would you consider it to be evidence against your point if somehow that etymology were to be shown false?
In this case the debate is about how people in the past thought about religion. Looking at etymology helps for that purpose. But that’s not the most important part of my argument.
It can also help to illustrate ideas. Taleb basically says that religion1 is a very useful concept. New atheists spend energy arguing that religion2 is a bad concept. That’s pointless if they want to convince someone who believes in religion1. If they don’t want to argue against a strawman they actually have to switch to talking about religion1.
In general, when someone says “We should do A,” that person has the freedom to define what he means by A. It’s not a matter of searching for Bayesian evidence; it’s a matter of defining a concept. If you want to define A, saying “A is a bit like B in regard X and like C in regard Y” is quite useful. Looking at etymology can help with that quest.
Overestimating one’s ability to understand what the other person means is a common failure mode. If you aren’t clear about concepts, then looking at evidence to validate those concepts isn’t productive.
But you could say that the new atheists do want to argue against what Taleb might call a strawman, because what they’re trying to do really is to argue against religion2. They’re speaking to the public at large, to the audience. Does the audience also not care about the factual claims of religion? If that distinction about the word “religion” is being made, I don’t see why Taleb isn’t the one being accused of trying to redefine it mid-discussion.
If you look at priorities of most people that they show through their actions, truth isn’t on top of that list. Most people lie quite frequently and optimize for other ends.
Just take any political discussion and see how many people are happy to be correctly informed that their tribal beliefs are wrong. That probably even goes for this discussion: you have a lot of motivated cognition going on that makes you want to believe that people really care about truth.
When speaking on the subject of religion, Taleb generally just describes his own motivation for believing what he believes. He doesn’t argue that other people should start believing in religion. Taleb might chide people for not being skeptical where it matters, but generally not for being atheists.
Nearly any religious person will grant you that some religions are bad. As long as the new atheists argue against a religion that isn’t really his religion, he has no reason to change.
I would also add that it’s quite okay when different people hold different beliefs.
I agree with the apparent LW consensus that much of religion is attire, habit, community/socializing, or “belief in belief”, if that’s what you mean. But then again, people actually do care about the big things, like whether God exists, and also about what is or isn’t morally required of them.
I bet they will also take Taleb’s defense as an endorsement of God’s existence and the other factual claims of Christianity. I don’t recall him saying that he’s only a cultural Christian and doesn’t care whether any of it is actually true.
Well, I won’t force anyone to change, but there’s good and bad epistemology.
Also, the kind of Chesterton’s fences that the new atheists are most interested in bringing down aren’t just sitting there, but are actively harmful (and they may be there as a result of people practicing what you called religion1, but their removal is opposed with appeals to religion2).
You take a certain epistemology for granted that Taleb doesn’t share.
Taleb follows heuristics of not wanting to be wrong on issues where being wrong is costly and putting less energy into updating beliefs on issues where being wrong is not costly.
He doesn’t care whether Christianity is true in the sense of caring about analysing evidence for whether Christianity is true. He might care in the sense that he has an emotional attachment to it being true. If I lend you a book, I care about whether you give it back to me because I trust you to give it back. That’s a different kind of caring than I have about pure matters of fact.
One of Taleb’s examples is that in the 19th century, someone who went to a doctor who would treat him based on intellectual reasoning would probably have done worse than someone who went to a priest. Taleb is skeptical that you get very far with intellectual reasoning, and thinks that only empiricism has made medicine better than doing nothing.
We might have made some progress, but Taleb still thinks there are choices where the Christian ritual will be useful even if it is built on bad assumptions, because following the ritual keeps people from acting based on hubris. It keeps people from thinking they understand enough to act based on understanding.
That’s also the issue with the new atheists. They are too confident in their own knowledge and not skeptical enough. That lack of skepticism is in turn dangerous, because they believe that just because no study has shown genetically modified plants to be harmful, they are safe.
(thank you for helping me try to understand him on this point, by the way)
This seems coherent. But, to be honest, weak (which could mean I still don’t get it).
We also seem to have gotten back to the beginning, and the quote. Leaving aside for now the motivated stopping regarding religion, we have a combination of the Precautionary Principle, the logic of Chesterton’s Fence, and the difficulty of assessing risks on account of Black Swans.
… which would prescribe inaction in any question I can think of. It looks as if we’re not even allowed to calculate the probability of outcomes, because no matter how much information we think we have, there can always be black swans just outside our models.
Should we have ever started mass vaccination campaigns? Smallpox was costly, but it was a known, bounded cost that we had been living with for thousands of years, and, although for all we knew the risks looked obviously worth it, relying on all we know to make decisions is a manifestation of hubris. I have no reason to expect being violently assaulted when I go out tonight, but of course I can’t possibly have taken all factors into consideration, so I should stay home, as it will be safer if I’m wrong. There’s no reason to think pursuing GMOs will be dangerous, but that’s only considering all we know, which can’t be enough to meet the burden of proof under the strong precautionary principle. There’s not close to enough evidence to even locate Christianity in hypothesis space, but that’s just intellectual reasoning… We see no reason not to bring down laws and customs against homosexuality, but how can we know there isn’t a catastrophic black swan hiding behind that Fence?
The phrase “no reason to think” should raise alarm bells. It can mean we’ve looked and haven’t found any, or that we haven’t looked.
There’s no reason to think that there’s a teapot-shaped asteroid resembling Russell’s teapot either.
And I’m pretty sure we haven’t looked for one, either. Yet it would be ludicrous to treat it as if it had a substantial probability of existing.
A priori, eating most things is a bad idea. Thus the burden is on the GMO advocates to show their products are safe.
Note that probably all crops are “genetically modified” by less technologically advanced methods. I’m not sure if that disproves the criticism or shows that we should be cautious about eating anything.
We should be cautious about eating anything that doesn’t have a track record of being safe.
You changed your demand. If GM crops have fewer mutations than conventional crops, which are genetically modified by irradiation + selection (and have a track record of being safe), this establishes that GM crops are safe, if you accept the claim that, say, the antifreeze we already eat in fish is safe. Requiring GM crops themselves to have a track record is a bigger requirement.
No, I’m saying we need some track record for each new crop including the GMO ones, roughly proportionate to how different they are from existing crops.
Yes, this is different from merely “showing that GMO products are safe”. Because we also have the inside view.
I agree with this.
But then we look, and this turns into “we haven’t looked enough”. Which can be true, so maybe we go “can anyone think of something concrete that can go wrong with this?”, and ideally we will look into that, and try to calculate the expected utility.
But then it becomes “we can’t look enough—no matter how hard we try, it will always be possible that there’s something we missed”.
Which is also true. But if, just in case, we decide to act as if unknown unknowns are both certain and significant enough to override the known variables, then we start vetoing the development of things like antibiotics or the internet, and we stay Christians because “it can’t be proven wrong”.
HIV.
Its worst impact was and is in Sub-Saharan Africa where the “laws and customs against homosexuality” are fully in place.
The history here says the African epidemic was spread primarily heterosexually. There is also the confounder of differing levels of medical facilities in different countries.
That aside, which is not to say that Africa does not matter, in the US and Europe the impact was primarily in the gay community.
I recognise that this is a contentious area though, and would rather avoid a lengthy thread.
The point was just that we should be allowed to weight expected positives against expected negatives. Yes, there can be invisible items in the “cons” column (also on the “pros”), and it may make sense to require extra weight on the “pros” column to account for this, but we shouldn’t be required to act as if the invisible “cons” definitely outweigh all “pros”.
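To make that weighing concrete, here is a toy sketch (all numbers and the safety factor are invented for illustration, not anything Taleb or anyone in this thread proposed): rather than treating invisible cons as automatically decisive, we demand that the visible pros beat the visible cons by some margin.

```python
def worth_acting(pros, cons, safety_factor=1.5):
    """Decide whether to act, given visible considerations only.

    pros, cons: lists of (probability, magnitude) pairs for the known
    outcomes in each column. The safety_factor is a stand-in for
    invisible entries in the "cons" column: the pros must outweigh the
    cons by this margin, instead of unknown unknowns vetoing outright.
    """
    expected_pros = sum(p * u for p, u in pros)
    expected_cons = sum(p * u for p, u in cons)
    # Extra weight on the pros column, not an absolute veto:
    return expected_pros > safety_factor * expected_cons

# A vaccination-style case: large, likely benefit vs. small, unlikely harm.
# Expected pros = 90, expected cons = 5; 90 > 1.5 * 5, so we act.
act = worth_acting(pros=[(0.9, 100.0)], cons=[(0.1, 50.0)])

# A thin-margin case: expected pros = 5, cons = 4; 5 < 1.5 * 4, so we hold.
hold = worth_acting(pros=[(0.5, 10.0)], cons=[(0.5, 8.0)])
```

The design choice is the whole point of the paragraph above: a finite safety factor encodes “require extra weight on the pros to account for invisible cons,” whereas the strong precautionary principle amounts to setting that factor to infinity, which forbids every action.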
This suggests we actually need laws and customs against promiscuity. Or just better public education re STIs.
Sorry for the typo.
I think that Taleb has one really good insight—the Black Swan book—and then he decided to become a fashionable French philosopher...