It’s easy to list flaws. For example, the first paragraph admits a major flaw; and technically, if trust itself is a big part of what you value, then it could be crucially important to learn to “trust and think at the same time”.
Is either of those the flaw he found?
What we have to go on are “fairly inexcusable” and “affects one of the conclusions”. I’m not sure how to sort the claims into a set of more than one conclusion, since they circle around an idea which is supposed to be hard to put into words. Here’s an attempt:
1. Tentative observation: the impressive (actively growing) rationalists have early experiences which fall into a cluster.
2. The core of the cluster may be a breaking of “core emotional trust”.
3. We can spell out a vivid model in which “core emotional trust” blocks some people from advancing, specifically by preventing a skill/activity called “lonely dissent”, and “lonely dissent” is crucial.
4. We can have (harmful, limiting) “core emotional trust” in science, and this example enriches our picture of it, and of how much pretty obvious good “lonely dissent” can do.
5. There is no (known) human or mathematical system in which it is good (excusable, okay, safe) to place “core emotional trust”.
6. “Core emotional trust” can only really be eliminated when we make our best synthesis of available external advice, faithfully follow that synthesis, and finally fail; face the failure squarely and recognize its source; and then continue by making our own methods.
More proposed flaws I thought of while spelling out the above:
- An Orthodox Jewish background “seems to have had the same effect”, but the later narrative attributes the effect to a break with Science. Similarly, the beginning of the post talks about childhood experiences, while the rest talks about Science and Bayescraft. In some ways this seems like a justifiable extrapolation, an attempt to use an observation to take the craft further; but it is still an extrapolation.
- The post uses details to make possibilities seem more real. “Core emotional trust” is a complex model which is probably wrong somewhere. But that doesn’t mean it’s entirely useless, and I don’t feel that’s the flaw.
- The argument that Bayesianism can’t receive our core trust is slightly complex. Its points are good so far as they go, but the jump from there to “so you cannot trust”, full stop, is a bit abrupt.
- It occurs to me that the entire post presupposes something like epistemic monism. Someone who is open to criticism, has a rich pool of critiques, a rich pool of critique-generating habits, and constant motivation to examine those critiques and improve, could potentially have deep trust in Science or Bayescraft and still improve. Deep trust in the social web is a bit different: it prevents “lonely dissent”.
- “Core emotional trust” can possibly be eliminated by methods other than the single, vividly described one at the end of the article. Following the initial example, seeing through a cult can be brought on by other members of the cult making huge errors, rather than by making them oneself.
I suppose that’s given me plenty to think about, and I won’t try to guess the “real” flaw for now. I agree with, and have violated, the addendum: I had a scattered cloud of critical thoughts in order to feel more critical. (Also: I didn’t read all the existing comments first.)