Two comments.
First, I’m not sure whether the right category here is “gears-level explanations”. The point is rather that there is evidence so strong that, even when it comes from a biased source, you are still compelled to believe it. In other words, this is the sort of evidence you would expect to be hard to find if the claim were wrong, even if you intentionally go looking for it. In theoretical computer science, this is exactly what a “proof” is: something which can be believed even when it comes from an adversarial agent.
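A minimal sketch of that sense of “proof” (my own illustration, not part of the original comment; the factorization certificate is just a stand-in example):

```python
# A claimed factorization of n serves as a "proof" that n is composite:
# the check stands on its own, so it does not matter whether the
# certificate came from an honest or an adversarial source.

def verify_composite(n: int, p: int, q: int) -> bool:
    """Accept the claim "n is composite" iff the certificate (p, q) checks out."""
    return 1 < p < n and 1 < q < n and p * q == n

print(verify_composite(91, 7, 13))  # True: a valid certificate is accepted
print(verify_composite(91, 3, 31))  # False: a bogus certificate is rejected
```

A motivated source can only fail the check; it cannot trick it, which is what makes this kind of evidence compelling regardless of where it comes from.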
Second, I think there is an important difference between convincing yourself and convincing other people, namely that a lot of your internal reasoning is non-verbal intuition. When you explain to another person, either you are lucky and both of you share the same intuition, or (as is often the case) you don’t. In the latter case you need to introspect harder and find a way to articulate the reasons for this intuition (which is not an easy task: your brain can store a particular intuition without keeping the whole list of examples that generated it).
I agree that there’s a thing going on with “the evidence is so strong that I update significantly even if it is coming from someone’s motivated cognition”, but I think there’s also something more general going on which has to do with gears-level models.
If we were perfect Bayesians, then there would be no distinction between “the evidence that made us believe” and “all the evidence we have”. However, we are not perfect Bayesians, and logical induction captures some of what’s going on with our bounded rationality.
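A hedged aside (my own worked equation, not part of the original exchange): for an ideal Bayesian the posterior is a function of the total evidence alone, so which piece happened to flip the belief, and in what order the pieces arrived, drops out of the final answer:

$$P(H \mid E_1 \cap E_2) = \frac{P(E_2 \mid H \cap E_1)\,P(H \mid E_1)}{P(E_2 \mid E_1)} = \frac{P(E_1 \mid H \cap E_2)\,P(H \mid E_2)}{P(E_1 \mid E_2)} = \frac{P(H \cap E_1 \cap E_2)}{P(E_1 \cap E_2)}.$$

Updating on $E_1$ and then $E_2$, or the other way around, lands on the same posterior; only the conjunction of all the evidence matters, which is exactly the distinction that collapses for a perfect Bayesian.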
According to my analysis, gears are parts of our model which are Bayesian in that way: we can put weight on them based on all the evidence for and against them, because the models are “precise” in a way which allows us to objectively judge how the evidence bears on them. (Other parts of our beliefs can’t be judged in this way, due to the difficulty of overcoming hindsight bias.)
Therefore, filtering our state of evidence through gears-level models allows us to convey evidence which would have moved us if we were more perfectly Bayesian. We are speaking from a model, describing an epistemic state which isn’t actually our own but which is more communicable.
This is all deliciously meta, because this comment is itself an example of me having some strong intuitions and attempting to put them into a gears-level model to communicate them well. I think there’s a bigger picture which I’m not completely conveying, which has to do with logical induction, bounded rationality, Aumann agreement, justification structures, hindsight bias, motivated cognition, and intuitions from the belief propagation algorithm.
I could be wrong here, but isn’t “intuition” basically “non-gears”? Isn’t “introspect harder” basically “try to turn intuition into gears”?
Maybe? Is the converse also true? Maybe a “gears” model = a model that resides fully in the conscious, linguistic part of the mind and can be communicated to another person with sufficient precision for em to reproduce its predictions, whereas a “non-gears” model = a model that relies on “opaque” intuition modules?