To me too, a mindset of “I am the authority on this topic” on the doctor’s part sounds likely. I would not be surprised if the doctor adopted a rule of “always discuss treatment in person”, since health issues are often very emotional and patients may be ill-informed: meeting in person helps establish trust between doctor and patient, which is essential for handling such situations. This reason doesn’t really apply to the case Zvi presents, but it seems plausible that at least some of the motivation for the doctor’s behaviour comes from a sloppy application of this rule. It seems to me that the doctor (and nurse) dismissed the possibility that someone could actually have a reason for not visiting right now, and then got stuck in their positions.

If the doctor also doesn’t reflect on their role as doctor in a consequentialist way, in some situations they might value shown respect (“If your doctor says you should meet them now, you should meet them now”) more than actual improvements in their patients’ lives.

I wonder how the doctor would react if Zvi’s friend pointed out his motivation for keeping his schedule while actively endorsing the importance of his doctor’s opinion. This should happen in person, as phone communication is even worse at correcting misinterpretations. If I am right, this could reassure the doctor that their value of shown respect is safe, and possibly open them to the point Zvi’s friend is making.

- - -

Apart from this, I am quite distraught by the almost active distrust of their patient’s decisions on the part of this doctor and nurse. If this really is typical of the American medical system, there will be massive associated problems.
[I am unsure whether it makes sense to comment on this post after such a long time, but I think my experience could be helpful regarding the open questions. I am not trained in this subject, so my use of terms is probably off and confounded with personal interpretations.]
My personal experience with arriving at and holding abstruse beliefs can actually be described well by the ideas in this post, if complemented by something like the Multiagent Models of Mind:
For describing my experience, I will regard the mind as loosely consisting of sub-agents that are interconnected and coordinate with each other (as in Global Workspace Theory). In a healthy equilibrium, the agents are largely aligned and contribute to a single global agent. Properties of agents include ‘trust in their inputs’ and ‘alertness/willingness to update’.
Now to my description: for me, it felt as if part of my mind lost some of its input connections from other parts, which increased its alertness (something fundamentally changed, so predictions must be updated) and also crippled feedback from the ‘global opinion’. This caused the affected sub-agent to drift, as it updated on messy/incomplete input while not being successfully realigned by the other sub-agents. After some time, the impaired sub-agent would either settle on a new, misinformed model (allowing its alertness to settle) or keep grasping for explanations (alertness staying high, maybe because more alert-type input from other agents remained).
The rest of my mind experienced a sub-agent panicking and then broadcasting eccentric opinions in good faith, while either being unimpressed by contradictions or erratically updating to warped opinions loosely connected to input from the other agents. As the impaired agent felt as if it were updating to contradictions (but wasn’t), the source of the felt alertness (“something is very wrong”) was elusive, and it became natural to just globally adjust to the sub-agent to restore coherence. Thus internal coherence was partially restored at the cost of deviating from common sense (creating an Ugh Field around confrontations with contradicting experiences).
Should my experience be representative, the decision to accept a delusional idea is not based solely on it being optimal for describing global sensory input. Instead, one of the sub-agents does not properly update to global decisions, but still dominates them whenever it is active, since all the other agents do keep updating*. In this view, the delusion actually is the best explanation of the sensory input, conditioned on the impaired sub-agent being right.
*) There should be some additional responses, like generally decreasing the ‘trust in input’ or possibly recognizing the actual source of the problem. The latter would require confronting the Ugh Field, which should take a lot of effort.
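The dynamic described above can be sketched as a toy simulation. To be clear, this is my own simplification and not anything from the post: the scalar “opinions”, the Gaussian noise source, and the realignment rule are all assumptions for illustration. One sub-agent loses its feedback connection and drifts on noisy input, while the healthy agents keep realigning toward the global mean, which the drifting agent keeps pulling along with it:

```python
import random

random.seed(0)

N_AGENTS = 5    # size of the toy "mind"
IMPAIRED = 0    # index of the sub-agent whose feedback connection is cut
STEPS = 50
NOISE = 2.0     # std-dev of the messy/incomplete input it drifts on
FEEDBACK = 0.3  # how strongly healthy agents realign toward consensus

# Each agent holds a single scalar "opinion". Healthy agents are pulled
# toward the global mean each step; the impaired one only receives noise
# and is never realigned by the others.
opinions = [0.0] * N_AGENTS

for _ in range(STEPS):
    mean = sum(opinions) / N_AGENTS
    for i in range(N_AGENTS):
        if i == IMPAIRED:
            opinions[i] += random.gauss(0, NOISE)           # drift on noisy input
        else:
            opinions[i] += FEEDBACK * (mean - opinions[i])  # realign to consensus

print(opinions)
```

With the feedback term applied to every agent, all five opinions would simply stay put; cutting one agent’s feedback makes it the only source of movement, so the healthy agents end up trailing it, which is the “restoring coherence at the cost of common sense” the text describes.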
It seems the text of point 6 got lost somehow, so I will quote it from the original post:
The fable of the rational vampire. (I wish I had a link to credit the author). The rational vampire casually goes through life rationalising away the symptoms – “I’m allergic to garlic”, “I just don’t like the sun”. “It’s impolite to go into someone’s home uninvited, I’d be mortified if I did that”. “I don’t take selfies” and on it goes. Constant rationalisation.
I really like this summarized treatment of the reasons. While reading, it felt as if the point of Many Maps, Lightly Held gained momentum in some way. I think this helped me align my ‘gut feeling’ with my understanding.
I will try to focus on the “compose a satisfying, useful, compact, and true model of what questions are” aspect. To reduce the problem to something more manageable, I will consider the thought process while questioning and exclude social and linguistic aspects.

In short, my model proposal:
- While thinking, we use ‘frameworks’ (expectations/models/concepts/..)
- When thinking inside a framework, we are able to notice gaps and inconsistencies, which feels unnerving to confusing
- This causes us to search for a solution (filling the gap, fixing the inconsistency, replacing the framework), which is the act of asking a question
(- The nested, interacting, fuzzy and changing ‘frameworks’ make everything complicated.)
In long: Aiyen answered “It’s a noticed gap in your knowledge”, which I would like to build on: it seems to me that questions are only possible when there is some expectation/model/concept in my mind to find the gap in.

As no better term comes to mind, I will use *framework* for the expectation/model/concept the question stems from. One can imagine ‘framework’ to refer to a mental picture of some part of reality.

Now it seems to me that while thinking inside a framework, one can notice gaps or inconsistencies in it (this strongly reminds me of ‘Noticing Confusion’ from the Sequences), which feels unnerving (if clear) or confusing (if vague). The search for a fix to the gap in the framework would then be what we call asking a question.

When doing this in a social setting, asking a question tells others that help (in some sense) is being asked for, and reveals something about the framework in use (which has many implications for social interaction).
- I think the term ‘stupid question’ is usually used when one thinks the asking person is using an unsuitable framework altogether. It doesn’t refer to the question itself, but to the fact that ‘basic understanding’ (the ‘proper framework’) seems to be missing, and thus answering the question would be pointless.

Usefulness and Summary
Although this model of questions seems quite compact and true to me, at this point it doesn’t help with moving from “Unknown Unknown” to “Known Unknown”. Pointing out that confusion plays a big role is already part of the Sequences.

Apart from hiding everything complicated behind the term ‘framework’, the main aspect of my model is the claim that questions always, by definition, originate from ‘inside their box’ and are a quest to look outside of it.
Our quest consists of the simplest operations, each one worthy of examination. We cannot build towers of thought without a solid foundation. We cannot build better tools if we don’t know how our current tools operate, and it’s often good to bootstrap by using our tools on themselves.
To improve our tools of thinking, a better understanding of questions and their behaviour surely is useful.

In my usual way of thinking, the frameworks I use are fuzzy and ever-changing, which makes it hard to pin down and recognize confusion. This problem can be approached by thoroughly and consciously choosing one’s framework of interest. One would expect this to take a lot of mental work/time, but in exchange to be a more robust way to improve frameworks (this sounds a lot like the “System 2” way of thinking from Kahneman’s “Thinking, Fast and Slow”). If it is true that finding gaps in a defined box (framework) is a natural ability of our mind (and the existence of a box a condition for this ability), this could open an approach for improving our tools.

___

Final note: until now I have only read about rationality and certainly do not feel confident in my ability to contribute without erring often. Please point out mistakes that I make or basic ideas that I am unaware of.