[I am unsure whether it makes sense to comment on this post after such a long time, but I think my experience could be helpful regarding the open questions. I am not trained in this subject, so my use of terms is probably off and coloured by personal interpretation.]
My personal experience with arriving at and holding abstruse beliefs is actually well described by the ideas in this post, if complemented by something like the Multiagent Models of Mind:
To describe my experience, I will regard the mind as consisting loosely of sub-agents, which are inter-connected and coordinate with each other (as in Global Workspace Theory). In a healthy equilibrium, the agents are largely aligned and contribute to a single global agent. Properties of agents include 'trust in their inputs' and 'alertness/willingness to update'.
Now to my description: For me, it felt as if part of my mind lost some of its input connections from other parts, increasing its alertness (something fundamentally changed, so predictions must be updated) and also crippling feedback from the 'global opinion'. This caused the affected sub-agent to drift, as it updated on messy/incomplete input while not being successfully realigned by the other sub-agents. After some time, the impaired sub-agent would either settle on a new, misinformed model (allowing its alertness to subside) or keep grasping for explanations (alertness staying high, perhaps because more alert-type input from other agents remained).
The rest of my mind experienced a sub-agent panicking and then broadcasting eccentric opinions in good faith, either unmoved by contradictions or erratically updating to warped opinions only loosely connected to input from the other agents. Since the impaired agent felt as if it were updating on contradictions (but wasn't), the source of the felt alertness ("something is very wrong") was elusive, and it became natural to simply adjust globally toward the sub-agent to restore coherence. Thus internal coherence was partially restored at the cost of deviating from common sense (creating an Ugh Field around confrontations with contradicting experiences).
Should my experience be representative, the decision to accept a delusional idea is not based solely on it being the optimal description of global sensory input. Instead, one of the sub-agents fails to properly update to global decisions, but still dominates them whenever active, since all the other agents do keep updating*. In this view the delusion actually is the best explanation of the sensory input, conditioned on the impaired sub-agent being right.
*) There should be some additional responses, like a general decrease of the 'trust in input', or possibly recognizing the actual source of the problem. The latter would require confronting the Ugh Field, which should take a lot of effort.
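To make the dynamic I'm describing a bit more concrete, here is a minimal toy simulation (all names, numbers, and update rules are my own illustrative assumptions, not anything from the post): a handful of agents repeatedly average their beliefs toward a shared consensus and the actual evidence, while one 'impaired' agent has its feedback connection cut and drifts on noisy input. The healthy agents, because they do keep updating, end up partly adjusting toward the impaired agent rather than toward the evidence, i.e. coherence is restored at the cost of accuracy.

```python
import random

# Hypothetical toy model of the sub-agent story above; every parameter here
# is an illustrative assumption, not a claim about how minds actually work.
def simulate(n_agents=5, steps=200, impaired=0, listen_rate=0.0, seed=0):
    rng = random.Random(seed)
    evidence = 0.0                              # the "true" global sensory input
    beliefs = [rng.uniform(-1, 1) for _ in range(n_agents)]
    for _ in range(steps):
        consensus = sum(beliefs) / n_agents     # the broadcast "global opinion"
        updated = []
        for i, b in enumerate(beliefs):
            if i == impaired:
                # Impaired agent: feedback connection (mostly) cut, so it
                # barely updates toward the consensus and drifts on noise.
                target = listen_rate * consensus + (1 - listen_rate) * b
                target += rng.gauss(0, 0.05)    # messy/incomplete input
            else:
                # Healthy agents keep updating toward evidence and consensus.
                target = 0.5 * evidence + 0.5 * consensus
            updated.append(0.8 * b + 0.2 * target)
        beliefs = updated
    return beliefs

beliefs = simulate()
healthy_avg = sum(beliefs[1:]) / len(beliefs[1:])
# The healthy agents settle noticeably off the evidence (0.0), pulled toward
# the impaired agent's drifting belief.
print(beliefs[0], healthy_avg)
```

The point of the sketch is only that the equilibrium of the healthy agents lies between the evidence and the impaired agent's belief: the group-level "best explanation" is conditioned on the agent that never properly updates.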
It seems the text of point 6 got lost somehow, so I will cite it from the original post:
I really like the summarized treatment of the reasons. While reading, it felt as if the point of Many Maps, Lightly Held gained momentum in some way. I think this helped me align my 'gut feeling' with my understanding.