This is generally a good comment, but I think the views of the original post and your comment are actually pretty similar. For example, seeing the brain as a rational Bayesian agent is compatible with the modular view. One module might store beliefs, another might form new candidate beliefs on the basis of sensory input, another might enforce consistency and weaken beliefs that don’t fit in, and so on. The “rational actor that sifts through [the modules]” could easily be embodied by one or several of the modules themselves. Whether this is a good model is a more complicated question (it certainly isn’t perfect, since we know people diverge from the Bayesian ideal quite regularly), but it is not implausible.
However, even if there are modules that try to form accurate beliefs about some things or even most things (and there probably are), it’s still true that, taken in aggregate, your various brain modules push you toward beliefs that would have been locally optimal in the evolutionary ancestral environment, not necessarily true ones. Many modules push us toward believing things that would be praised and away from things that would be condemned or ridiculed.
It’s too costly to be a perfect deceiver, so evolution hacked together a system where if it’s consistently beneficial to your fitness for others to believe you believe X, most of the time you just go ahead and believe X.
In large realms of thought, especially far mode beliefs, political beliefs, and beliefs about the self, the net result of all your modules working together is that you’re pushed toward status and social advantage, not truth. Maybe there aren’t even any truth-seeking modules with respect to these classes of belief. Maybe we call it delusion when your near-mode, concrete anticipations start behaving like your far-mode, political beliefs.