The neurological reality of human thought and decision-making, and its implications for rationalism.

The human brain is a massively parallel system. The best such a system can do to accomplish anything quickly and efficiently is to have many small portions of the brain compute and submit partial answers, which are then progressively reduced, combined, and cherry-picked. We seem to have almost no direct awareness of this process; we can only infer it indirectly, because it is the only way thought can plausibly work on such slowly clocked (~100-200 Hz), extremely parallel hardware, which consumes a good fraction of the body's nutrient supply.
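
The pattern being described resembles a map-and-reduce computation. Below is a minimal sketch in Python, offered purely as an analogy (the image-row data, the fragment size, and the max-based reduction are illustrative assumptions, not a model of any actual neural circuit): each worker produces a partial answer from the fragment it can see, and a combining step merges those partials into a global answer that no single worker computed.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_answer(fragment):
    # Each "portion" sees only its own fragment of the input
    # and reports a partial result: the brightest value it found.
    return max(fragment)

def combine(partials):
    # Progressive reduction: merge the partial answers into one global answer.
    return max(partials)

image_row = [3, 7, 2, 9, 4, 1, 8, 5]
fragments = [image_row[i:i + 2] for i in range(0, len(image_row), 2)]

with ThreadPoolExecutor() as pool:
    partials = list(pool.map(partial_answer, fragments))

print(combine(partials))  # -> 9: an answer no single worker computed alone
```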

Yet it is immensely difficult for us to think in terms of parallel processes. We have very little access to how the parallel processing in our heads works, and very limited ability to reason about a parallel process in parallel. We are aware only of a serial-looking self-model within ourselves, the model we can most easily inspect, and we misperceive this model as the self: we believe ourselves to be self-aware when we are merely aware of the model we have equated with the self.

People, for the most part, are not discussing how to structure this parallel processing for maximum efficiency or rationality, or applying such a structure to their lives. It is mostly the serial processes that get discussed. The necessary, inescapable reality of how the mind works is largely sealed off from us: we are not directly aware of it, nor do we discuss and share how it works. And with what little is available, we are not trained to think in those terms; the culture trains us to think in terms of a serial, semantic process that utters things like "I think, therefore I am".

This is, in a way, depressing to realize.

But at the same time this realization brings hope: there may be a lot of low-hanging fruit left if the approach has not been well explored. I personally have been trying to think of myself as a parallel system with some agreement mechanism for a long while now. It does seem a more realistic way to think of oneself, in terms of understanding why you make mistakes and how they might be corrected. At the same time, as with any complex framework that 'explains' existing phenomena, there is a risk of being able to 'explain' anything while understanding nothing.

I propose that we try to move beyond the long-standing philosophical model of the mind as a single serial computing entity and instead approach it from the parallel-computing angle. Literature is rife with references to "a part of me wanted", and perhaps we should take this as much more than allegory. Perhaps what happens when you decide to do or not do something is best thought of as a disagreement among multiple systems, with some arbitration mechanism forcing a default action; and perhaps training (the drill-and-response kind, not simply informing oneself) could let us make much better choices in real time, arriving at choices rationally rather than via a tug of war in which regions propose different answers and the one sending the strongest signal wins control.
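
To make the contrast concrete, here is a deliberately crude toy model in Python; the Proposal fields, the two arbitration rules, and the example scenario are all illustrative assumptions, not neuroscience. One arbiter simply lets the strongest signal win, the other weighs proposals on how well-supported they are.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    action: str       # what the subsystem wants to do
    strength: float   # how loudly it "shouts" (raw signal strength)
    evidence: float   # rough estimate of how well-supported the proposal is

def tug_of_war(proposals):
    # Default arbitration: the loudest subsystem wins, regardless of evidence.
    return max(proposals, key=lambda p: p.strength).action

def deliberate(proposals):
    # Trained arbitration: weigh proposals by evidence instead of raw strength.
    return max(proposals, key=lambda p: p.evidence).action

# Hypothetical scenario: fire back a reply, or check the facts first?
proposals = [
    Proposal("reply angrily now", strength=0.9, evidence=0.2),
    Proposal("look up the actual numbers first", strength=0.4, evidence=0.8),
]

print(tug_of_war(proposals))   # -> "reply angrily now"
print(deliberate(proposals))   # -> "look up the actual numbers first"
```

In this picture, the point of drill-style training would be to change which arbitration rule actually gets executed under time pressure, not merely to know that the second rule is better.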

Of course this needs to be done very cautiously. In complex, hard-to-reason-about topics it is easy to slip into fuzzy logic, where each step contains a small fallacy and the argument rapidly diverges until you can prove or explain anything. The Freudian-style id/ego/superego, a simple explanation for literally everything that predicts nothing, is not what we want.