This is a great layperson explanation of the belief propagation algorithm.

However, the phlogiston example doesn’t show how this algorithm is improperly implemented in humans. To show that, you would need an example of incorrect beliefs drawn from a correct model, i.e. good input to the algorithm producing bad output. The phlogiston model was clearly incorrect; as other commenters have pointed out, contemporary scientists were painfully aware of this and eventually abandoned the model. Bad output from bad input doesn’t demonstrate a bug in the implementation, and certainly not the specific bug you mentioned:

we don’t keep rigorously separate books for the backward-message and forward-message

Such a defect would probably not even allow a mouse to be as intelligent as it is.
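For readers unfamiliar with what "separate books" means here, the following is a minimal sketch (my own illustration, not from the original post) of sum-product belief propagation on a chain of discrete variables. The forward and backward messages live in two separate arrays, each computed only from messages in its own direction, and are combined only at the very end to form beliefs. The function name, variable names, and example potentials are all hypothetical.

```python
import numpy as np

def chain_marginals(evidence, pairwise):
    """Sum-product BP on a chain.

    evidence: (n, k) array of local potentials, one row per node.
    pairwise: (k, k) array of pairwise potentials between neighbors.
    Returns (n, k) marginal beliefs, one normalized row per node.
    """
    n, k = evidence.shape
    fwd = np.ones((n, k))  # forward messages into each node (left to right)
    bwd = np.ones((n, k))  # backward messages into each node (right to left)
    # Forward pass: uses only forward messages and local evidence.
    for i in range(1, n):
        fwd[i] = pairwise.T @ (fwd[i - 1] * evidence[i - 1])
        fwd[i] /= fwd[i].sum()  # normalize for numerical stability
    # Backward pass: uses only backward messages and local evidence.
    for i in range(n - 2, -1, -1):
        bwd[i] = pairwise @ (bwd[i + 1] * evidence[i + 1])
        bwd[i] /= bwd[i].sum()
    # The two "books" are combined only here, at belief formation.
    beliefs = fwd * evidence * bwd
    return beliefs / beliefs.sum(axis=1, keepdims=True)

# Hypothetical example: 3 binary nodes, agreement-favoring coupling.
evidence = np.array([[0.9, 0.1], [0.5, 0.5], [0.2, 0.8]])
pairwise = np.array([[0.8, 0.2], [0.2, 0.8]])
print(chain_marginals(evidence, pairwise))
```

The alleged human bug would correspond to letting `fwd` leak into the backward pass (or vice versa), so that evidence gets double-counted; the sketch keeps the two passes strictly disjoint.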