There must be some fundamental difference between how one draws inferences from mental states versus everything else.
Talking about “drawing inferences from mental states” strikes me as a case of the homunculus fallacy, i.e., thinking that there’s some kind of homunculus sitting inside our brains looking at the mental states and drawing inferences. Whereas in reality mental states are inferences.
Really? I don’t see that at all. The same mental state can be both an inference and a premise for the next inference. For example, “I feel really tired lately → Maybe I’m sick” seems pretty straightforward, as does “I am a guy and feel really attracted to other guys → maybe I’m gay”.
You’re thinking of the inference as “I don’t feel affection when I see her face → She’s not my wife”. Whereas, another way to think about it is “Her face looks like [insert description of wife’s face here] → She’s not my wife”.
This objection points largely in the right direction, but I don’t think it’s fair to accuse the view of committing the homunculus fallacy. After all, the very suggestion is that our brains have circuitry that (in effect) performs Bayesian updating, and that neurological damage and psychiatric conditions can cause this circuitry to misbehave. That is a way the brain could have worked. If the view committed the homunculus fallacy, the Bayesian updating machinery couldn’t itself be broken; it could only receive bad input.
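To make the distinction concrete, here is a toy sketch (all numbers are mine and purely illustrative) of the difference between a working Bayesian updater fed bad input and an updater whose machinery is itself broken:

```python
# Toy illustration (not anyone's actual model of the brain): a correct
# Bayesian updater fed bad input vs. an updater that is itself broken.

def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    # Standard Bayes' rule: P(H|E) = P(E|H)P(H) / P(E)
    numerator = p_evidence_given_h * prior
    return numerator / (numerator + p_evidence_given_not_h * (1 - prior))

prior = 0.99  # e.g., "this woman is my wife"

# Bad input: damaged circuitry hands the (intact) updater a wildly wrong
# likelihood ("if she were my wife, her face would surely evoke affection"),
# and a perfectly correct update on that input yields a bizarre conclusion.
print(bayes_update(prior, p_evidence_given_h=0.001, p_evidence_given_not_h=0.9))

# Broken machinery: the update step itself is damaged, e.g. it ignores
# the prior entirely -- a bug in the updater, not in its input.
def broken_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    return p_evidence_given_h / (p_evidence_given_h + p_evidence_given_not_h)

print(broken_update(prior, 0.6, 0.4))  # prior has no effect at all
```

Both failure modes produce strange beliefs, which is exactly why the view can allow broken machinery without positing a homunculus.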
However, as I explain in my comment, we have every reason to believe the brain doesn’t have anything like a Bayesian updating module exercising control over all the other brain modules. Instead, the empirical evidence suggests a much simpler structure in which different brain regions vie to control our actions without arbitration by some master Bayesian updater. Otherwise, one couldn’t explain our inclination to answer wrongly on tests that pit one part of the brain against another, e.g., the Stroop task, where we err in naming the ink color of a word that spells the name of a different color.
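A minimal sketch of the competition picture (every number here is an invented placeholder, not an empirical estimate): two modules each propose an answer, and whichever fires more strongly on a given trial wins, with no arbiter checking which module is actually reliable for the task.

```python
import random

# Toy sketch of Stroop-style competition (illustrative parameters only).
# Task: name the ink color of a printed word. The word-reading module
# answers with the word's *meaning*; the color-perception module answers
# with the actual ink color. No master updater arbitrates: whichever
# module fires with greater strength on a trial controls the response.

def respond(word, ink_color, rng):
    reading_strength = rng.gauss(0.8, 0.3)  # assumed strength distributions
    color_strength = rng.gauss(1.0, 0.3)
    return word if reading_strength > color_strength else ink_color

rng = random.Random(0)
trials = [("RED", "blue")] * 10_000  # incongruent trials: word != ink color
errors = sum(respond(word, ink, rng) == word for word, ink in trials)
print(f"error rate on incongruent trials: {errors / len(trials):.0%}")
```

Even with the "correct" module stronger on average, raw competition produces systematic errors on incongruent trials, which is the kind of behavior a master Bayesian arbiter would be expected to suppress.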
Also, to be pedantic, the mental states aren’t inferences. The mental states merely determine behavior patterns that we can (sometimes) usefully describe as making certain inferences.
You can have a module in a certain state and another module which draws an inference from that. No homunculus needed.
Module A doesn’t “draw an inference” from the state of module B; that would require module A to have a sub-module dedicated to drawing inferences from module B’s output and evaluating its reliability. Module A simply treats the output of module B as an inference of similar weight to the one it itself makes.
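The "similar weight, no reliability check" idea can be sketched as naive log-odds pooling (my own illustration with made-up numbers, not a claim about actual neural circuitry):

```python
import math

# Toy sketch: module A pools module B's verdict with its own at a fixed
# weight, with no sub-module estimating how trustworthy B actually is.

def pool(own_log_odds, other_log_odds, other_weight=1.0):
    # Naive pooling: B's output is taken at face value, like A's own.
    return own_log_odds + other_weight * other_log_odds

def prob(log_odds):
    return 1 / (1 + math.exp(-log_odds))

a = 0.5          # module A mildly favors the hypothesis
b_broken = -4.0  # module B is damaged and reports strong evidence against

# A is dragged to near-certain disbelief, because nothing in the
# architecture asks whether B's signal should be trusted.
print(prob(pool(a, b_broken)))
```

On this picture, a damaged module corrupts downstream conclusions automatically, without any homunculus doing the "inferring."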
But one or more drawing-inferences-from-states-of-other-modules modules could certainly exist without invoking any separate homunculus. Whether they do and, if so, whether they are organized in a way that is relevant here, are empirical questions that I lack the data to address.