This section seems like a thought experiment about how one might want to have ‘non-self-ratifying’* beliefs.
*Beliefs which do not ratify themselves.
This means that, according to his own beliefs, his optimism is untrustworthy.
Not trustworthy. Untrustworthy means we have some reason to believe they are incorrect. Trustworthy means we have some reason to believe they are correct. Randomness has neither property. (But is the gene random?)
However, a logical implication of Beth’s beliefs is that she herself is a stimulus-response machine, and incapable of reasoning about world-representations. Thus, she cannot consistently believe that her specification of stimulus-response machines is likely to be an accurate, logically coherent representation of humans. Her belief-set, then, fails to self-ratify, on the basis that it assigns to herself a level of cognitive power insufficient to come to know that her belief-set is true [with high probability].
The conclusion does not necessarily follow from the premises. Even if it did, one researcher’s modus ponens may be another’s modus tollens.** (For a more direct counterargument, consider the Chinese room argument. Even if humans can’t reason about world-representations (and it’s not clear what that means), that doesn’t mean they can’t run an algorithm which can, even if the humans don’t understand the algorithm.)
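The parenthetical point can be sketched as code (a toy illustration of my own, not anything from the post; the rule strings are invented): an interpreter that only does blind symbol matching (a stimulus-response step) can nonetheless execute a rule set that performs inference over world-representations.

```python
def interpreter_step(facts, rules):
    """Blindly match symbols against rule premises; no 'understanding' needed."""
    derived = set(facts)
    for premise, conclusion in rules:
        if premise in derived:
            derived.add(conclusion)
    return derived

# Hypothetical rules encoding inference over representations of the world.
rules = [
    ("socrates is human", "socrates is mortal"),
    ("socrates is mortal", "socrates will die"),
]

facts = {"socrates is human"}
for _ in range(len(rules)):  # the runner needn't grasp what the rules mean
    facts = interpreter_step(facts, rules)

print("socrates will die" in facts)  # True: the executed algorithm derived it
```

The interpreter itself is a pure stimulus-response device, yet the algorithm it runs derives new claims from a world-representation, which is the Chinese-room-style gap the comment points at.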
There is a possible misinterpretation of self-ratification that says: “You cannot assume a belief to be true in the course of refuting it; the assumption would then fail to self-ratify”.
Related:
“You cannot assume a belief to be true after it has been refuted.”
“You cannot assume a belief is true merely because you have shown it is not false.”
“You cannot assume something is true merely because you have shown it to be so, because you are fallible.”
Since this applies to past behavior as well, a further implication is that the things he reads in other blogs are, for the most part, false.
Unless the blogs he reads are filtered in some fashion. (Perhaps he has some discernment, or cleverly reads blogs only for the first X posts/years, while bloggers are honest, or by some other means detects their treacherous turn that mirrors his own.)
Suppose Phyllis believes that the physical world exists, but that minds don’t exist. That is, there are not entities that are capable of observation, thought, etc. (This is a rather simple, naive formulation of eliminative materialism.)
Every time this post mentions a hypothetical about how humans/human minds work, I wonder ‘what does this mean?’ How can this belief be refuted as ‘un-ratified’ (which seems to be a form of logical consistency; what other forms are there?) when it isn’t clear what it means?
there are not entities that are capable of observation
From HPMOR:
When you walked through a park, the immersive world that surrounded you was something that existed inside your own brain as a pattern of neurons firing. The sensation of a bright blue sky wasn’t something high above you, it was something in your visual cortex, and your visual cortex was in the back of your brain. All the sensations of that bright world were really happening in that quiet cave of bone you called your skull, the place where you lived and never, ever left. If you really wanted to say hello to someone, to the actual person, you wouldn’t shake their hand, you’d knock gently on their skull and say “How are you doing in there?” That was what people were, that was where they really lived. And the picture of the park that you thought you were walking through was something that was visualized inside your brain as it processed the signals sent down from your eyes and retina.
“Not trustworthy. Untrustworthy means we have some reason to believe they are incorrect. Trustworthy means we have some reason to believe they are correct. Randomness has neither property. (But is the gene random?)”
If I tell you the sun’s mass is 1.6 * 10^43 kg, but I also tell you I generated the number using a random number generator (not calibrated to the sun), that is an untrustworthy estimate: the RNG wouldn’t be expected to get the right answer except by accident.
A stopped clock is right twice a day, but only that often, so it’s untrustworthy.
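The RNG point can be made numerically (a toy sketch of my own; the log-uniform range 10^20 to 10^50 kg and the factor-of-ten tolerance are arbitrary assumptions):

```python
import random

TRUE_SOLAR_MASS = 1.989e30  # kg, the actual value

def rng_estimate():
    # An RNG "not calibrated to the sun": log-uniform over 30 orders of magnitude
    return 10 ** random.uniform(20, 50)

random.seed(0)
trials = 100_000
hits = sum(
    0.1 * TRUE_SOLAR_MASS <= rng_estimate() <= 10 * TRUE_SOLAR_MASS
    for _ in range(trials)
)
# Even granting a full factor of ten in either direction, only about
# 2 of the 30 orders of magnitude count as "right", so roughly 7% of draws hit.
print(f"hit rate: {hits / trials:.3f}")
```

Like the stopped clock, the RNG is sometimes right, but not often enough to lend its outputs any trust.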
“The conclusion does not necessarily follow from the premises.”
Huh? If Beth’s brain can’t reason about the world then she can’t know that humans are stimulus-response engines. (I’m not concerned with where in her brain the reasoning happens, just that it happens in her brain somewhere)
I was going to object a bit to this example, too, but since you’re already engaged with it here I’ll jump in.
I think reading these theories as saying humans are “just” stimulus-response engines strawmans some of them. I feel similarly about the mental-nonrealism example. In both cases there are better versions of these theories that aren’t so easily shown to be non-self-ratifying, although I realize you wanted simple versions here for illustrative purposes. It’s just a complication that when only the “worst” version of a class of theories serves as the example, readers are likely to object without noticing that the critique is isolated to that worst version.
**Perhaps Beth isn’t human.