Looking carefully at how other people speak and write, I find certain cognitive concepts that simply don’t make sense to me, the way that it “makes sense” to me that red is “hot” and blue is “cold”. I wonder if the reason I can’t understand them is that my consciousness is somehow “simpler”, or less “layered”, although I don’t really know what I mean by that; it’s just a vibe I get. Here are some examples:
I don’t understand the concept of “internal monologue”.
I don’t understand the difference between “imagine”, “picture”, “think of”, “think about”, “consider”, etc.
I don’t understand the concept of “trust your gut/heart” in a decision. If I am presented with two options, I do not perceive one of them as labeled “gut/heart says ‘do this’ ”. Sometimes I perceive one of the options as “jumps to mind instinctively as the Thing People Do In This Situation” or “the kind of thing a book protagonist would do” instead, but I don’t know if either of those is what people mean by trusting your gut/heart. Maybe they just mean to think about it.
I don’t understand the concept of “choose to believe X”. I believe stuff because I have a certain sensory experience. I cannot consciously change the probabilities my brain assigns to things. Maybe people just mean “act as if you believe X” and have belief-in-belief-type confusions.
I have never received a satisfactory explanation of the difference between “annoyed” and “frustrated”, even though people tend to insist that there is a difference and that they’re not just synonyms.
I experience unusually strong placebo and nocebo effects (at least compared to others around me).
I feel a bit confused. I generally sort of just feel (and I don’t know exactly what I mean by this) that there is less “structure” separating the “me” part of the Program The Computer-That-Is-My-Brain Is Running from the “not-me” part.
Does any of this make sense?
I don’t understand the concept of “internal monologue”.
I have a hypothesis about this. Most people, most of the time, are automatically preparing to describe, just in case someone asks. You ask them what they’re imagining, doing, or sensing, and they can just tell you. The description was ready to go before you asked the question. Sometimes, these prepared descriptions get rehearsed; people imagine saying things out loud. That’s internal monologue.
There are some people who do not automatically prepare to describe, and hence have less internal monologue, or none. Those people end up having difficulty describing things. They might even get annoyed (frustrated?) if you ask them too many questions, because answering can be hard.
(I wonder how one might test whether or not a person automatically prepares to describe. The ability to describe things quickly is probably measurable, and one could compare that to self-reports about internal monologue. If there were no correlation, that’d be evidence against this hypothesis.)
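As a minimal sketch of what that comparison could look like, with entirely made-up numbers (the participant data and the 1–5 inner-speech scale are hypothetical, just to make the test concrete):

```python
# Hypothetical test of the "automatically preparing to describe" idea:
# does self-reported internal monologue correlate with how quickly
# people can start describing what they are doing?
from scipy.stats import pearsonr

# Made-up data, one entry per participant.
latency_sec = [1.2, 0.8, 2.5, 3.1, 0.9, 2.2, 1.0, 2.8]  # seconds until a description starts
monologue = [4, 5, 2, 1, 4, 2, 5, 1]                     # self-reported inner speech, 1-5 scale

r, p = pearsonr(latency_sec, monologue)
print(f"r = {r:.2f}, p = {p:.3f}")

# The hypothesis predicts a clearly negative r (more inner speech, faster
# descriptions); an r near zero would be evidence against it.
```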
This tracks for me. I was explicitly taught, as a small child, to be ready to explain what I was doing at all times. Failure to have a ready and satisfactory answer to “what are you doing?” was treated as strong evidence that I was idle (or up to no good!) and should be redirected to do something explicable instead.
(And today, if a friend asks me “how are you?” as a sincere question rather than a casual politeness, it sometimes locks up my cognition for a few seconds as I scramble to introspect enough to come up with a good answer...)
Have you read Generalizing From One Example and the Typical Mind Fallacy stuff? (That won’t directly answer all your questions, but the short answer is that people just vary a lot in what their internal cognitive processes are like.)
“choose to believe” is, in my experience, about situations where what is true depends on what you believe: your actions affect what is true, so your beliefs affect your motivation to act, which affects what you can justifiedly believe. “I choose to believe that humanity is good” is a common one, which, if a lot of people choose it, will in fact be more true. “I choose to believe I can handle this problem” is another, where again the truth in question is (partially) downstream of your beliefs. see also https://www.lesswrong.com/posts/8dbimB7EJXuYxmteW/fixdt
Even in situations where my beliefs affect my actions, those beliefs are not choices. If I notice that holding a certain belief would make me act in a way that gives me more utility, then that observation instead becomes my motivation to act as if I have that belief.
“act as if you hold a belief” and “hold a belief for justified reasons” aren’t the same thing; the latter seems to me to produce higher-quality actions if the belief is true. e.g., compare (there’s a rough formal sketch after this list):
believing [someone cares about you if-and-only-if you care about them, AND you care about them if-and-only-if they care about you, AND they don’t care about you now, AND you don’t care about them, AND (you will act as if they care about you now ⇒ you will act as if you care about them) ]
vs believing [someone cares about you if-and-only-if you care about them, AND you care about them if-and-only-if they care about you, AND (you care about them ⇒ they care about you)]
vs believing [someone cares about you if-and-only-if you care about them, AND you care about them if-and-only-if they care about you, AND you don’t care about each other ever]
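a rough propositional sketch of the three (the B labels, the act(·) shorthand, and the flattening-out of time are mine, not from anywhere upthread): write T for “they care about you”, Y for “you care about them”, act(X) for “you act as if X”, and collapse the doubled if-and-only-if into a single biconditional:

```latex
\begin{align*}
B_1 &:\; (T \leftrightarrow Y) \land \lnot T \land \lnot Y \land (\mathrm{act}(T) \Rightarrow \mathrm{act}(Y)) \\
B_2 &:\; (T \leftrightarrow Y) \land (Y \Rightarrow T) \\
B_3 &:\; (T \leftrightarrow Y) \land \lnot T \land \lnot Y \text{ (at all times)}
\end{align*}
```

with that shorthand, “the first one” below is B_1, the second B_2, the third B_3.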
in the first one, your innate trust-falls on they-care-about-you will be less reliable, your caring for them will show little hints wherever you aren’t a good enough actor, etc. if neither of you picks up on it, the first can emulate the second, and thereby produce a world where the second becomes a past-justified true belief. but if you’re able to instead reliably make the second a future-justified true belief, then you can avoid the first collapsing into the third. (I’m eliding some details about what the uncertainty between these beliefs looks like, and about what happens if you’re both uncertain about which of the three statements is true, which makes things a lot more complicated.)
if you have multiple conflicting beliefs you can hold for justified reasons (because what is true comes after the picking between the beliefs), then there can be situations where it’s objectively the case that one can choose the beliefs first, and thereby choose actions. maybe your thoughts aren’t organized this way! but this is what it seems to me to mean when someone who is being careful to only believe things in proportion to how likely they are (aka “rational”) still gets to say the phrase “I choose to believe”. I also think that people who say “I choose to believe” in situations where it’s objectively irrational (because their beliefs can’t affect what is true) are doing so based on incorrectly expecting their beliefs to affect reality: e.g., “I choose to believe in a god who created the universe” cannot affect whether that’s true, but is often taken to. (“I choose to believe in a god that emerges from our shared participation in a religion” is entirely rational; that’s just an egregore/memeplex/character fandom.)
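a toy version of the fixed-point picture from the FixDT link (the particular function below is mine, purely illustrative): suppose your credence p that you’ll handle a problem feeds back into the actual chance f(p) of handling it. wherever f(p) = p, the belief is exactly as likely as it claims to be, so it can be held for justified reasons:

```latex
\begin{align*}
f(p) &= 3p^2 - 2p^3 \\
f(p) = p \;&\Longleftrightarrow\; p\,(2p - 1)(p - 1) = 0 \;\Longleftrightarrow\; p \in \{0,\ \tfrac{1}{2},\ 1\}
\end{align*}
```

both p = 0 and p = 1 are stable self-fulfilling beliefs here (f′(p) = 6p − 6p² stays below 1 at both), so a careful believer genuinely faces a choice between two justified beliefs, and picking one is picking the corresponding actions.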
I have never received a satisfactory explanation of the difference between “annoyed” and “frustrated”.
Frustrated is usually when you keep trying to do something, it’s not working, you’re annoyed about that, and you want to give up.
Annoyed is more general, like “this thing peeves me, I want this to stop gah”
So “frustrated” is what we call “annoyed” when it comes from repeatedly failing to do something?
Yeah, pretty much. Frustration maybe also has a stronger valence.