Along the same lines as TurnTrout, I was wondering about the abstraction versus the specific situation. I am not asking that anyone share anything they would not be comfortable with. However, I do think that abstracting away from oneself in the analysis can just be another one of the protection mechanisms that allow us to appear to be making progress while still avoiding the underlying truth driving our behaviors.
That said, I think Sara offers some very good items to consider.
Okay, this next bit is not directly related but seems implicit in the post, and other posts I’ve read here. Does the LW community tend to see the human mind and “person” as a collection of entities/personalities/agents/thinking processes? Or am I jumping to some completely absurd conclusion on that?
There are some LWers who think that way, and others who don’t. (Among the people who find it a useful model, AFAICT it’s usually treated more as a hypothesis to consider, and/or a fake framework that is sometimes useful. This sequence is a fairly comprehensive introduction.)
Thinking in terms of internal parts is a mental model used by a good portion of the LW community that’s interested in self-improvement techniques. You need it for the Internal Double Crux technique that CFAR teaches.
Yet it’s not the only model out there. I personally would rather do a version of Leverage’s belief reporting, which assumes that I as a whole either hold a belief or don’t, than do parts work, unless I believe that a specific belief I can identify is the issue.
As far as abstraction goes, I think it’s a key feature of self-introspection. If you are mentally entangled with the part that you are introspecting, you won’t see it clearly.
A lot of meditation is about reaching a mental state where you can look at your thoughts without being identified with them.
Does the LW community tend to see the human mind and “person” as a collection of entities/personalities/agents/thinking processes?
In the extreme form you asked about:
I’m skeptical, and unclear on how it would be empirically tested.
Not aiming for IIT, but seeing how it could be true:
The benefit of taking time to think may not be explicit focus and consideration. Rather than holding all the relevant information in our minds at once, we think unconsciously, or about other things, and then make connections, because it takes time to reload the mental workspace in order to visit all the relevant areas.