Is anybody interested in enactivism? Does anybody think that there is a cognitivist bias in LessWrong?
The wiki entry you linked is extremely unclear. Can you explain what enactivism is in simple words, using a vocabulary like that of http://splasho.com/upgoer5/ ?
If I’ve understood it correctly, it’s the idea that the way our mind works is severely constrained by our physical form. For example, one of my pet hypotheses is that, since we are bipeds that grow up vertically, we’re conditioned to think that more important things are in a vertically higher position than less important things (our language is littered with such metaphors: superior, inferior, exalted, debased, etc.). It shouldn’t be immediately obvious that things farther from the ground have greater value, but I’ve found it difficult to show to other people that vertical metaphors are metaphors, and that we’d use different ones if our bodies were different.
You might be interested in Metaphors We Live By by Lakoff and Johnson. It explores cognitive metaphors like HAPPY IS UP, HEALTHY IS UP, etc.
Thank you.
Does this matter, though? That’s a question I have about the whole field of embodied cognition.
It keeps our expectations for mutual understanding with alien species in check. A lot of our idioms and mental habits won’t mean anything to them, and vice versa. This already happens between human cultures, but it will happen even more with species that don’t share our biological history. Ultimately, it will compel us to reconsider how much of our thinking is generalizable, and how much is the contingent product of our evolution.
When I get the time, surely. I find cognitive science almost by definition quite unclear; it seems far too young a discipline, with many different goals and theories attaching themselves to the moniker Cognitive Science. From a personal perspective, and from the formal education I’ve received, the cognitivism which I think LessWrong/transhumanists endorse makes me very uneasy, even though I count myself among both.
Bias is a bad word for the core axioms that underlie thinking. When discussing things on LessWrong, I do accept certain axioms as the basis for the discourse.
There are other occasions, talking with other people, where I use other modes. When I attend an NLP seminar, it can happen that there are four meaningful conversational layers active at the same time. It’s highly narrated, and things that are said take their meaning from the narrative and context in which they were said.
One of my first 1-on-1 conversations with an NLP trainer took place in an elevator. I rode it to the 5th floor to go to the toilet. The elevator stopped on the 4th floor, the floor where the seminar was held, and he came in. After assessing the situation he said: “You’re intelligent.” He had just been at the toilet himself, but had walked down from the 5th floor to the 4th to then take the elevator, and now he was on the 5th floor again because I had ridden it there.
On that level the interaction is trivial, but to him I gave the appearance of a low-self-esteem nerd, so him, as a figure of authority, telling me that I’m intelligent was very much targeted at what he thought I needed on an emotional level at that moment.
That style of interaction, where meaningful points usually don’t get made on the most obvious level of the conversation and depend on context, is very different from the kind of intellectual discussion on LessWrong.
I’m not really able to do both at the same time. Both approaches have their uses, but I don’t think it makes much sense to speak in terms of bias. They’re just different frameworks and mental models with different axioms.
The result of such differences is that a lot of the academic literature on a subject such as hypnosis or NLP is bad, because a good NLP trainer habitually communicates on an entirely different layer than an academic does.
And to be clear, I do consider the NLP paradigm to be a form of enactivism.
I’m not familiar with enactivism in particular, but embodied and situated cognition seem like reasonable paradigms. I don’t think they necessarily contradict computationalism or cognitivism, though.
Mayhaps not indeed.