This text reads to me largely as emotional rationalization (in the psychological sense) of a certain biased point of view.
Actually, I’m not an enemy of narrow questions, and likewise I’m not an enemy of the plurality of meanings. The focused, narrow, formal approach is indeed powerful, but it is also restricted: new theories are constructed again and again, stepping outside one narrow framework and into a new one.
Consider a man who has just learned to drink from a certain brown glass. Then he sees a steel mug. They are quite different objects, with different properties, different names, and meanings attached in different linguistic contexts. If he cannot grasp what they have in common, he will not be able to generalize his knowledge at all.
But somehow this trivial observation (whose consequences play a role at every layer of abstraction in thinking) tends to be forgotten when dozens of layers of abstraction are being created. The definition starts to battle the actual meaning until the latter is completely lost, and one begins to rationalize upon those layers of abstraction while common sense whispers: “It’s damn meaningless; it doesn’t help to understand anything.” What happens then? Then it is time to go back to the connected, uncertain world.
There is more. A natural language contains a vast plurality of word meanings, which actually helps us look at things from different angles and learn such commonalities by reading words in different contexts. If you defy this reality of natural language and human thinking, you risk becoming isolated in bubbles of extremely precise meanings that no one understands except the Chosen Ones. It is already hard to extract meaning (read: “ideas”) from books whose narrative is formalized too much. So, to make full use of people’s knowledge, it may not be useful to be biased toward narrowness, which can disconnect people’s knowledge and prevent understanding.
So to me personally, it’s not a virtue to put the narrow approach on a pedestal. Whatever thinking trick humanity has come up with, as long as it works well, it is rational to me to use it in the right situation. And you can still go deeper and be as precise as you want, if it proves worthwhile in (how ironic) a precise way.
I agree that a lower-level model is not necessarily a more relevant one. I also think that reductionism is a tool that can be relevant in certain contexts, like any other tool. In other contexts, it may not be.
The latter may seem a bit rude, and I apologize. But it aims at what I see as the real problem here: you started a human conversation, restricted it from the start, and then generalized your observations while failing to grasp anything about, at the very least, liking a beach. That looks like rationalization to soften your failure to understand another human.
I guess you are aware of what is called emotional intelligence. Every piece of experience we have can start a plethora of processes in our minds, many of which can be understood not through reduction and plain axiomatized logic, but through emotional attunement with your interlocutor, using some intuition and “fuzzy thinking.” Then you would be able to understand what the beach really means to your mom through a collection of metaphors, stories, and reflection upon them. And you could make some accurate predictions about your mom! From the start, it was a matter of communication and sharing experience, right? Humanity has mastered that art, and it is obviously not restricted to a reductionist way of thinking.
To me, it does not seem really rational to omit a wide spectrum of human knowledge from different areas of life just because “you want to believe.” You could have thousands more points of view that are more relevant to the phenomenon of liking a beach. How can you possibly infer that reductionism can still be applied to replicate “the beach experience” for your mom without ever being exposed to any point of view outside the reductionist’s worldview box? There is plenty of knowledge at different levels of system description, and you are not even trying to connect your ideas with existing knowledge.
What we do know is that there is currently no computationally feasible way to simulate large ensembles of elementary particles. Now, if it turns out one day that we cannot fight combinatorial explosion efficiently by growing computational power (and it looks like just that), how practical will this fully “theoretically working reductionism” be?
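To make the scaling concrete, here is a back-of-the-envelope sketch (my own illustration, not from the original discussion): for a system of n two-state particles, the full quantum state vector has 2**n complex amplitudes, so the memory needed to merely store it, let alone evolve it, grows exponentially with n.

```python
# Rough illustration of combinatorial explosion: memory needed to store
# the full state vector of n two-state particles (e.g. spins), which has
# 2**n complex amplitudes at 16 bytes each (double-precision complex).
def state_vector_bytes(n_particles: int) -> int:
    return (2 ** n_particles) * 16

for n in (10, 30, 50, 80):
    gib = state_vector_bytes(n) / 2**30
    print(f"{n} particles -> {gib:.3e} GiB")
# Around n = 30 the vector already needs ~16 GiB; by n = 80 it dwarfs
# any conceivable amount of hardware, long before "large ensembles".
```

Adding hardware helps only linearly, while each extra particle doubles the cost, which is the practical core of the objection above.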
Ultimately, your reasoning looks great only because it is itself reduced. For example, you reduced your possible ways of understanding “why mom likes a beach” from the many accessible ones (emotional intelligence, deep conversation, reflection, the study of psychology) down to just the reductionist way, and then you concluded that it was a matter of wrong application of reductionism, and NOT a matter of lacking knowledge on the subject of communication.
Okay, now I feel like I understand your main point better.
I think I simply have another point of view on the example. My point is that the example itself seems a bit artificial.
The human brain has still not been conquered by reductionist modeling. So we do not yet know whether it is possible to reduce consciousness (we are talking about the feeling of joy, which means the brain and consciousness) to systems and parts without losing crucial properties of consciousness. At any level.
Bearing that in mind, the example seems a bit meaningless in both cases:
- the case where you reduced consciousness by modeling at an adequate level, and
- the case where you reduced consciousness by modeling at too low a level.
I think it is too theoretical to say what you actually did wrong if you don’t know how to do the same thing right (that is, how you would apply reductionism in that very same conversation, but in the right way).
With that said, it looks to me like you could simply omit the real-life example and all its context as poor evidence, leaving the statement about low-level modeling on its own.