Words make us Dumb #1: The “Point”lessness of Knowledge
*** The top critical comment will receive $100 USD in BTC 3 days after this post. Mods: if this reward should be held in escrow, please DM your info ***
This post is the first in a series that examines how the use of human languages to communicate knowledge results in suboptimal thinking. The initial set of topics in this series is listed below:
Words make us Dumb #1: The “Point”lessness of Knowledge
Words make us Dumb #2: You Can’t Fix Misinformation with More Words
Words make us Dumb #3: Whatabout Whataboutism?
Words make us Dumb #4: Coupling of Politics and Decoupling of Consequences
Words make us Dumb #5: Dooming AI Doomerism
Words make us Dumb #6: Algorithm of Life
Words make us Dumb #7: Trust vs. god of corruption
Words make us Dumb #8: In BlockChain we Trust
Words make us Dumb #9: From Belief Soup to Structure. Stop Arguing and Start Aggregating
Words make us Dumb #10: Alien Salvation
Question 1) Isn’t “Words make us dumb” just dumb sensationalist click-bait?
The title of this series is the first exhibit of why “Words make us Dumb” and was selected for its click-baitiness.
Let’s reflect on the general nature of “Thinking” and “Knowledge Communication”. The purpose of this post is to transfer (aka route) knowledge from my mind to your mind in order to improve your thinking. To achieve this, the post must “puncture” your mental filter by being “Pointy”. This dumb title uses words that are intentionally sensationalist to make a large enough impression to puncture your mental filter.
Given the space of all possible titles about how Human Languages cause suboptimal thinking, “Words make us Dumb” is the optimal title due to the impression it induces and that impression’s ability to puncture your mental filter.
Claim 1: “Words make us Dumb” is the optimal title due to the suboptimal nature of human thinking when processing human languages.
Question 2) Aren’t mental filters necessary?
With Human Languages, mental filters are absolutely necessary. Even in an ideal world with no misinformation, the sheer amount of knowledge cannot be processed by a human mind. This is due to:
1) Human Finite Attention Span: i.e. a mind can hold and process only a finite number of ideas at once. Every additional word has a cost in terms of time, energy and mental capacity.
2) Impression Based Reasoning: all minds use impressions as a mental shortcut to help evaluate ideas. Since assessing the relevance of ideas is a difficult problem, impressions end up being used as a shortcut to assess relevance based on the size of the impression (a toy sketch of this filtering follows Claim 2 below).
Claim 2: Human Languages require filters to handle the sheer amount of knowledge due to finite attention span and the cost to process knowledge that human languages impose.
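To make the filter model concrete, here is a minimal toy sketch in Python (my own illustration; the attention budget, word counts and impression scores are made-up numbers). Messages compete for a finite attention budget, and the most impressive ones get processed first:

```python
from dataclasses import dataclass

@dataclass
class Message:
    topic: str
    words: int         # processing cost in words
    impression: float  # how "pointy" the message feels, on an arbitrary 0..1 scale

ATTENTION_BUDGET_WORDS = 2000  # hypothetical finite attention span

def puncture_filter(inbox):
    """Process the most impressive messages first until attention runs out."""
    processed, spent = [], 0
    for msg in sorted(inbox, key=lambda m: m.impression, reverse=True):
        if spent + msg.words > ATTENTION_BUDGET_WORDS:
            continue  # filtered out: not enough attention left for this message
        processed.append(msg)
        spent += msg.words
    return processed

inbox = [
    Message("celebrity gossip", 800, 0.9),
    Message("local sports upset", 700, 0.8),
    Message("1500-word climate report", 1500, 0.3),
]
print([m.topic for m in puncture_filter(inbox)])
# -> ['celebrity gossip', 'local sports upset']; the climate report never gets through
```

Note that the long, low-impression message is never processed, no matter how relevant it might be. That mismatch is the subject of the next questions.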
Question 3) What would have happened if I did not click this post?
This is the second exhibit of why “Words make us Dumb”. If you had not clicked the link, my current attempt at knowledge transfer would have failed. Since I highly value this knowledge, I would need to post it again (and again).
But even if you had clicked it, there are 1000 other things in your life that demand your finite attention, and the significance of this post will fade. Therefore this post will have to be retransmitted again.
So whether you click it or not, this knowledge will need to be retransmitted, and this results in a significant epistemic inefficiency when communicating in human languages.
Another source of inefficiency is the transmission of false beliefs in the form of misinformation. It saturates the communication medium, resulting in less knowledge being properly routed.
Claim 3: Words lack permanence. The words themselves do not fade, but the allocation of attention in the minds of the thinkers does fade, if it ever got there in the first place. This necessitates the retransmission of knowledge, resulting in significant inefficiency when communicating knowledge in human languages.
Question 4) Why is knowledge “Point”less?
I apologize for the pun, but there are 2 meanings behind pointless: 1) “Point”less, i.e. lacking a sharp point, and 2) “Pointless”, i.e. irrelevant. As an example, assume you are a climate scientist with valuable Climate Knowledge trying to communicate that knowledge (the example also works for health experts, environmental experts, economists, etc...). Since your knowledge cannot sufficiently puncture society’s mental filter to become part of their thinking, it is deemed “Point”less (first definition). Also, all your valuable knowledge is irrelevant (second definition) if society does not incorporate it into their thinking to achieve the optimal action.
Claim 4: Knowledge is “Point”less (cannot puncture mental filters) and “Pointless” (irrelevant) since it is not part of society’s thinking.
Question 5) How does “Words make us dumb” cause “Point”lessness of knowledge?
Human Languages are fundamentally a tool to communicate knowledge from one mind to another.
When a mind uses a Human Language to encode knowledge, it commits to the epistemic knowledge routing mechanisms available to Human Languages.
Even if you had a perfect mind where knowledge encoded in words could be processed perfectly (which it can’t, but that will be discussed later), Human Languages will always result in Suboptimal Thinking, since the available epistemic knowledge routing mechanisms are unable to route all knowledge perfectly to every mind.
Claim 5: From a “Point”less perspective, “Words make us dumb” since knowledge cannot be routed perfectly to every mind.
Question 6) Is there any knowledge that can easily puncture society’s mental filters?
Instead of being a climate scientist, assume you are an astronomer who has just discovered a life-threatening asteroid about to hit Earth. This threat will kill millions, displace billions and destroy trillions of dollars. Because an asteroid produces a sizable impression, it will easily puncture society’s filters (aka go “viral”) and become part of their thinking. But Climate Scientists who warn of Climate Change killing millions, displacing billions and destroying trillions are not successful, since Climate Change does not produce the same impression as an asteroid. While an asteroid strike has an impression corresponding with its relevance, many other aspects of life produce impressions that do not match their relevance.
Claim 6: The mismatch in impressions compared to relevance results in knowledge that cannot puncture the mental filter which in turn results in suboptimal thinking.
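As a toy illustration of Claim 6 (the scores below are placeholder numbers, not measurements), compare what an impression-based filter lets through with what a relevance-based filter would let through:

```python
# Made-up (impression, relevance) scores on an arbitrary 0..1 scale.
events = {
    "incoming asteroid":   (0.99, 0.95),
    "climate change":      (0.30, 0.95),
    "celebrity scandal":   (0.90, 0.05),
    "slow economic drift": (0.20, 0.60),
}

TOP_K = 2  # the mental filter only lets a couple of items through

by_impression = sorted(events, key=lambda e: events[e][0], reverse=True)[:TOP_K]
by_relevance  = sorted(events, key=lambda e: events[e][1], reverse=True)[:TOP_K]

print("punctures the mental filter:  ", by_impression)  # asteroid + scandal
print("actually worth thinking about:", by_relevance)   # asteroid + climate change
```

The asteroid punctures both rankings because its impression matches its relevance; Climate Change only shows up in the relevance-based ranking, which is not the one human minds use.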
Question 7: If the reader was interested in that topic, would knowledge still be pointless?
No for the first definition (“Point”less), since it would easily puncture your mental filter, but yes for the second definition (“Pointless”), since it is still irrelevant. As an example, assume that as a reader you are interested in Climate Change and you encountered an article about a town devastated by a Hurricane which Climate Scientists claim was more severe than usual due to Climate Change. How would you use this knowledge? You already knew climate change was happening. You live in an area not at risk for hurricanes. You don’t have actionable knowledge on how to improve the situation. Survey your favorite news source and ask yourself how much of this information is redundant and cannot be used to improve your situation. It is designed to overwhelm you with impressions but not to provide TRUSTED, ACTIONABLE, SIMPLE ways to improve your life and the world.
Claim 7: Knowledge is “Pointless” even for interested readers.
Question 8: Whatabout knowledge routing in a suboptimal society?
Everything discussed above assumes an ideal epistemic environment in our society. But our society is far from ideal. Many corporations and governments conduct Epistemic Intimidation, where knowledge producers face repercussions for disseminating unfavorable knowledge, in an effort to reduce its dissemination. Epistemic Intimidation is not the same as censorship, although censorship can be considered the worst form of epistemic intimidation. Many instances of knowledge are not banned by society outright, but their dissemination results in loss of research grants, job opportunities or social status in the form of cultural system points.
Even if human knowledge consumers had perfect minds with an infinite attention span (which they don’t), their thinking would be suboptimal since knowledge producers may be ineffective in routing knowledge to them.
Claim 8: In suboptimal reality (aka our current existence), Epistemic Intimidation can cause Epistemic Cowardism, which results in knowledge not being properly routed to the designated consumers.
Bonus Question : Won’t AI fix the “Point”lessness of knowledge?
From a knowledge routing perspective, AI will face the following issues:
1) While AI may have a higher Attention Span than Humans, it is still finite. It will suffer from the NON-ZERO costs to process knowledge in Human Language form. Therefore it will still only process a subset of all knowledge.
2) AI will still have to communicate with humans using human languages. Therefore there is a limit to how much it can communicate. Additionally, the AI only fetches information that the human requests. For example, if the human never asks about Climate Change, then the AI may never present them with that knowledge, even though it affects all of us.
3) AI is still vulnerable to Epistemic Intimidation. If the organization training the LLM decides to exclude a subset of knowledge, that information will not be incorporated.
Bonus Claim: Words make AI dumb.
What if we could communicate in an Alien Language that made knowledge infinitely “Point”full by perfectly and effortlessly routing all knowledge to every mind? The first benefit is that I wouldn’t need a dumb title like “Words make us dumb” to route my knowledge to your mind. But you may claim perfect routing is “Pointless” since Human Finite Attention Span prevents processing of infinite knowledge.
What if all knowledge in the Universe spoken in the Alien Language could be processed with ZERO effort? You would claim that learning new Languages (especially Alien ones) is difficult and time-consuming.
What if understanding this Alien Language were as easy as downloading an App on your phone? The distributors of knowledge would still have to learn the Alien Language to ensure their knowledge was properly routed.
What if the Alien Language solved the problem of Epistemic Cowardism? Knowledge Distributors could maintain absolute anonymity while still achieving the highest levels of Trust through robust Epistemic Corruption Defense Mechanisms.
What if the use of the Alien Language resulted in optimal thinking?
Let’s see if your post has successfully overcome my mental filters (at the very least, I clicked). Here’s my reformulation of your claims, as if I had to explain them to someone else.
You need a special effort to grab the attention of humans
Humans can’t process all the words thrown at them and select “impressive” content
You need several tries to transmit knowledge properly
Beyond being impressive, words need to be “relevant” to transmit knowledge efficiently
Words can’t create a perfectly impressive and relevant content
Being very impressive doesn’t guarantee relevance
Content being impressive to you doesn’t make it more relevant to you
This is a toy model; humans also have incentives to shape which content gets thrown around or not
Now that I’ve written the points above, I look again at the “what if” part at the end and say, “oh, so the idea is that human language may not be the best way to transmit knowledge, because what gets your attention often isn’t what lets you learn easily; cool, then what”
Then… you claim that there might be a Better Language to cut through these issues. That would be extremely impressive. But then I scroll back up and I see the titles of the following posts. I’m afraid that you will only describe issues with human communication without suggesting techniques to overcome them (at least in specific contexts).
For instance, you gave an example comparison in impression (asteroid vs. climate change). Could you provide a comparison for relevance? Something that, by your lights, gets processed easily?
You won the reward for criticizing my article. Please DM me your Bitcoin information.
Thank you for taking the time to review my post and comment.
So my core issue (which resulted in the “Words make us Dumb” Series) is that you cannot just provide a document describing the Alien Language and expect people to appreciate its significance. People must first be aware of the significant problems Human Languages cause in terms of Knowledge Routing, Politics, Trust, Misinformation, AI, Climate, etc… Once people are aware, they would be more receptive to the Alien Language.
The last 3 posts describe the Alien Language and how it would solve all the issues described above. But to appreciate the Alien Language, I was hoping to have a discussion about all these issues.
I am not sure I understand your question. Can you elaborate more?
Asteroids and Climate Change both cause Millions of Deaths, Billions Displaced and Trillions in damage. But our psychology would result in significant action for Asteroids but not Climate Change, since the former would cause a significant impression.
But instead of using Human Languages, if we used a framework to objectively assess the significance of every event (Asteroid, Climate Change, AI, Car Accidents, Diseases, etc...), then the impact of impressions would be nullified.
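As a rough sketch of what such a framework might look like (the weights and figures below are illustrative placeholders, not real estimates), significance could be computed purely from expected impact so that impressions play no role:

```python
def significance(probability, deaths, displaced, damage_usd):
    """Expected impact, using assumed dollar weights per death and per displacement."""
    DEATH_WEIGHT, DISPLACED_WEIGHT = 1e7, 1e5  # placeholder weights, not real valuations
    return probability * (deaths * DEATH_WEIGHT
                          + displaced * DISPLACED_WEIGHT
                          + damage_usd)

# Same order-of-magnitude harms for both events (millions dead, billions displaced,
# trillions in damage), so the scores come out identical.
asteroid = significance(1.0, deaths=2e6, displaced=2e9, damage_usd=3e12)
climate  = significance(1.0, deaths=2e6, displaced=2e9, damage_usd=3e12)

print(asteroid == climate)  # True: the score ignores how viral each event feels
```

Under any scoring of this kind, events with the same expected harm get the same priority, no matter how different the impressions they produce are.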
Just leaving this here on the off chance this comment counts and I can claim the bitcoin.
This comes across as total nonsense.
I appreciate your response. You have to pick a claim and write something critical against it to count for the reward. Can you elaborate on which claim is “total nonsense”?