Against intelligence


Two years ago I wrote some pragmatic arguments for why “human-like AI” would be hard to develop and fairly useless. My focus was on the difficulty of defining a metric for evaluating it and on the cost-effectiveness of human brains.

But I think I failed to stress another fundamental point, which is that “intelligence” as commonly conceived may not be critical to acquiring knowledge about, or power over, the reality external to our own bodies.

I – Conceptually intelligent

If I’m allowed to psychoanalyse just a tiny bit, the kind of people who think a lot about “AI” are the kind of people who overvalue conceptual intelligence, because it’s the best part of their own thinking.

A rough definition of conceptual thinking: the kind of thinking that can easily be put into symbols, such as words, code or math. I’m making this distinction because there are many networks in the brain that accomplish unimaginably complex (intelligent) tasks yet are very much nonconceptual. Most people who think about “AI” don’t view these functions as “worth imitating” (rightfully so, in most cases).

There’s a ton of processing dedicated to going from “photochemical activation of retinal neurons” to “the conscious experience of seeing”. But we use digital cameras to record the world for computer vision, so we don’t have to imitate most of the brain processes involved in sight.

I should make a note here that everything in the brain is interconnected. We don’t see a world of light and shadow, we see a world of objects. Conceptual thinking gets involved in the process of seeing at some point, detects various patterns, and overlays concepts on them to focus and enrich perception. Similarly, hearing, smell and memory all play a role in “seeing”.

But I think it’s permissible to think of an abstract “conceptual thinking” complex in the brain, which “AI” is trying to replicate, and to treat the other bits, such as senses, motor control or homeostasis maintenance, as separate from it: prerequisites for running the platform that allows for conceptual thinking. I realize this is a naive dualist view that kind of puts conceptual thinking on a pedestal, but it’s useful for conveying some ideas.

That being said, it’s important to note that conceptual thinking in humans does have access to the rest of the brain. Language can query the limbic system and say something like “I am feeling sad”. This is a very narrow abstraction that doesn’t encompass the changes in neuronal firing over billions of synapses that constitute “being sad”. But I don’t have to convey all of that to another person, since they themselves know what sadness feels like.

Similarly, conceptual thinking can be used to control the nonconceptual parts of the brain. I can’t “think away pain”, but I can think “Oh, this is my leg muscles hurting because I just did some deadlifts” and the pain will probably feel “less bad”, or “Oh, this is my leg muscles hurting because I have cancer” and the pain will probably feel “worse”, even if the signals coming from the leg muscles are identical in both cases.

II – Thinking reality away

Moving onwards, it’s important to remember that our understanding of the world is often limited by processes that can’t be sped up with conceptual thinking.

Take the contrived example of physicists running experiments at the LHC. There’s a finite number of experiments you can run in a day, a finite number of hard disks on which you can store a finite amount of data, the gathering of which is limited by a finite number of sensors connected to your computers by a finite amount of wiring.

You can build more LHCs, more hard disks and more sensors, but those require access to the finite supply of rare metals and assembly plants needed to build them, which in turn require raw materials from a finite number of sources… etc.

An experiment might reveal something on a scale from “very useful” to “very useless”, and this will dictate the direction of subsequent experiments. Iterate over this process thousands of times and you’ve got a build-up of scientific knowledge that allows for magnetic toroids to contain fusion reactions and for the building of artery-repairing nanobots, or whatever. All the advancements also lead to improvements down the supply chain: hard disks become cheaper, metal gets easier to mine and refine, assembly plants get faster to build… etc.

Part of this process can be improved with “better conceptual thinking”, i.e. intelligent people or superintelligent “AIs” can help it along. Part of this process is limited by reality itself. You can’t smelt widgets infinitely fast, because it takes time to transport the ingredients, melt and mix them, cast them into shapes, wait for them to cool down… etc. The limitations placed upon processes by reality can be lessened by understanding reality better, but that understanding comes from an iterative experimental process, itself time-constrained by our current limitations in manipulating reality.

In other words, you can have a perfect intelligence analyzing the data, optimizing the supply chains and doing every other form of “thinking” required to advance physics. But the next advancement in our theory of special-low-temperature-weak-nuclear-forces-in-niobium (or whatever) might still boil down to “we have to wait 4 days for the next shipment of niobium to arrive from the DRC, and if the experiments using it don’t yield any data relevant to advancing the theory, we need to wait until next month to get more funding to buy more niobium from the DRC”.

I know that some of you might consider me daft for spending six paragraphs essentially saying “perfect thinking doesn’t solve all problems”, but I think this is a point that fails to sink in for some people, in part due to an education system that entrenches the idea of “thinking” as the sole driver of the world, and sometimes leads to the fallacy of “if only we could think better, every problem would be instantly resolved”.

That aside, the question remains whether solving all “thinking bottlenecks” would leave us with a process of scientific advancement that is somewhat faster than what we have today (a slow road to progress) or exponentially faster (a singularity).

I suspect that our reality falls under the former scenario, where more intelligent agents will speed things up somewhat, but not enough to yield a “singularity”, i.e. exponential scientific advancements that keep compounding.

I base this suspicion in part on the following imperfect narratives:

1. There are much better “thinking tools” which science could already be using in many areas. Computational biology only started using machine learning a few years ago, although the techniques involved had existed for a long time (decades, in the case of the “novel” algorithms used by Horvath for his epigenetic clock, and 4 to 8 years for the algorithms now used to predict protein folding). More importantly, the vast majority of the hiring budget at companies profiting from scientific research is not spent on people who can build and use “thinking tools”, but rather on lab workers, salespeople, marketing, trial funding, etc. The crème de la crème in programming, mathematics, statistics and machine learning work predominantly at tech companies in the advertising and logistics spaces, rather than at companies creating novel drugs, materials, engines and so on.

2. The historical advancement of science seems to indicate that “more intelligence” was not the answer. In the first half of the 20th century, scientific theory was constructed by a very small sample from the niche demographic of “relatively rich European or American men”, consisting of maybe a few hundred thousand candidates. Nowadays the doors are open to virtually everyone with the right aptitudes, increasing our pool of potential scientists to half a dozen billion people: a roughly 10,000-fold increase. Even if 90% of those “potential scientists” lack the early-childhood environment needed to develop in such a direction, and even if discrimination leaves anyone outside the old demographic only a tenth as likely to be hired despite their merit, we are still left with a 100-fold increase (see the back-of-envelope arithmetic after this list). Add to that the fact that there are hundreds of times more working scientists nowadays… and, well, a physicist from 1950 would be forgiven for thinking that current conditions would have led to a “singularity” of sorts in physics, given this exponential increase in intelligence. Yet the last 70 years have brought relative stagnation, with the only reprieve coming from large experimental setups (read: expensive and slow to build). This seems to indicate that “adding intelligence” was not the answer: it was enough to have 100 bohemian thinkers pondering the nature of space-time to come up with general relativity; increasing that number to 100,000 and having the top ones be much smarter did not yield “general relativity but 100 times better”.

3. Human intelligence has been stumbling upon the same scientific ideas since Aristotle and Lucretius, while the structure of the brain has remained relatively unchanged. The “thing” that led to modern scientific advancement seems to be much more related to humanity crossing a barrier in resource exploitation and sheer numbers, one that makes war clearly suboptimal compared to trade and allows most resources to be diverted to things other than subsistence and war. The key to getting here doesn’t seem to be intelligence-related; otherwise one would expect the industrial revolution to have arisen in ancient Rome or Athens, not in the intellectually policed and poorly educated early modern England and Prussia.

4. The vast majority of “good thinkers” (under an IQ/math/language/memory == intelligence paradigm) are funnelled towards internet companies, with no extra requirements, not even a diploma, as long as you have enough “raw intelligence”. Under the efficient-market hypothesis, that would indicate those companies have the most need for them. Yet internet companies have essentially no practical grip on physical reality: they aren’t always engaged in “zero-sum” games, but they are still “competitive”, in that their ultimate purpose is to convince people they want/need more things and that those things are more valuable; they aren’t “creating” any tangible things. On the other hand, research universities and companies interested in exploring the real world seem to care much less about intelligence and much more about credentials.
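As promised in point 2, here is the back-of-envelope arithmetic behind the hundredfold figure, using the rough discounts stated there:

$$\underbrace{10{,}000}_{\text{pool increase}} \times \underbrace{0.1}_{\text{upbringing}} \times \underbrace{0.1}_{\text{discrimination}} = 100$$

Both discounts are deliberately pessimistic round numbers, so if anything the true multiplier should be higher.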

III – Entrenched power

The other idea worth addressing is that “intelligence” could help one achieve power over reality by first achieving power in society. For some people social power is a terminal goal; others would use it to divert societal resources from yachts, watches and war towards more productive goals, such as understanding the natural world.

But power seems to be largely unrelated to intelligence.

A king might have ten sons, and the bravest and smartest of those ten will be chosen to inherit the throne. But the king has 100,000,000 subjects, among whom there are over 1,000,000 braver and smarter than his bravest and smartest son… yet they don’t get selected.
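As a rough sanity check on those numbers, here is a minimal simulation sketch. It assumes, very naively, that “merit” is an i.i.d. standard-normal draw for sons and subjects alike; the constants simply mirror the example above:

```python
import random

random.seed(0)

N_SUBJECTS = 100_000_000  # the king's subjects
N_SONS = 10               # the hereditary selection pool

# "Merit" as an i.i.d. standard-normal draw for everyone.
best_son = max(random.gauss(0, 1) for _ in range(N_SONS))

# Estimate, by sampling, the fraction of the population that outranks
# the best son. The max of 10 i.i.d. draws sits on average at the
# 10/11 quantile, so roughly 1/11 of everyone should beat him.
sample_size = 1_000_000
better = sum(random.gauss(0, 1) > best_son for _ in range(sample_size))
print(f"subjects outranking the best son: ~{better / sample_size * N_SUBJECTS:,.0f}")
```

On a typical run roughly an eleventh of the population outranks the best of the ten sons, i.e. somewhere around nine million of the 100,000,000 subjects, comfortably above the one million claimed.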

This is a naive example of the “local vs global” fallacy. People see intelligent people achieving positions of power and think “ah, intelligence is the key”, when in fact the key is “intelligence + being part of a very small selection pool”, and being part of that selection pool might be the far more important trait.

This entrenched-power bias might be more or less present in various places. The extreme examples are places like India, with its caste system, or Italy, where the same families have controlled most wealth for hundreds or thousands of years. The more “meritocratic” places are areas like the US, where most powerful people come from a rather diverse pool, and where generational wealth is rare and means “having a rich grandparent” rather than “being a direct descendant of one of the families in the senate of the Roman Republic”.

So maybe a superintelligence is born and it can be a 20% better trader than anyone on Wall Street, but that in itself is pretty irrelevant, because the best trader will be one working for a bank like Goldman Sachs, which has influence over the regulators, insider information, control over the exchanges and billions for its traders to play with… and while “being a good trader” is a selection criterion for working at a Wall Street bank, so is “being the kind of street-smart Brooklyn guy that our CEO likes”.

Still, you might say, a superintelligent agent could “mimic” the behaviours which get people to think of them as “part of their tribe”, part of the small selection pool they want to hand over power to. I would beg to differ.

IV – Simulating humans

The broader point, of which entrenched power is just one facet, is that in order to operate in the world you need to understand people: to know them, befriend them, network with them, get them to like you, etc.

People have a very large advantage in understanding other people, beyond perceiving their physical actions and appearance (which are themselves rather hard to mimic, down to the level of pheromonal changes and subtle facial-muscle twitches).

We have an incredibly complex machine that can be used to simulate other people: our brain. This goes back to the idea of our conceptual thinking having the ability to communicate with the rest of our thinking. When asking “What can I do to impress Joe?”, some concepts like “I should wear a blue tie” come to mind, but the reasoning behind those concepts is abstract and involves the ability to empathise with someone like Joe, to literally try and think like Joe.

Given that we still can’t run even a very naive simulation of a worm with a few hundred neurons, simulating a human brain on a computer might well be impossible. Not in the “impossible to comprehend” sense, but in the sense that the number of experiments required to build the simulation, and the resources required to run it, would be prohibitive.

On the other hand, humans are able to simulate other people very accurately by sheer similarity, requiring just a few seconds and a few milligrams of glucose to do so astoundingly well.

See someone prick their finger with a needle and your own finger becomes more sensitive; you “feel” their pain, and the same areas of the brain activate in both of you (at the very rough level visible on an fMRI). Have a friend you’re in constant contact with (e.g. a spouse) and you’ll be able to predict their words better than a phone keyboard app with an exact history of their last 10 years of writing. The apparatus humans use to “understand” other humans is not just a complex probabilistic function based on observing them; rather, it’s an immensely complex simulation that we adjust based on our observations, a simulation we might never be able to run efficiently on a computer.

So one of the key traits required to operate in society, understanding people, might lie outside the reach of any computer-based system for rather pragmatic reasons: not because “computers don’t have souls” or any bullshit like that, but purely because the only environment that can efficiently and accurately simulate people is found in other people.

V – Back to intelligence

Overall, this whole article is my attempt to counter a partially unspoken view held by many “believers” in the efficacy of “AI”, a view that treats conceptual intelligence as the key to unlocking most problems.

On the whole, I don’t believe I can disprove this view, but I think that taking it for granted is often the result of the sort of conditioning that people get through education, rather than of any rational thought on the subject.

It seems very probable that the limits on our power over nature, and on our influence over society, often reside in processes that are either impossible to significantly influence or open only to gradual improvement. And in the areas where reality is not the limiting factor, other humans are, and the human brain might be, almost by definition, the best tool for coordinating other humans.

Granted, I still think that “superintelligence”, be it the whimsical kind envisioned by Bostrom or a more sober take on the subject, is probably economically unfeasible and very difficult to engineer; I point back to my first article on the subject for the reasons why.

But even if we created a machine that was as intelligent as the brightest person, yet able to “think” at a rate thousands of times faster, we might still only get limited improvements in a surprisingly small niche of problems.

While years of social conditioning tell me otherwise, I remain rather unconvinced that intelligence is incredibly useful for, well, anything, with small and pointless exceptions such as chess, Go, or writing articles about the pointlessness of intelligence.

I hope you will forgive me for being rather direct in this article; I am aware that I can’t take a stance here with anything but minute confidence. Heck, the “issue” itself is rather blurry and revolves around vague ideas such as intelligence, progress, knowledge, power and good.

This is the kind of topic where a data-backed answer is impossible to give; it’s nuanced, and each of the thousand subproblems that would clarify it might only be solved with experiments we could never run, due to limitations of time and resources (and potentially ethics).

But if you do have a strong belief in the supremacy of intelligence, I would invite you to question it further: see where it might come from, and look at the observations anchoring it to reality.