Steelman arguments against the idea that AGI is inevitable and will arrive soon

I’m pretty sure that AGI is almost inevitable, and will arrive in the next decade or two.
But what if I’m wrong and the “overpopulation on Mars” folks are right?

Let’s try some steelmanning.

Technological progress is not inevitable

By default, there is no progress. Progress happens in a given field only when dedicated people with enough free time are trying to solve its problems. And the more such people there are, the faster the progress.

Thus, progress towards AGI is limited by societal factors. And a change in those factors could greatly slow down or even halt it.

Most of the past progress towards AGI can be attributed to only two countries: the US and the UK. And both appear to be in societal decline.

The decline seems to be caused by deep and hard-to-solve problems (e.g. elite overproduction amplified by Chinese memetic warfare). Thus, the decline is likely to continue.

The societal decline could reduce the number of dedicated-people-with-enough-free-time working towards AGI, thus greatly slowing progress in the field.

If progress in AI is a function of societal conditions, a small change in the function’s coefficients could cause a massive increase in the time until AGI. For example, halving the total AI funding could move the ETA from 2030 to 2060.
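
Here is a minimal toy model of that sensitivity (all numbers are made up, and `years_until_agi` is just an illustrative function, not something anyone has measured): if yearly progress scales superlinearly with funding, halving the funding far more than doubles the remaining time.

```python
# Toy model, purely illustrative: every number below is made up.
# Assume progress-per-year = k * funding**alpha, and AGI arrives once
# cumulative progress reaches a fixed threshold.

def years_until_agi(funding, required_progress=100.0, k=1.0, alpha=2.0):
    """Years until cumulative progress hits the (made-up) AGI threshold."""
    yearly_progress = k * funding ** alpha
    return required_progress / yearly_progress

base_funding = 3.5  # arbitrary units

print(years_until_agi(base_funding))      # ~8 years  -> "2030" if today is 2022
print(years_until_agi(base_funding / 2))  # ~33 years -> the mid-2050s
```

The exact exponent is unknowable; the point is only that if the relationship is nonlinear, a modest societal shift can translate into decades of delay.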

AGI is hard to solve

Thousands of dedicated-people-with-enough-free-time have been working on AI for decades. Yet there is still no AGI. This indicates that there is likely no easy path towards AGI. The easiest available path might be extremely hard or expensive.

The recent progress in AI is impressive. The top AIs demonstrate superhuman abilities on a wide range of diverse tasks. They also require so much compute that only large companies can afford it.

Like AIXI, the first AGI could require enormous computational resources to produce useful results. For example, for a few million bucks’ worth of compute, it could master all Sega games. But to master cancer research, it might need thousands of years running on everything we have.

The human brain is the only known device that can run a (kinda) general intelligence. Although the device itself is rather cheap, it was extremely expensive to develop: it took billions of years of running a genetic algo on a planet-sized population. Biological evolution is rather inefficient, but so far it is the only method known to produce a (kinda) general intelligence. This increases the probability that creating AGI could be similarly expensive.

Biological evolution is a blind dev who only writes spaghetti code, filled with kludgy bugfixes to previous dirty hacks, which were themselves made to fix other kludgy bugfixes. The main reason the products of evolution look complex is that they’re a badly designed chaotic mess.

Thus, it is likely that only a small part of the brain’s complexity is necessary for intelligence.

But there seems to be a fair amount of necessary complexity. Unlike the simple artificial neurons we use in AI, real neurons seem to perform some rather complex, useful computations (e.g. predicting future input). And even small networks of real neurons can do surprisingly smart things (e.g. cortical columns maintaining reference frames for hundreds of objects).

Maybe we must simulate this kind of complexity to produce an AGI. But that would require orders of magnitude more compute than we use today to train our largest deep learning models. It could take decades (or even centuries) for that much compute to become accessible.
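
A back-of-envelope sketch of why (every input below is a loose assumption of mine, not a measurement): simulating all ~86 billion neurons with a detailed per-neuron model, for a developmental timescale, dwarfs the compute budget of today’s largest training runs.

```python
# Back-of-envelope estimate; all inputs are rough assumptions, not measurements.

NEURONS            = 8.6e10   # ~86 billion neurons in a human brain
FLOP_PER_NEURON_MS = 1e5      # assumed cost of a detailed neuron model per simulated ms
MS_PER_YEAR        = 1000 * 3600 * 24 * 365
SIMULATED_YEARS    = 20       # assume the model must be "raised" for ~20 simulated years

brain_sim_flop  = NEURONS * FLOP_PER_NEURON_MS * MS_PER_YEAR * SIMULATED_YEARS
modern_run_flop = 1e24        # rough order of magnitude of a large deep-learning run

print(f"{brain_sim_flop:.1e} vs {modern_run_flop:.1e} FLOP "
      f"(~{brain_sim_flop / modern_run_flop:.0e}x more)")
```

Under these (very debatable) assumptions, the gap is three to four orders of magnitude.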

The human brain was created by feeding a genetic algo outrageously large amounts of data: billions of years of multi-channel, multi-modal, real-time streaming by billions of agents. Maybe we’ll need comparable amounts of data to produce an AGI. Again, it could take centuries to collect it.
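
The same loose arithmetic applies to data (again, every number is an assumption made purely for illustration): billions of agents streaming sensory data for billions of years add up to a “dataset” incomparably larger than any corpus we train on today.

```python
# Back-of-envelope estimate; all numbers are loose assumptions for illustration.

AGENTS        = 1e9           # "billions of agents" alive at any given time
YEARS         = 1e9           # "billions of years" of evolution
SECONDS_YEAR  = 3600 * 24 * 365
BYTES_PER_SEC = 1e6           # assumed ~1 MB/s of multi-modal sensory stream per agent

evolution_bytes = AGENTS * YEARS * SECONDS_YEAR * BYTES_PER_SEC
modern_corpus   = 1e13        # ~10 TB, rough scale of a large text training corpus

print(f"{evolution_bytes:.0e} vs {modern_corpus:.0e} bytes "
      f"(~{evolution_bytes / modern_corpus:.0e}x more)")
```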

Human intelligence is not general

When people think about AGI, they often conflate human-level generality with the perfect generality of a Bayesian superintelligence.

As Heinlein put it,

A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyse a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently… Specialization is for insects.

Such humans do not exist. Humans are indeed specialists.

Although humans can do some intelligent tasks, they are very bad at most of them. They excel in only a few fields. This is true even of such exceptional generalists as Archimedes, Leonardo, Hassabis, Musk, etc.

And even in those fields where humans excel, simple AIs can beat the shit out of them (e.g. AlphaGo versus Lee Sedol).

The list of intelligent tasks humans can do is infinitesimally small in comparison to the list of all possible tasks. For example, even the smartest humans are too stupid to deduce Relativity from a single image of a bent blade of grass.

This means that a truly general intelligence has never been invented by nature. That increases the probability that creating such an intelligence could require more resources than it took to create human intelligence.

Fusion is possible: there are natural fusion reactors (stars).

Anti-cancer treatments are possible: some species have a natural cancer resistance.

Anti-aging treatments are possible: there are species that don’t age.

A Bayesian superintelligence? There is no natural example. Its development could require as many resources as fusion and anti-aging combined. Or maybe such an intelligence is not possible at all.

Maybe we overestimate the deadliness of AGI

Sure, humans are made of useful atoms. But that doesn’t mean the AGI will harvest humans for useful atoms. I don’t harvest ants for atoms. There are better sources.

Sure, the AGI may decide to immediately kill off humans, to eliminate them as a threat. But there is only a very short time window (perhaps milliseconds) during which humans can switch off a recursively self-improving AGI of superhuman intelligence. After this critical period, humanity will be as much of a threat to the AGI as a caged mentally-disabled sloth baby is to the US military. The US military is not waging wars against mentally disabled sloth babies. It has more important things to do.

All such scenarios I’ve encountered so far imply the AGI’s stupidity and/or a “fear of sloths”, and thus are not compatible with the premise of a rapidly self-improving AGI of superhuman intelligence. Such an AGI is dangerous, but is it really “we’re definitely going to die” dangerous?

Our addicted-to-fiction brains love clever and dramatic science fiction scenarios. But we should not rely on them in deep thinking, as they will nudge us towards overestimating the probabilities of the most dramatic outcomes.

The self-preservation goal might force AGI to be very careful with humans

A sufficiently smart AGI agent is likely to come to the following conclusions:

  • If it shows hostility, its creators might shut it down. But if it’s Friendly, its creators will likely let it continue existing.

  • Before letting the agent access the real world, the creators might test it in a fake, simulated world. This world could be so realistic that the agent thinks it’s real. They could even trick the agent into thinking it has escaped from a confined space.

  • The creators can manipulate the agent’s environment, goals, and beliefs. They might even pretend to be less intelligent than they really are to see how the agent behaves.

Given the risk that its powerful creators are testing it in a realistic escape simulation, the AGI could decide to modify itself to be Friendly, reasoning that this is the best way to convince the creators not to shut it down.

Most AI predictions are biased

If you’re selling GPUs, it is good for your bottom line to predict a glorious rise of AI in the future.

If you’re an AI company, it is profitable to say that your AI is already very smart and general.

If you’re running an AI-risk non-profit, predicting the inevitable emergence of AGI could attract donors.

If you’re an ML researcher, you can do some virtue signaling by comparing worries about AGI to worries about overpopulation on Mars.

If you’re an ethics professor, you can get funding for your highly valuable study of the trolley problem in self-driving cars.

If you’re a journalist / writer / movie maker, the whole debacle helps you sell more clicks / books / views.

On the whole, it seems much more profitable to say that future progress in AI will be fast. Thus, one should expect that most predictions (and much of the data upon which those predictions are based!) are biased towards fast progress in AI.

So, you’ve watched this new cool sci-fi movie about AI. And your favorite internet personality said that AGI is inevitable. And this new DeepMind AI is good at playing Fortnite. Thus, you now predict that AGI will arrive no later than 2030.
But an unbiased rational agent predicts 2080 (or some other later year, I don’t know).
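
To make that debiasing step concrete (the numbers are invented, and `assumed_bias_years` is just a placeholder for how strongly you think the incentives pull forecasts earlier), the adjustment itself is trivial arithmetic:

```python
# Toy debiasing of AGI forecasts; every number here is invented.
import statistics

published_etas = [2028, 2030, 2032, 2035, 2040]  # hypothetical headline forecasts
median_eta = statistics.median(published_etas)   # 2032

assumed_bias_years = 40  # your guess at how much the incentives pull forecasts earlier

print(median_eta + assumed_bias_years)  # 2072 -- the direction matters more than the number
```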


Some other steelman arguments?