Escape Velocity from Bullshit Jobs


Without speculating here on how likely this is to happen, suppose that GPT-4 (or some other LLM or AI) speeds up, streamlines or improves quite a lot of things. What then?

The Dilemma

Samo and Ben’s dilemma: To the extent that the economy is dominated by make-work, automating it away won’t work because more make-work will be created, and any automated real work gets replaced by new make-work.

Consider homework assignments. ChatGPT lets students skip the make-work. The system responds by modifying conditions to force students back to the make-work: NYC schools banned ChatGPT.

Consider a bullshit office job. You send emails and make calls and take meetings and network to support inter-managerial struggles and fulfill paperwork requirements and perform class signaling to make clients and partners feel appreciated. You were hired in part to fill out someone’s budget. ChatGPT lets you compose your emails faster. They (who are they?) assign you to more in-person meetings and have you make more phone calls and ramp up paperwork requirements.

The point of a bullshit job is to be a bullshit job.

There is a theory that states that if you automate away a bullshit job, it will be instantly replaced by something even more bizarre and inexplicable.

There is another theory that states this has already happened.

Automating a real job can even replace it with a bullshit job.

This argument applies beyond automation. It is a full Malthusian economic trap: Nothing can increase real productivity.

Bullshit eats all.

Eventually.

Two Models of the Growth of Bullshit

  1. Samo’s Law of Bullshit: Bullshit rapidly expands to fill the slack available.

  2. Law of Marginal Bullshit: There is consistent pressure in favor of marginally more bullshit. Resistance is inversely proportional to slack.

In both cases, the lack of slack eventually collapses the system.

In the second model, increased productivity buys time, and can do so indefinitely.
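As a toy illustration of the difference, here is a minimal simulation sketch. The growth rates and starting values are entirely made-up assumptions, and Model 2’s ‘resistance inversely proportional to slack’ is implemented as bullshit growth proportional to remaining slack:

```python
# Toy simulation of the two models of bullshit growth.
# All numbers are illustrative assumptions, not empirical claims.

PRODUCTIVITY_GROWTH = 0.03  # assumed real productivity gain per year
BULLSHIT_PRESSURE = 0.02    # assumed marginal pressure toward more bullshit

def remaining_slack(years: int, samos_law: bool) -> float:
    """Slack left after `years`, under Samo's Law (Model 1) or the marginal law (Model 2)."""
    capacity, bullshit = 1.0, 0.5
    for _ in range(years):
        capacity *= 1 + PRODUCTIVITY_GROWTH
        if samos_law:
            # Model 1: bullshit instantly expands to fill all available slack.
            bullshit = capacity
        else:
            # Model 2: resistance is inversely proportional to slack,
            # so growth in bullshit is proportional to slack.
            slack = max(capacity - bullshit, 0.0)
            bullshit += BULLSHIT_PRESSURE * slack
    return capacity - bullshit

print(remaining_slack(50, samos_law=True))   # 0.0: slack never survives
print(remaining_slack(50, samos_law=False))  # positive and growing: growth buys time
```

In Model 1 productivity never matters, since slack is eaten on arrival. In Model 2, as long as capacity compounds, slack compounds too.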

Notice how good economic growth feels to people. That is strong evidence for lags in the growth of bullshit, and for the ability of growth and good times to outpace the problems.

A Theory of Escape Velocity

We escaped the original Malthusian trap with the Industrial Revolution, expanding capacity faster than the population could grow. A sufficient lead altered underlying conditions to the point where we should worry more about declining population than rising population in most places.

Consider the same scenario for a potential AI Revolution via GPT-4.

Presume GPT-4 allows partial or complete automation of a large percentage of existing bullshit jobs. What happens?

My model says this depends on the speed of adaptation.

Shoveling Bullshit

Can improvements outpace the bullshit growth rate?

A gradual change over decades likely gets eaten up by gradual ramping up of requirements and regulations. A change that happens overnight is more interesting.

How fast can bullshit requirements adapt?

The nightmare is ‘instantaneously.’ A famous disputed claim is that the NRC defined a ‘safe’ nuclear power plant as one no cheaper than alternative plants: if your plant was cheaper, that meant you could afford to Do More Safety. Under such a rule, cost-reducing advances are useless.

Most regulatory rules are not like that. Suppose the IRS requires 100 pages of paperwork per employee. This used to take 10 hours. Now with GPT-4, as a thought experiment, let’s say it takes 1 hour.

The long run result might be 500 pages of more complicated paperwork that takes 10 hours even with GPT-4, while accomplishing nothing. That still will take time. It is not so easy or fast to come up with 400 more pages; I’d assume that would take at least a decade. It likely would need to wait until widespread adoption of AI-powered tools, or it would bury those without them.

Meanwhile, GPT-5 comes out. Gains compound. It seems highly plausible this can outpace the paperwork monster.
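To make that race concrete, here is a back-of-the-envelope sketch with made-up numbers, ignoring the ‘more complicated pages’ twist above (which works in the monster’s favor):

```python
# Back-of-the-envelope race: paperwork requirements vs. compounding AI speedups.
# All numbers are illustrative assumptions.

HOURS_PER_PAGE_BY_HAND = 0.1            # 100 pages used to take 10 hours

speedup_now = 10.0                      # assumed: GPT-4 turns 10 hours into 1
print(100 * HOURS_PER_PAGE_BY_HAND / speedup_now)     # 1.0 hour today

# A decade later: 500 pages, but AI speedups compounding at an assumed 30%/year.
speedup_later = speedup_now * 1.3 ** 10               # ~137x
print(500 * HOURS_PER_PAGE_BY_HAND / speedup_later)   # ~0.36 hours
# The requirements quintupled and still lost ground.
```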

This applies generally wherever a specified technical requirement, or paperwork, is needed, or where the task is otherwise well-specified and graded on a pass/fail basis. Yes, the bar can and will be raised. No, if AI delivers the goods in full, the powers that be can’t and won’t raise requirements high enough or fast enough to keep pace.

Replacing Bullshit

If the bullshit and make-work are to keep pace, there are options:

  1. Ban or regulate the AI, or use of the AI.

  2. Find different bullshit requirements that the AI can’t automate.

  3. Impose relative bullshit requirements, as in the nuclear power case.

Option 1 does not seem promising. Overall AI access likely can’t be policed.

Option 3 works in some situations and not others, as considered below.

Option 2 seems promising. The new requirements would likely be in person; phones won’t work.

New in-person face-time bullshit tasks could replace old bullshit tasks. This ensures bullshit is performed, bullshit jobs are maintained, costly signals are measured, and intentionally imposed frictions are preserved.

I expect this would increasingly become the primary way we impose relative bullshit requirements. When there is a relative requirement, things can’t improve in aggregate: if everyone gets more efficient at a positional good, relative standing is unchanged. Making positional goods generally more efficient does not work.

Same goes for intentional cost impositions. Costs imposed in person are much harder to pay via AI.

Thus, such costs move more directly towards pure deadweight losses.

When things are not competitive, intentional or positional, I would not expect requirements to ramp up quickly enough to keep pace. Where this is attempted, the gap between the bullshit-crippled versions and the freed versions will be very large. Legal coercion would be required, and might not work. If escape is achieved even briefly, it will be hard to put that genie back in the bottle.

One tactic will be to restrict AI that duplicates professional work to those licensed to do that work. This will be partly effective at slowing such work down, but the work of professionals will still accelerate, shrinking the pool of such professionals will be a slow process at best, and it is hard to stop people from doing the job for themselves where AI enables that.

Practicalities, plausibility and the story behind requirements all matter. Saying ‘humans prefer interacting with humans’ is not good enough, as callers to tech support know well. Only elite service and competition can pull off these levels of inefficiency.

Passing Versus Winning

It will get easier to pass a class or task the AI can help automate, unless the bar for passing can be raised by introducing newly required bullshit in ways that stick.

Notice that the main thing you do to pass in school is to show up and watch your life end one minute at a time. Expect more of that, in more places.

It won’t get easier to be head of the class. To be the best.

Harvard is going to take the same number of students. If the ability of applicants to look good is supercharged for everyone, what happens? Some aspects get screened off, making others more important and fueling more Red Queen’s races. Other aspects see standards go way up to compensate for the new technology.

Does this make students invest more or less time in the whole process? If returns to time decline in some places, less time gets invested in those places. Then there are clear time sinks that would remain, like putting in more volunteer hours, to eat up any slack. My guess is no large net change.

What about a local university? What if the concern is ‘are you good enough?’ rather than ‘are you the best?’ If it now takes less human time to get close to the best application one can offer, this could indeed be highly welfare-improving. The expectations and requirements for students will rise, but not enough to keep pace.

Attending the local university could get worse. If what cannot be faked is physical time in the classroom, such requirements will become increasingly obnoxious, and increasingly verified.

The same applies to bullshit jobs. For those stuck with such jobs, ninety percent of life might once again be showing up. By making much remote work too easy, AI risks stripping that work of its real function.

Speed Premium

It comes down to this: If the shifts described above do happen, can they happen fast enough that, before they are seen as absurd and shut down, the alternative models become too fully developed and acclimated to be stopped, and growth becomes self-sustaining?

If this all happens at the speed its advocates claim, then the answer is clearly yes.

Do I believe it? I mostly want to keep that question out of scope, but my core answer so far, based on my own experiences and models, is no. I am deeply skeptical of those claims, especially about the speed involved. Nostalgebraist’s post here illustrates a lot of the problems. Also see this thread.

Still, I can’t rule out things developing fast enough. We shall see.