I wasn’t able to put up a post last Wednesday because I was at the Engineering Leadership Conference here in San Francisco. The big theme was, of course, AI. Easily 90% of the presentations and 100% of the conversations touched on AI in some way. Here are some impressions I came away with:
Everyone is scrambling. No one is confident. The people who act confident are bullshitting.
10-20% of teams are heavily using AI and living in the present. 80-90% are falling behind.
The big struggle is to even start using AI coding assistant tools. Lots of teams just don’t use them at all, or use them in very limited ways. People leading these teams know they are going to lose if they don’t change but are struggling to get their orgs to let them.
Even fewer teams have figured out how to use LLMs to build compelling product. Everyone is starting to figure out that LLMs are not a magic bullet, and real product and engineering work is needed to create something that customers will love.
The teams that seem furthest ahead in terms of tooling are using Claude Code, Cursor, and building internal chatbots for surfacing hidden team knowledge.
The teams that seem furthest ahead in terms of product are building targeted LLM features that automate labor-intensive services work. Think features like automating research, data entry, data normalization, report generation, and other work that is hard to deterministically automate but requires limited human judgement to get 80-90% of the way there.
The best teams also keep humans in the loop because LLMs aren’t deterministic and aren’t perfect. They know there’s value in that last 10-20% from a human, and they find ways to leverage AI and people together to make services cheap, fast, and good.
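The pattern described above (let the model handle the easy 80-90%, route the rest to a person) can be sketched in a few lines. This is a hypothetical illustration, not any team's actual implementation: the `model_normalize` stub stands in for a real LLM call, and the confidence heuristic is invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Result:
    record: str
    normalized: str
    confidence: float

@dataclass
class Pipeline:
    threshold: float = 0.9                       # below this, a human reviews
    review_queue: list = field(default_factory=list)

    def model_normalize(self, record: str) -> Result:
        # Stand-in for a real LLM call. Here we just uppercase the record
        # and pretend short records are "easy" (high confidence).
        normalized = record.strip().upper()
        confidence = 0.95 if len(record) < 20 else 0.6
        return Result(record, normalized, confidence)

    def process(self, record: str) -> Result:
        result = self.model_normalize(record)
        if result.confidence < self.threshold:
            # Low-confidence output goes to a person instead of shipping as-is.
            self.review_queue.append(result)
        return result

pipeline = Pipeline()
pipeline.process("acme corp")                       # auto-accepted
pipeline.process("Acme Corporation, Delaware LLC")  # routed to a human
print(len(pipeline.review_queue))  # → 1
```

The design choice doing the work here is the confidence threshold: it is the knob that trades human labor (the last 10-20%) against error rate, which is exactly the lever the text says the best teams are tuning.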
No one is sure what the future holds. Almost everyone I talked to acts vaguely like we’re either near the top of an S-curve or are going to see linear improvements in AI capabilities, even if they say they expect exponential growth.
Final assessment: engineering leadership across the industry is struggling to respond to AI quickly. If you use AI tools at all you’re ahead of the game. If you’re building usable AI features you’re further ahead. And if we see continued exponential growth, it’s going to hit teams like a ton of bricks.
80-90% are falling behind what, exactly? Not wanting to decrease your productivity by 20% and leak customer data sounds like surprisingly rational collective behaviour to me. It's probably best to pay for chatbot/coding-assistant subscriptions for any employee who wants one, since they'll use the tools anyway and "free" tiers are paid for with data. And integrating "AI" has attracted investors for the last few years, but do you have statistics showing that paying customers actually want those AI-powered products at non-dumping prices? Did anyone show a non-self-reported, measured increase in productivity (in terms of what the company produces and customers pay for, not lines of code)? Did any early AI-first company other than Nvidia report profit numbers instead of just revenue? Do early adopters from 5 years ago do better than late adopters from 5 months ago?
tbh, "wait until it starts working" might be a good strategy if there is very little first-mover advantage. AGI is not here yet, and I'm not sure any company can prepare for it by adopting current LLMs "more".
Behind in terms of productivity per engineer and building the next wave of high leverage features.
LLMs haven’t been good enough for long enough to obviously impact bottom lines yet, but the change in the derivative is pretty clear. Even if companies end up producing the same amount with AI tools due to other bottlenecks, AI tools cost less than human labor to perform the same tasks, and that will result in companies that turn higher profits or can offer goods and services that would be unprofitable to provide without LLMs.
Having been at the same conference: The gap was staggering. On one side, people and teams who by now have deeply agentic workflows (not just code; individualized workflows across the board were a thing). In the middle, serious discussions of rather useless things like "LOC isn't a good productivity metric, so how do we measure productivity now?" And a large chunk just very disconnected from what is and isn't possible, in both directions.
Even if we’re at the top of the S-curve (personal take: probably, at least without fundamental breakthroughs beyond “scale”), there are massive changes already deeply baked in, and for the unaware teams it will feel like continued exponential growth simply because understanding is still slowly trickling down.
The most realistic take, I think, was Bill Coughran comparing it to the dot-com boom: there were a lot of dead bodies left on the ground after that, and a few well-positioned companies experienced tremendous growth afterward.
Also a clear theme: Frontline/middle managers see what’s coming much better. Many are struggling hard to do the right thing. “Leadership” is often blissfully unaware and an active blocker/distraction.
Yes, and lots of middle managers can feel they’re being squeezed out (this is always happening to middle management, though, so take with a grain of salt). But some folks really believe that we’ll soon be in a world where line managers can manage many more people with the help of AI and there’ll be less need for layers to coordinate between line managers and executives. I’m somewhat skeptical about overcoming the information bottlenecks, but we’ll see.
Seems mostly true. There’s also a group of people flailing around trying to fit it into their workflows because all the top tech companies are saying it’s the next big thing.
I notice lots of LARPing too with adding the word “AI” to everything hoping that will unlock some new avenues.
> The big struggle is to even start using AI coding assistant tools. Lots of teams just don’t use them at all, or use them in very limited ways. People leading these teams know they are going to lose if they don’t change but are struggling to get their orgs to let them.
It seems to me that 25-50% of developers are using some form of AI-assisted coding. Did you notice that company bureaucracy was preventing developers from using coding assistants?
Absolutely. There are a lot of attempts to build “AI” features, but many of them are garbage because no one bothered to think about how they would be useful to customers. I'm seeing a lot of things like custom chatbots that answer questions nobody had, automation of tasks nobody cares about getting done, etc.
Best I can tell, the number of engineers using LLM tools like Claude Code, Codex, and Cursor is pretty small, maybe 20%, although this is anecdata and I don’t know what better research says. Lots are using Copilot because it had already been approved and paid for, but if you’ve tried to use Copilot’s LLM features you know they lag behind.
I’m sure plenty of people are finding sneaky ways to use LLMs, though. Easy to have a personal account and ask it your questions, even if you aren’t allowed to use agentic tools.