Large Language Models, Small Labor Market Effects?
We examine the labor market effects of AI chatbots using two large-scale adoption surveys (late 2023 and 2024) covering 11 exposed occupations (25,000 workers, 7,000 workplaces), linked to matched employer-employee data in Denmark. AI chatbots are now widespread—most employers encourage their use, many deploy in-house models, and training initiatives are common. These firm-led investments boost adoption, narrow demographic gaps in take-up, enhance workplace utility, and create new job tasks. Yet, despite substantial investments, economic impacts remain minimal. Using difference-in-differences and employer policies as quasi-experimental variation, we estimate precise zeros: AI chatbots have had no significant impact on earnings or recorded hours in any occupation, with confidence intervals ruling out effects larger than 1%. Modest productivity gains (average time savings of 2.8%), combined with weak wage pass-through, help explain these limited labor market effects. Our findings challenge narratives of imminent labor market transformation due to Generative AI.
From Marginal Revolution.
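For context on the estimation approach the abstract names, here is a minimal, hypothetical sketch of a difference-in-differences regression of log earnings on chatbot adoption with worker and time fixed effects. The file and column names (worker_panel.csv, log_earnings, adopted_chatbot, worker_id, quarter, workplace_id) are invented for illustration; this is the general shape of such a design, not the paper's actual specification.

# Minimal two-way fixed-effects difference-in-differences sketch (hypothetical data).
# Regress log earnings on a chatbot-adoption indicator, with worker and quarter
# fixed effects and standard errors clustered by workplace.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("worker_panel.csv")  # invented worker-by-period panel

model = smf.ols(
    "log_earnings ~ adopted_chatbot + C(worker_id) + C(quarter)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["workplace_id"]})

# A "precise zero" means the point estimate is near 0 and the confidence interval
# is tight (the abstract says the intervals rule out effects larger than 1%).
print(model.params["adopted_chatbot"])
print(model.conf_int().loc["adopted_chatbot"])

In practice one would absorb the fixed effects rather than build thousands of dummy columns, but the sketch shows the comparison being made: adopters vs. non-adopters, before vs. after.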
What does this crowd think? These effects are surprisingly small. Do we believe these effects? Anecdotally, the effect of LLMs has been enormous for my own workflow and colleagues. How can this be squared with the supposedly tiny labor market effect?
Are we that selected of a demographic?
Anecdotally, the effect of LLMs on my workflow hasn’t been very large.
Just to add another data point: as a software engineer, I also find it hard to extract utility from LLMs. (And this has not been for lack of trying; e.g., at work we are being pushed to use LLM-enabled IDEs.) I am constantly surprised to hear people on the internet say that LLMs are a significant productivity boost for them.
My current model is that LLMs are better if you are working on some mainstream problem domain using a mainstream tech stack (language, library, etc.). This is approximately JavaScript React frontend development in my mind, and the further you move away from that, the less useful LLMs get. (The things I usually work on use a non-mainstream tech stack and/or have a non-mainstream problem domain (though in my mind all interesting problems are non-mainstream in that sense), so this would explain my lack of success.)
Yes, I have the same impression. Generating Java or Python code using popular libraries: mostly okay. Generating Lean code: it does not compile, even after several attempts to fix it by feeding the compiler errors back to the LLM.
I use LLMs daily, yet I am still not sure they really help all that much with the core productivity bottlenecks. I worry they lower the barrier to excessive perfectionism and "vibe coding" or "vibe learning." They seem to short-circuit the theory-practice gap by giving users instant but unreliable and often inextensible results.
My fear is that they'll raise expectations about productivity gains (because AI-assisted workers can bring results in more quickly and to a higher apparent standard of polish) while drastically reducing how much the workers learn about the problem domain. For example, workers may be able to whip up a codebase more quickly but have less familiarity with it at the end of the process, making it much more difficult to make modifications efficiently. Essentially, I suspect AI will generate massive technical debt in exchange for short-term wins, and that bad incentives will tend to perpetuate this in organizations. People will quickly set up new systems using AI, take credit, and exit those projects before serious problems become apparent.
Judging merely from the abstract, the study seems a little bit of a red herring to me:
1. Barely anyone talks about "imminent labor market transformation"; instead we say it may soon turn things upside down. And the study can only show past changes.
2. That "imminent" vs. "soon" may feel like nitpicking, but it's crucial: current tools, as they are currently used, do not yet replace many workers 1:1, but if you look at the innovative developments overall, the immense human-labor-replacing capacity seems rather obvious.
Consider as an example a hypothetical 'usual' programmer at a 'usual' company. Would you strongly expect her salary to have changed much just because, over the past 1-2 years, we have been able to make her faster at coding? Not necessarily. In fact, since we cannot yet do the coding fully without her, the value of her marginal product of labor may for now be a bit greater, or maybe a bit lower; and the AI boom means an IT demand explosion in the near term anyway, so seeing little net effect is no particular surprise, for now.
Or the study writer. The language improves, maybe some of the reasoning in the studies improves slightly, but the habits of how we commission, organize, and conduct studies haven't changed at all yet; she has also kept her job so far.
Or teaching. I'm still teaching just as much as I did two years ago, of course. The students are still in the same program they started two years ago. 80% of incoming students are somewhat ignorant of, and 20% somewhat concerned about, what AI will mean for their studies, but there is no known alternative for them yet other than to follow the usual path. We're now starting to reduce contact time at my uni, not least due to digital tech, so this may change soon. But until yesterday: more or less the same old, seemingly; no major changes so far on that front either, at least when one just looks at aggregate macroeconomic data. Not least, this reflects how little time has passed: it has been only two or so years since large LLMs broke through, and only a year or so since people widely started really using them, so we see nothing much in most domains quite yet.
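To make the marginal-product point above concrete, a rough back-of-envelope using the numbers from the abstract; the pass-through rate \(\lambda\) is a placeholder symbol of mine, not something from the paper:

\[ \frac{\Delta w}{w} \;\approx\; \lambda \cdot \frac{\Delta \mathrm{VMPL}}{\mathrm{VMPL}} \;\lesssim\; \lambda \times 2.8\% \]

Treating the reported 2.8% average time savings as an upper bound on the productivity gain, any wage pass-through well below one (say, a third or less, purely for illustration) implies an earnings effect under 1%, which sits inside the confidence intervals the abstract says rule out effects larger than 1%.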
Look at micro-level details, and I'm sure you'll already find quite a few hints of what might be coming, though really I'd expect to see much more of it 'soon'-ish than 'right now already'.
Silicon Valley is full of hype about imminent labor market transformation right now. For example, the Shopify CEO sent out a memo which included statements like "Before asking for more Headcount and resources, teams must demonstrate why they cannot get what they want done using AI." And now boards are pushing for that sort of policy at lots of other companies as well.
Disclaimer: As always, views expressed are my own and do not necessarily reflect those of my employer.
What were the biggest boosts that you and your colleagues got from LLMs?
Speaking as someone still developing basic mathematical maturity and often lacking prerequisites, I find LLMs very useful as a learning aid. For example, they have significantly expanded the range of papers and technical results accessible to me. If I'm reading a paper containing unfamiliar math, I no longer have to go down the rabbit hole of tracing prerequisite dependencies, which often expand exponentially (partly because I don't know which results or sections in the prerequisite texts are essential, making it difficult to scope my focus). Now I can simply ask the LLM for a self-contained exposition. Traditional means of self-study like search engines, Wikipedia, or StackExchange are very often no match for this task, mostly in terms of time spent or wasted effort; simply having someone I can directly ask my highly specific (and often dumb) questions or confusions, and receive equally specific responses from, is just really useful.