I agree that building an interactive model of the supply chain’s labor intensity, and how it has evolved over time, would be a really impactful piece of work. A few resources I would take a look at:
This 2023 report from Goldman Sachs was a good early first pass at estimating the share of tasks within occupations potentially impacted by AI. The actual data tables they referenced aren’t (to my knowledge) publicly available, but their methodology should be replicable with some effort. This McKinsey report published later that year uses a similar methodology and considers the historical trajectory back to 2016.
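For what it’s worth, the aggregation step in that kind of task-share methodology is straightforward once you have task-level exposure judgements; the hard part is assigning those judgements. A minimal sketch in Python/pandas, where the file names, columns, and the 0/1 `exposed` flag are hypothetical placeholders rather than anything from the reports:

```python
import pandas as pd

# Hypothetical inputs: one row per O*NET task statement with the occupation's
# SOC code, the task's importance rating, and a 0/1 judgement of whether the
# task is exposed to automation by AI (assigning that flag is the hard part).
tasks = pd.read_csv("tasks.csv")            # columns: soc_code, importance, exposed
oes = pd.read_csv("oes_employment.csv")     # columns: soc_code, employment

# Importance-weighted share of each occupation's tasks that are exposed,
# roughly in the spirit of the task-share approach in those reports.
tasks["weighted"] = tasks["importance"] * tasks["exposed"]
occ = tasks.groupby("soc_code")[["weighted", "importance"]].sum()
occ["exposure_share"] = occ["weighted"] / occ["importance"]

# Employment-weighted aggregate across occupations.
merged = occ.reset_index().merge(oes, on="soc_code")
overall = (merged["exposure_share"] * merged["employment"]).sum() / merged["employment"].sum()
print(f"Employment-weighted exposure share: {overall:.1%}")
```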
If you want to go back further in time, the BLS published a helpful guide for mapping O*NET data back to 1998, which tracks the tasks and required skills associated with each occupation, though there are some limitations here. The BLS also released this new data product about skills last year that I haven’t had a chance to explore thoroughly yet.
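The mechanical part of that mapping is just applying a crosswalk between occupation-code vintages; the judgement calls are in handling splits and merges. A rough sketch, with hypothetical file and column names:

```python
import pandas as pd

# Hypothetical inputs: an occupation-level metric computed on an older O*NET/SOC
# vintage, plus a crosswalk of the kind the BLS guide describes, mapping old
# codes to current SOC codes.
old = pd.read_csv("exposure_by_old_code.csv")   # columns: old_code, exposure_share
xwalk = pd.read_csv("soc_crosswalk.csv")        # columns: old_code, new_code

# Occupations split and merge over time, so flag one-to-many and unmatched codes
# explicitly instead of silently averaging them away.
mapped = old.merge(xwalk, on="old_code", how="left")
imperfect = mapped["new_code"].isna() | mapped.duplicated("old_code", keep=False)
print(f"{imperfect.mean():.1%} of mapped rows are unmatched or one-to-many")

# One simple (lossy) choice: average the metric across every new code an old code
# maps into; a better version would weight splits by employment.
harmonized = mapped.groupby("new_code", as_index=False)["exposure_share"].mean()
```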
I’m currently working on making the historical occupational data I used for this analysis of occupational churn going back to 1870 publicly available, hopefully by the end of this month.
Some limitations to be aware of:
In addition to uncertainty about future AI capabilities, there can be considerable variation in how important individual skills are within each occupation. If AI only partially automates or de-skills an occupation, the extent to which the remaining skills become a bottleneck is an important question, and estimates here may be imprecise.
Forward projections may be better done by industry. The BLS helpfully maintains industry-occupation matrices, but this adds another layer of complexity to the analysis (a rough sketch of the aggregation is below).
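The industry view is essentially an employment-weighted average of occupation-level exposure using that matrix. A sketch, again with hypothetical file and column names:

```python
import pandas as pd

# Hypothetical inputs: occupation-level exposure shares (as above) and a long-format
# extract of the BLS industry-occupation matrix giving each occupation's employment
# within each industry.
exposure = pd.read_csv("occupation_exposure.csv")   # columns: soc_code, exposure_share
matrix = pd.read_csv("industry_occupation.csv")     # columns: naics, soc_code, employment

# Industry exposure = employment-weighted average of its occupations' exposure.
merged = matrix.merge(exposure, on="soc_code", how="inner")
merged["weighted"] = merged["employment"] * merged["exposure_share"]
by_industry = merged.groupby("naics")[["weighted", "employment"]].sum()
by_industry["exposure_share"] = by_industry["weighted"] / by_industry["employment"]
print(by_industry["exposure_share"].sort_values(ascending=False).head())
```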
The idea of human-level skilled labor becoming completely free is an alluring one, but it may never be entirely true so long as humans maintain agency (which I consider likely, in my optimistic worldview).
Even if production processes were completely automated, making value judgements about the usefulness of the outputs and deciding where to go next are things humans will probably want to keep some level of control over, at least wherever material resource costs and timeliness still matter (i.e., the feedback loops you mention). The work involved in this decision-making process could be more than many people assume.
A good example of this is Google’s claim that 30% of their code is AI-generated while coding velocity has increased by only 10%. Deciding what work to pursue and evaluating output, particularly in an industry where the outputs are so specialized, already accounts for a substantial share of labor and hasn’t been automated to the same degree as coding itself.
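As a rough consistency check on those two figures (treating them as illustrative rather than precise, and assuming AI-generated code is effectively free to produce), an Amdahl’s-law-style calculation implies that writing code is only around a third of total engineering time:

```python
# Back-of-envelope only; the 30% and 10% figures are treated as illustrative.
# Assumption: if 30% of code is machine-generated at ~zero marginal effort, the
# coding portion of the job speeds up by roughly 1 / (1 - 0.30) ~= 1.43x.
coding_speedup = 1 / (1 - 0.30)
observed_velocity_gain = 1.10

# Amdahl's law: overall speedup = 1 / ((1 - f) + f / s), where f is the fraction
# of total engineering time spent writing code. Solve for f given the observed gain.
f = (1 - 1 / observed_velocity_gain) / (1 - 1 / coding_speedup)
print(f"Implied share of engineering time spent writing code: {f:.0%}")  # ~30%
```

If anything this understates the coding share, since it ignores time spent reviewing and reworking the generated code; either way, the remaining ~70% is exactly the deciding-and-evaluating work described above.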
As discussed in our other thread, modeling how much time is required to define requirements/prompt and evaluate output will be an important component of forecasting how far and fast AI advancements might take us. Realistic estimates here will likely support your hypothesis that the bottlenecks lie in R&D, design, and coordination rather than in physical throughput limits.