You raise some good points here, and I think they're worth integrating into a project I've been working on (https://dontloseyourjob.com/method/).
I forecast job displacement risk using a hazard framing, and the model separates technical feasibility (when AI can reliably complete enough of your task buckets) from actual job loss (an implementation delay plus a compression hazard). Your slope-vs-intercept idea suggests the step change isn’t “is AI smart enough,” but “can AI run the workflow end-to-end with minimal supervision.” The discontinuity I should be watching for is the point where long-horizon agency improves enough that the AI stops needing constant scaffolding and quality control, and displacement risk stops rising smoothly.
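To make that framing concrete, here is a minimal Monte Carlo sketch of the two-stage structure described above. It is not the actual method behind the site: the distributions, the horizon year, and every parameter value are illustrative assumptions, just to show feasibility, implementation delay, and a per-year compression hazard composing into one displacement probability.

```python
# Minimal sketch of a two-stage displacement model (illustrative assumptions only):
# stage 1 is an uncertain technical-feasibility year, stage 2 adds an
# implementation delay and then applies a constant per-year compression hazard.

import numpy as np

rng = np.random.default_rng(0)
n_draws = 100_000
horizon = 2040

# Stage 1: when AI can reliably cover enough of the job's task buckets (assumed distribution).
feasibility_year = rng.normal(loc=2029, scale=2.0, size=n_draws)

# Stage 2: organizational lag before deployment (assumed distribution, in years).
implementation_delay = rng.gamma(shape=2.0, scale=1.5, size=n_draws)
deployment_year = feasibility_year + implementation_delay

# Once deployed, workforce compression arrives with a constant yearly hazard (assumed).
compression_hazard = 0.25
years_exposed = np.clip(horizon - deployment_year, 0, None)
p_displaced_by_horizon = 1 - (1 - compression_hazard) ** years_exposed

print(f"P(displaced by {horizon}) ≈ {p_displaced_by_horizon.mean():.2f}")
```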
I think this means I should separate “baseline competence” from “time-scaling/agency” inside the model. Intercept gains make bounded tasks cheaper and easier to automate, but slope gains change the structure of the job by letting the system coordinate, recover from errors, and keep goals stable over long horizons. That second component is what could make the hazard curve steepen in the tail, even if early evidence looks well-fit by an exponential.
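Separating the two components could look something like the toy decomposition below: a base term stands in for intercept-style gains and grows smoothly, while an agency term switches on once an assumed reliable task horizon clears the job's required horizon. The doubling time, threshold, and boost size are all made-up numbers, purely to show how the tail can steepen even when the early points fit an exponential.

```python
# Toy decomposition of the hazard into "baseline competence" and "agency" pieces.
# All numbers are assumptions for illustration, not fitted values.

import numpy as np

years = np.arange(2025, 2041)
t = years - years[0]

# Intercept gains: smooth exponential growth in the per-year displacement hazard.
base_hazard = 0.02 * np.exp(0.15 * t)

# Slope gains: an assumed reliable task horizon (hours) with a fixed doubling time,
# against a job that needs roughly week-long autonomy. Once the horizon clears
# that bar, an extra hazard term switches on and the curve stops looking exponential.
task_horizon_hours = 1.0 * 2.0 ** (t / 0.6)   # doubling time ~0.6 years (assumed)
required_horizon_hours = 40.0
agency_boost = np.where(task_horizon_hours >= required_horizon_hours, 0.30, 0.0)

hazard = np.clip(base_hazard + agency_boost, 0.0, 1.0)
for year, h in zip(years, hazard):
    print(f"{year}: hazard ≈ {h:.2f}")
```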
It also makes me want to treat friction as partially endogenous rather than fixed. If slope is improving, the organization gets something that needs fewer humans in the loop. That would reduce effective adoption drag and accelerate workforce compression once agency crosses a threshold.
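One way to encode endogenous friction is to make adoption drag a function of how much of the workflow the system can run without a human in the loop, as in the sketch below. The logistic form and every parameter (max_drag, threshold, steepness) are assumptions for illustration.

```python
# Sketch of friction as endogenous: adoption drag (extra years before displacement)
# collapses once the workflow's autonomous share passes an agency threshold.
# Functional form and parameters are illustrative assumptions.

import numpy as np

def adoption_drag_years(autonomy_share: float,
                        max_drag: float = 6.0,
                        threshold: float = 0.8,
                        steepness: float = 15.0) -> float:
    """Years of organizational drag as a function of the workflow share run without a human in the loop."""
    # Logistic collapse of drag once autonomy_share passes the threshold.
    return max_drag / (1.0 + np.exp(steepness * (autonomy_share - threshold)))

for share in (0.5, 0.7, 0.8, 0.9, 0.95):
    print(f"autonomy {share:.0%}: drag ≈ {adoption_drag_years(share):.1f} years")
```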