It’s true there is a bit of a tension. Some thoughts:
Specialization is IMO really not the key component of most assembly lines. I am mostly familiar with automated assembly lines, where maintenance workers tend to be generalists and work on a wide variety of machines, but I expect even in largely human-driven assembly lines, the time it takes to learn a new station tends to be on the order of days (given basic proficiency in the kind of thing that the factory is producing), and you continuously try to make the task be doable by more people.
A thing that is a much bigger aspect of assembly lines is “focus”, which plays out on the scale of hours, not at the scale of weeks. Switching tasks requires switching cognitive contexts. Being given a station for a day means you can focus on a single task. But in order to optimize how you perform that task at a high level you probably want to work many stations on different days so you can see how work at your station affects the work at other stations!
When I am imagining the workshop snake, it’s not super clear how much benefit you get from specialization, though maybe! I am imagining most stations do not involve humans at all, but instead involve doing exercises or reading something. Humans are not great implements in a teaching pipeline. You will definitely need them sometimes, but you were probably imagining a setup with many more humans than I was.
Beyond that, it’s great if you do end up with a process that produces value by creating copies of the exact same widget that take the exact same time, and if you do, you can tolerate a lot of specialization, but of course things rarely work out that way. Especially in anything software-adjacent, where software handles the uniform parts and so the labor you are trying to optimize is left with the heterogeneous parts.
So what do you do if different stations randomly take longer or shorter, and your work product is so non-uniform that it requires different amounts of input from different stations each time? You need ways to transfer capacity from one station on the assembly line to another. This is what generalist labor is about: it provides the slack in the system that allows it to maintain high throughput and efficiency.
At Lightcone, the way this plays out is that our core team consists mostly of generalists, and then when we really have locked down what a task is and where it fits, we hire people who don’t need to be generalists to perform it day to day. Maintenance, restocking, and cleaning at Lighthaven are not done by generalists; we hire contractors and specialized staff for that.
Another thing I maybe under-emphasized in the post is that the key criterion you want to optimize for is total production time. The whole point of single piece flow and small batches is to maximize the speed at which you get feedback about the consequences of your actions. The depth of your assembly line is a cost to that! Every time your assembly line gets longer, you have to wait longer to see the consequences on the final product.
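As a toy illustration of that cost (the numbers here are made up, not from the post): your feedback latency is the time until the first finished unit comes out the end of the line, and it grows with both line depth and batch size.

```python
# Feedback latency on a toy line: how long until the FIRST finished unit
# comes out the end, for batch-and-queue vs single-piece flow.
# (Illustrative numbers only; STATIONS and CYCLE are made up.)
STATIONS = 4   # depth of the assembly line
CYCLE = 1.0    # hours of work per unit at each station

def first_feedback(batch_size):
    # In batch-and-queue, a batch moves on only once the whole batch is
    # done, so the first unit clears the first STATIONS - 1 stations at
    # batch pace, then finishes the last station on its own.
    return (STATIONS - 1) * batch_size * CYCLE + CYCLE

for batch in (1, 10, 50):
    print(f"batch={batch:>2}: first feedback after {first_feedback(batch):.0f}h")
```

With these numbers, single-piece flow gives you feedback after 4 hours, while batches of 50 make you wait 151 hours, even though the per-unit work is identical.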
Now, that’s a bit of high-level commentary on the tension here, but I realized I didn’t really explain a good model of why narrow specialization in a non-uniform work environment pushes heavily towards big batches and waterfall planning.
Let’s say you have an assembly line with 4 stations (A, B, C, and D), with a lot of variance at each station about how long work at that station takes. Let’s talk about a few different scenarios:
1. You have 4 employees each capable of only performing the job of one station.
Now, let’s say station A takes twice as long this time. This means stations B, C, and D will now be idle 50% of the time. The total efficiency of your process drops to 5/8: station A is busy the whole time, while B, C, and D are each busy only half of it.
[Ok, I have to run, but I’ll edit this comment with the full explanation later, though maybe it’s already clear]
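The scenario above can be sketched numerically with a toy simulation (all parameters here are made up for illustration): four stations whose work times vary randomly, comparing workers locked to one station each (single-piece flow, no buffers) against idealized generalists who can shift to wherever work piles up.

```python
import random

random.seed(42)

N_ITEMS = 200
N_STATIONS = 4

# Each station takes 1 time unit, or 2 with 50% probability (high
# variance, echoing the "station A takes twice as long this time" case).
times = [[random.choice([1.0, 2.0]) for _ in range(N_STATIONS)]
         for _ in range(N_ITEMS)]

def specialist_makespan(times):
    """Workers locked to one station each; single-piece flow, no buffers.
    A finished item blocks its station until the next station is vacant
    (the standard blocking flow-shop recurrence)."""
    m = len(times[0])
    dep_prev = [0.0] * m              # previous item's departure times
    for item in times:
        dep = [0.0] * m
        enter = dep_prev[0]           # enter station 0 once it is vacated
        for i, t in enumerate(item):
            done = enter + t
            # Blocked until the previous item clears the next station.
            dep[i] = done if i == m - 1 else max(done, dep_prev[i + 1])
            enter = dep[i]
        dep_prev = dep
    return dep_prev[-1]

def generalist_makespan(times):
    """Idealized generalists: workers flow to wherever work piles up, so
    the only limits are total work divided by the number of workers and
    the longest single item's chain of sequential stations."""
    total = sum(map(sum, times))
    longest_chain = max(map(sum, times))
    return max(total / N_STATIONS, longest_chain)

total_work = sum(map(sum, times))
for name, makespan in [("specialists", specialist_makespan(times)),
                       ("generalists", generalist_makespan(times))]:
    utilization = total_work / (N_STATIONS * makespan)
    print(f"{name}: makespan {makespan:.0f}, utilization {utilization:.0%}")
```

The generalist model is deliberately optimistic (it assumes perfect load balancing), but the direction of the gap is the point: the specialist line's makespan is always at least as long, because every random slowdown at one station turns into idle time at the others rather than capacity flowing to the bottleneck.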
Even in real assembly lines at physical factories, I get the impression that generalization has often worked well because it gives you the ability to change your process. Toyota is/was considered best-in-class, and a major innovation of theirs was having workers rotate across different areas, becoming more generalized and more able to suggest improvements to the overall system, with some groups rotating every 2 hours[1].
Tesla famously reduced automation around 2018 even when the marginal costs were lower than human operators, again because the lost flexibility wasn’t worth it.[2] Though it’s worth noting they started investing more in robots again in recent years, presumably when their process was more solidified[3].
[1]: https://michelbaudin.com/2024/02/07/toyotas-job-rotation-policy/
[2]: https://theconversation.com/teslas-problem-overestimating-automation-underestimating-humans-95388
[3]: https://newo.ai/tesla-optimus-robots-revolutionize-manufacturing/
Appreciate the empirical data! This aligns with my models in the space.