LLMs are even more bottlenecked on management than human organizations are, and therefore LLMs will be less useful than human organizations in practice for most use cases.
People will instead mostly continue to rely on human employees, because human employees need less management.
These seem like great predictions worth checking. Can you make them more specific (time, likelihood)?
Likelihood: maybe 5-30% off the top of my head, obviously depends a lot on operationalization.
Time: however long transformer-based LLMs (trained on prediction + a little RLHF, and minor variations thereon) remain the primary paradigm.
After some more thought, I agree even more. A large part of management is an ad-hoc solution to human alignment. And since I expect agents to remain unreliable as long as technical alignment is unsolved, more management by humans will be needed. Still, productivity may increase a lot.