Trying to think of reasons this post might end up being quite wrong, the one that feels most likely to me is that these management and agency skills end up being yet another thing that LLMs can do very well, very soon. [...]
I’ll take the opposite failure mode: in an absolute sense (as opposed to relative-to-other-humans), all humans have always been thoroughly incompetent at management; it’s impressive that any organization with dedicated managers manages to remain functional at all given how bad they are (again, in an absolute sense). LLMs are even more bottlenecked on management than human organizations are, and therefore LLMs will be less useful than human organizations in practice for most use cases. People will instead mostly continue to rely on human employees, because human employees need less management.
LLMs are even more bottlenecked on management than human organizations are, and therefore LLMs will be less useful than human organizations in practice for most use cases.
People will instead mostly continue to rely on human employees, because human employees need less management.
These seem like great predictions worth checking. Can you make them more specific (time, likelihood)?
Likelihood: maybe 5–30% off the top of my head; it obviously depends a lot on operationalization.
Time: however long transformer-based LLMs (trained on prediction + a little RLHF, and minor variations thereon) remain the primary paradigm.
After some more thought, I agree even more. A large part of management is an ad-hoc solution to human alignment. And since I expect agents to remain unreliable as long as technical alignment is unsolved, more management by humans will be needed. Still, productivity may increase a lot.