But the latter doesn’t really require LLMs to be capable of end-to-end autonomous task execution, which is the property actually required for transformative consequences.
I’m glad we agree on which property is required (and I’d say basically sufficient at this point) for actual transformative consequences.
How do you know it’s sufficient? Is it not salient to you primarily because it is the current bottleneck?
If “task execution” includes execution of a wide enough class of tasks, obviously the claim becomes trivially true. If it is interpreted more reasonably, I think it is probably false.