This is a bit of a separate question, but it’s one I’m very interested in. I think the advantages of general problem-solving abilities will be so large that progress in that direction is inevitable. It would be great if we had “agents” only in the limited sense that they could use tools, but without the ability to work autonomously on long time-horizon tasks and solve novel problems—like, just to pick a random example, the novel problem of “how do I make sure humans don’t interfere with my new grand plan?”