There are a couple of pieces of this that I disagree with:
I think claim 1 is wrong because even if the memory is unhelpful, the agent which uses it might be simpler, and so you might still end up with an agent. My intuition is that just specifying a utility function and an optimization process is often much easier than specifying the complete details of the actual solution, and thus any sort of program-search-based optimization process (e.g. gradient descent in a nn) has a good chance of finding an agent.
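The intuition that "utility function + optimizer" is often a shorter specification than the explicit solution can be illustrated with a toy analogy (my own sketch, not part of the original discussion; `sort_explicit` and `sort_by_search` are hypothetical names). Both functions sort a list, but one spells out the full procedure while the other only states an objective and hands it to a generic search:

```python
from itertools import permutations

def sort_explicit(xs):
    # Explicit solution: the complete procedural details (insertion sort).
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] < x:
            i += 1
        out.insert(i, x)
    return out

def sort_by_search(xs):
    # "Objective + optimizer": score each candidate, let a generic
    # search (here, brute-force max over permutations) find the best one.
    def utility(p):
        # Count of adjacent pairs already in order; maximized by sortedness.
        return sum(p[i] <= p[i + 1] for i in range(len(p) - 1))
    return list(max(permutations(xs), key=utility))
```

The search version is wildly inefficient, but its *specification* is just a one-line objective plus an off-the-shelf optimizer. The analogy to the claim: a program search weighted toward short specifications may land on the "objective + optimizer" form, i.e. an agent, even when an explicit non-agentic solution exists.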
I think claim 3 is wrong because agenty solutions exist for all tasks, even classification tasks. For example, take the function which spins up an agent, tasks that agent with the classification task, and then takes that agent’s output. Unless you’ve done something to explicitly remove agents from your search space, this sort of solution always exists.
Thus, I think claim 6 is wrong due to my complaints about claims 1 and 3.
Re: first point, I think this is a difference in intuition about how simple (and hence easy to find) agents are in search space. My intuition is that they would be harder to find than regular functions that just do the task directly; this is generated by a more general intuition that finding a function that does A is easier than finding a function that does both A and B.
Re: second point, I agree that there will be some agents in the search space. Claim 3 is that if claims 1 and 2 are true, then (for the specified type of task) it is very unlikely that the optimization process will find an agent; there is still a nonzero probability that it does, just a small one.