I thought some of the “experts” Gato was trained on were not from-scratch models but rather humans—e.g. images and text generated by humans.
Relatedly, instead of using a model as the “expert” couldn’t you use a human demonstrator? Like, suppose you are training it to control a drone flying through a warehouse. Couldn’t you have humans fly the drones for a bit and then have it train on those demonstrations?
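(A minimal sketch of what “train on those demonstrations” could mean in practice: supervised behavioral cloning on logged human (observation, action) pairs. All names, shapes, and numbers below are hypothetical, and this is a schematic, not Gato’s actual setup.)

```python
# Minimal behavioral-cloning sketch: fit a policy to human demonstrations.
# All names, dimensions, and data here are hypothetical stand-ins.
import torch
import torch.nn as nn

class DronePolicy(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, act_dim),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

# Suppose human pilots produced (observation, action) pairs while flying.
obs = torch.randn(10_000, 32)     # stand-in for logged sensor readings
actions = torch.randn(10_000, 4)  # stand-in for logged control inputs

policy = DronePolicy(obs_dim=32, act_dim=4)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

for step in range(1_000):
    idx = torch.randint(0, obs.shape[0], (256,))
    loss = nn.functional.mse_loss(policy(obs[idx]), actions[idx])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```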
It’s not really any more an “agent” than my hypothetical cloud drive with a bunch of SOTA models on it. Prompting Gato is the equivalent of picking a file from the drive; if I want to do a novel task, I still have to finetune, just as I would with the drive. (A real AGI, even a weak one, would know how to finetune itself, or do the equivalent.)
This is false if significant transfer/generalization starts to happen, right? A drive full of a bunch of SOTA models, plus a rule for deciding what to use, is worse than Gato to the extent that Gato is able to generalize few-shot or zero-shot to new tasks and/or insofar as Gato gets gains from transfer.
EDIT: Meta-comment: I think we are partially just talking past each other here. For example, you think that the question is ‘will it ever reach the Pareto frontier,’ which is definitely not the question I care about.
Meta-comment of my own: I’m going to have to tap out of this conversation after this comment. I appreciate that you’re asking questions in good faith, and this isn’t your fault, but I find this type of exchange stressful and tiring to conduct.
Specifically, I’m writing at the level of exactness/explicitness that I normally expect in research conversations, but it seems like that is not enough here to avoid misunderstandings. It’s tough for me to find the right level of explicitness while avoiding the urge to put thousands of very pedantic words in every comment, just in case.
Re: non-RL training data.
Above, I used “RL policies” as a casual synecdoche for “sources of Gato training data,” for reasons similar to those that lead this post by Oliver Sourbut to focus on RL/control.
Yes, Gato had other sources of training data, but (1) the RL/control results are the ones everyone is talking about, and (2) the paper shows that the RL/control training data is driving those results (they get even better RL/control outcomes when they drop the other data sources).
Re: gains from transfer.
Yes, if Gato outperforms a particular RL/control policy that generated training data for it, then having Gato is better than merely having that policy, in the case where you want to do its target task.
However, training a Gato is not the only way of reaping gains from transfer. Every time we finetune any model, or use multi-task training, we are reaping gains from transfer. The literature (incl. this paper) robustly shows that we get the biggest gains from transfer when transferring between similar tasks, while distant or unrelated tasks yield no transfer or even negative transfer.
So you can imagine a spectrum ranging from (a schematic sketch in code follows this list):
1. “pretrain only on one very related task” (i.e. finetuning a single narrow-task model), to
2. “pretrain on a collection of similar tasks” (i.e. multi-task pretraining followed by finetuning), to
3. “pretrain on every task, even those where you expect no or negative transfer” (i.e. Gato).
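To make the spectrum concrete, here is a schematic of the three regimes as data-mixture choices. The task names are invented for illustration; they are not Gato’s actual task list.

```python
# Schematic of the three pretraining regimes as data-mixture choices.
# Task names are invented for illustration, not Gato's actual task list.

TARGET_TASK = "warehouse_drone_navigation"

MIXTURES = {
    # (1) Pretrain on one closely related task, then finetune on the target.
    "single_related_task": ["outdoor_drone_navigation"],
    # (2) Pretrain on a family of similar tasks, then finetune on the target.
    "multi_task_family": [
        "outdoor_drone_navigation",
        "quadcopter_hovering",
        "indoor_slam",
    ],
    # (3) Pretrain on everything, related or not (the Gato-style extreme).
    "everything": [
        "outdoor_drone_navigation",
        "quadcopter_hovering",
        "indoor_slam",
        "atari_breakout",
        "image_captioning",
        "web_text",
    ],
}

for regime, tasks in MIXTURES.items():
    print(f"{regime}: pretrain on {tasks}, then finetune on {TARGET_TASK!r}")
```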
The difference between Gato (3) and ordinary multi-task pretraining (2) is that, whereas the latter trains on only a few closely related tasks, Gato also trains on many other, less related tasks.
It would be cool if this helped, and sometimes it does help, as in this paper on training small transformers on many modalities at once for multi-modal learning. But this is not what the Gato authors found—indeed it’s basically the opposite of what they found.
We could use a bigger model in the hope that it will get us some gains from distant transfer (and there is some evidence that this will help), but with the same resources, we could also restrict ourselves to more relevant data and then train a smaller (or same-sized) model on more of it. Gato is at one extreme end of this spectrum, and everything suggests the optimum is somewhere in the interior.
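As a rough illustration of that fixed-budget tradeoff, using the common approximation that training compute is about 6 × parameters × tokens, here is a toy comparison. All the budget and size numbers below are invented.

```python
# Rough illustration of the fixed-compute tradeoff, using the common
# approximation that training FLOPs ~ 6 * params * tokens.
# All budget and model-size numbers are invented for illustration.

BUDGET_FLOPS = 1e21

def tokens_for(params: float, budget: float = BUDGET_FLOPS) -> float:
    """Tokens trainable under the budget at a given parameter count."""
    return budget / (6 * params)

# Option A: a bigger model over a broad mixture, hoping for distant transfer.
big_params = 1.2e9
# Option B: a smaller model, with the whole budget spent on related data.
small_params = 350e6

print(f"Option A: {big_params:.2e} params, "
      f"{tokens_for(big_params):.2e} tokens of mixed data")
print(f"Option B: {small_params:.2e} params, "
      f"{tokens_for(small_params):.2e} tokens of related data")
```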
Oliver’s post, which I basically agree with, has more details on the transfer results.