I think it is pretty clear right now that 2022-AI is less sample-efficient than humans. I think other forms of efficiency (e.g., power efficiency, efficiency of SGD vs. evolution) are less relevant here.
To me this isn’t clear. Yes, we’re better one-shot learners, but I’d say the most likely explanation is that the human training set is larger and that much of that training set is hidden away in our evolutionary past.
It’s one thing to estimate evolution FLOP (and as Nuño points out, even that is questionable). It strikes me as much more difficult (and even more dubious) to estimate the “number of samples” or “total training signal (bytes)” over one’s lifetime / evolution.
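To illustrate why such estimates are shaky, here is a toy Fermi calculation of lifetime sensory "training signal" in bytes. Every number in it is an illustrative assumption (lifespan, bytes-per-second rates), not a measurement; the point is only that defensible-sounding inputs already span several orders of magnitude.

```python
# Toy Fermi estimate of lifetime "training signal" in bytes.
# All parameters below are illustrative assumptions, not measurements.

SECONDS_PER_YEAR = 365 * 24 * 3600


def lifetime_bytes(years: float, bytes_per_second: float) -> float:
    """Total raw sensory input over a lifetime at a constant data rate."""
    return years * SECONDS_PER_YEAR * bytes_per_second


# Plausible-sounding rates for visual input alone differ by ~1000x,
# depending on whether you count raw photoreceptor output (~MB/s scale)
# or the compressed post-retinal signal (~tens of KB/s scale).
low = lifetime_bytes(30, 10_000)         # assumed ~10 KB/s, 30 years
high = lifetime_bytes(30, 10_000_000)    # assumed ~10 MB/s, 30 years

print(f"low estimate:  {low:.2e} bytes")
print(f"high estimate: {high:.2e} bytes")
print(f"spread: {high / low:.0f}x")
```

Even before touching evolutionary "training data", the lifetime figure alone moves by a factor of 1000 depending on where you draw the line for what counts as signal.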