Yes, good questions, but I think there are convincing answers. Here’s a shot:
1. Some kinds of data can be created this way, like parallel corpora for translation or video annotated with text. But I think it’s selection bias that makes it seem like most cases work out this way: the cases we’re familiar with are exactly the ones that were easy to create! Transformative tasks are hard, and creating data that really contains, latent in it, the general structures necessary for task performance is also hard. I’m not saying research can’t solve it, but if you want to estimate a timeline, you can’t consign this part of the puzzle to a footnote of the form “lots of research resources will solve it”. Or, if you do, you might as well relax the whole project and apply only that level of precision across the board.
2. At least in NLP (the AI subfield with which I’m most familiar), my sense of the field’s zeitgeist is quite contrary to “compute is the issue”. I think there’s a large, maybe majority, current of thought that our current benchmarks are crap, that performance on them doesn’t relate to any interesting real-world task, that optimizing on them is of really unclear value, and that the field as a whole is unfortunately rudderless right now. This current runs through many young DL researchers, not just the Gary Marcuses of the world. That’s not a formal survey or anything, just my sense from reading NLP papers and Twitter. But by the same token, I think the notion that compute is the bottleneck is overrepresented in the LessWrong sphere relative to the field at large.
3. Humans not needing much data is misleading IMO, because the human brain comes highly optimized out of the box at birth, and that optimization is the result of a long evolutionary process. To be clear, I agree that achieving human-level AI is enough to count as transformative, and it may well be a second-long epoch on the way to much more powerful AI. But you have basically the same question to answer there: I’d still object that Bio Anchors doesn’t address the datasets/environments issue even for making just human-level AI. Changing the scope to “merely” human doesn’t answer the objection.
Q/A. As for recent progress: no, I think there has been very little! I’m only really familiar with NLP, so there might be more progress in RL environments. (My very vague sense of RL is that it’s still just “video games you can put an agent in”, and basically always has been, but don’t take it from me.) As for NLP, there is basically nothing new in the last 10 years. We have lots of unlabeled text for language models, we have parallel corpora for translation, and we have labeled datasets for things like question answering (see here for a larger list of supervised tasks). I think it’s really unclear whether any of these have latent in them the structures necessary for general language understanding. GPT is the biggest glimmer of hope recently, but part of the problem even there is that we can’t really quantify how close it is to general language understanding. We don’t have a good way of measuring this! And without a measurement, we certainly can’t train on it, because we can’t compute a loss function. There are maybe some arguments that, in the limit, unlabeled text with the LM objective is enough, but that limit might really be more text than can fit on earth, and we’d need to get a handle on that for any estimates.
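To make the contrast concrete, here’s a minimal sketch of what “the LM objective” amounts to (PyTorch-style; the `lm_loss` function and the toy model are my illustrative assumptions, not anyone’s actual training code). The point is that this loss is computable from raw unlabeled text alone; there is no analogous `general_understanding_loss` we know how to write down, which is exactly the measurement gap I mean:

```python
import torch
import torch.nn.functional as F

def lm_loss(model, token_ids):
    """Next-word-prediction (LM) loss for a batch of token id sequences.

    Assumes `model` maps (batch, seq) token ids to (batch, seq, vocab)
    logits. Raw unlabeled text is the only supervision required, which
    is why extra compute has an obvious use under this objective.
    """
    inputs, targets = token_ids[:, :-1], token_ids[:, 1:]  # shift by one token
    logits = model(inputs)
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),  # (batch * (seq-1), vocab)
        targets.reshape(-1),                  # (batch * (seq-1),)
    )

# Toy usage, purely illustrative:
vocab, seq = 100, 16
model = torch.nn.Sequential(torch.nn.Embedding(vocab, 32), torch.nn.Linear(32, vocab))
loss = lm_loss(model, torch.randint(0, vocab, (4, seq)))
```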
Final point: I’m mostly looking for a qualitative acknowledgement that this problem of datasets/environments is hard and unsolved (or arguments for why it isn’t) and is as important as compute, and, building on that, for serious attention to be paid to analyzing what it would take to make the right datasets/environments. Rather than consigning it to an “everything else” parameter, analyze what it might take to make better datasets/environments, including trying to get a handle on whether we even know how. I think this would make for a much better analysis, and it would address some of Eliezer’s concerns because it would cover more of the specific, mechanistic story about the path to creating transformative AI.
(Full disclosure: I’ve personally done work on making better NLP benchmarks, which I guess has given me an appreciation for how hard and unsolved this problem feels. So, discount appropriately.)
Caveating that I did a lot of skimming of both Bio Anchors and Eliezer’s response, the part of Bio Anchors that seemed weakest to me was this:
I think the existence of proper datasets/environments is a huge issue for current ML approaches, and you have to assign some nontrivial weight to it being a much bigger bottleneck than computational resources. Like, we’re lucky that GPT-3 is trained with the LM objective (predict the next word), for which there is a lot of naturally-occurring training data (written text). Lucky, because that puts us in a position where there’s something obvious to do with additional compute. But if we hit a limit following that approach (and I think it’s plausible that the signal is too weak in otherwise-unlabeled text), then we’re rather stuck. Thus, to get timelines, we’d also need to estimate what datasets/environments are necessary for training AGI. But I’m not sure we know what these datasets/environments look like. An upper bound is “the complete history of earth since life emerged”, or something… not sure we know any better.
I think parts of Eliezer’s response intersect with this concern, e.g. the energy use analogy. It’s the same sort of question: how well do we know what the missing ingredients are? Do we know that compute occupies enough of the surface area of possible bottlenecks for a compute-based analysis to be worth much? I’m specifically suggesting that environments/datasets occupy enough of that surface area to seriously undermine the analysis.
Does Bio Anchors deal with this concern beyond the brief mention above (and I missed it, which is very possible)? Or are there other arguments out there suggesting that compute really is all that’s missing?