I think a different phenomenon is occurring. My guess, updating on my own experience, is that ideas aren’t the current bottleneck. 1% inspiration, 99% perspiration.
As someone who has been reading 3-20 papers per month for many years now, in neuroscience and machine learning, I feel overwhelmed with ideas. I average about 0.75 new ideas per paper. I write them down, and the lists grow roughly two orders of magnitude faster than they shrink.
When I was on my favorite industry team, what I most valued about my technical manager was his ability to help me sort through and prioritize them. It was like I created a bunch of LEGO pieces, he picked one to be next, I put it in place by coding it up, and he checked the placement by reviewing my PR. If someone had offered me a source of ideas ranging in quality from worse than my worst ideas to almost as good as my best ideas, and skewed towards bad… I’d have laughed and turned them down without a second thought.
For something like a paper, rather than a minor tech idea for a one-week PR, the situation is far more extreme. The grunt work of running the experiments and preparing the paper is enormous compared to the time and effort of coming up with the idea in the first place. More like 0.1% to 99.9%.
Current LLMs can speed up writing a paper if given the results and a description of the experiments to write about. That’s probably also not the primary bottleneck (although it’s still more of one than idea generation).
So the current bottleneck for ML research, in my estimation, is the experiments themselves: coding them up accurately and efficiently, running them (and covering the compute costs), and analyzing the results.
So I’ve been expecting to see an acceleration that depends on that aspect. That’s hard to measure, though. Are LLMs currently speeding this work up a little? Probably. My own work has been sped up somewhat by the recent Sonnet 3.5.1. Currently, though, it’s a trade-off: there’s overhead in checking for misinterpretations and correcting bugs. We still seem a long way in “capability space” from my being able to give the model a background paper and a rough experiment description and have it do the rest. Only once that’s the case will idea generation become my bottleneck.
That’s the opposite of my experience. Nearly all the papers I read vary between “trash, I got nothing useful out besides an idea for a post explaining the relevant failure modes” and “high quality but not relevant to anything important”. Setting up our experiments is historically much faster than the work of figuring out what experiments would actually be useful.
There are exceptions to this: large projects which seem useful and would require lots of experimental work. But they’re usually much lower-expected-value-per-unit-time than going back to the whiteboard, understanding things better, and doing a simpler experiment once we know what to test.
Ah, well, for most papers that spark an idea in me, the idea isn’t simply an extension of the paper. It’s a tangentially related question that probes at my own frontier of understanding.
I’ve always found that a boring lecture is a great opportunity to brainstorm because my mind squirms away from the boredom into invention and extrapolation of related ideas. A boring paper does some of the same for me, except that I’m less socially pressured to keep reading it, and thus less able to squeeze my mind with the boredom of it.
As for coming up with ideas… It is a weakness of mine that I am far better at generating ideas than at critiquing them (my own or others’). Which is why I worked so well in a team where I had someone I trusted to sort through my ideas and pick out the valuable ones. It sounds to me like you have a better filter on idea quality.
That’s mostly my experience as well: experiments are near-trivial to set up, and setting up any experiment that isn’t near-trivial to set up is a poor use of time that could instead be spent thinking about the topic a bit more and realizing what the experimental outcome would be, or why this would be entirely the wrong experiment to run.
But the friction costs of setting up an experiment aren’t zero. If it were possible to sort of ramble an idea at an AI and then have it competently execute the corresponding experiment (or set up a toy formal model and prove things about it), I think this would be able to speed up even deeply confused/non-paradigmatic research.
… That said, I think the sorts of experiments we do aren’t the sorts of experiments ML researchers do. I expect they’re often things like “do a pass over this lattice of hyperparameters and output the values that produce the best loss” (and more abstract equivalents of this that can’t be as easily automated using mundane code). And which, due to the atheoretic nature of ML, can’t be “solved in the abstract”.
So ML research perhaps could be dramatically sped up by menial-software-labor AIs. (Though I think even now the compute needed for running all of those experiments would be the more pressing bottleneck.)
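(To make the kind of “menial” sweep I have in mind concrete, here’s a minimal sketch in Python. `train_and_eval` is a hypothetical stand-in for whatever experiment is actually being run; here it’s a toy synthetic function so the sketch executes end to end.)

```python
# Minimal sketch of a "menial" hyperparameter sweep: brute-force a small grid
# and report the configuration with the best loss.
from itertools import product

def train_and_eval(lr: float, batch_size: int, weight_decay: float) -> float:
    # Toy stand-in for a real training run: pretend a particular lr, batch
    # size, and weight decay minimize the "loss".
    return (lr - 3e-4) ** 2 + 0.001 * abs(batch_size - 64) + (weight_decay - 0.01) ** 2

grid = {
    "lr": [1e-4, 3e-4, 1e-3],
    "batch_size": [32, 64, 128],
    "weight_decay": [0.0, 0.01, 0.1],
}

best_loss, best_config = float("inf"), None
for values in product(*grid.values()):
    config = dict(zip(grid.keys(), values))
    loss = train_and_eval(**config)
    if loss < best_loss:
        best_loss, best_config = loss, config

print(f"best loss {best_loss:.6f} with {best_config}")
```

The code itself is trivial; the expensive part in practice is that each `train_and_eval` call is a full training run, which is why compute, rather than the coding labor, seems like the more pressing bottleneck.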
Consider: https://www.cognitiverevolution.ai/can-ais-generate-novel-research-ideas-with-lead-author-chenglei-si/
Convincing.