But aren’t a lot of your tasks the sort of thing where:
- there is in fact a ton of training-available data demonstrating good performance,
- it’s cheap to experiment,
- etc. (other relevant peculiarities of your use cases)?
I think the claim might be true but I don’t see a super compelling reason to think so at the moment.
“Reasoning” helping with self-driving cars might be a compelling demo, but what it would demonstrate is that you can slap together robotics, big data for a specific domain, and some LLM reasoning to duct-tape more of the decision-making, and get something practically useful. Generalizing that to other robotics could kick off a revolution, but I’d expect it to be slow going.
There could be a fair amount of science overhang, where you just have to search hard enough to put X and needs-X together. E.g. people curing themselves by searching hard using LLMs. Exciting, but not an industrial revolution. And in the grand scheme of science, most progress isn’t that kind of matchmaking: a lot of the coolest stuff is really hard, which means there aren’t many people at the forefront, which means the people at the forefront are already familiar with most of what’s relevant.
If you can find domains where iteration can be done pretty automatedly, but experiments are expensive enough that decision-making still matters, and good decision-making is very cognitively costly, yet even kinda-okay, uncreative decision-making would be quantitatively better than nothing, then you could unlock some new paradigm of invention / discovery. E.g. automated labs running automated experiments designing proteins by gippity-tweaking, or similar, along the lines of PACE. But that would also be hard to get started on.
What are other reasons to think this? It’s plausible I just haven’t seen the idea; I haven’t tried too hard to find it.