That is, given that we get useful work out of AI-driven research before things fall apart (reasonable, but not guaranteed).
That being said, this strategy relies on the approaches that are fruitful for us being the same approaches that are fruitful for research that is AI-assisted, AI-accelerated, or done by AI outright (again reasonable, but not certain).
It also relies on work done now giving useful direction, especially if parallelism grows faster than serial speed.
In short, this says that if time horizons to AI assistance are short, the most important things are A. a framework to verify an approach, so we can hand it off, and B. information about whether the approach will ultimately be workable.
As always, it seems to bias towards long-term approaches where you can do the hard part first.
What is being excluded by this qualification?
Mainly things that we would never think of: fruitful for AI, but not for us.
Things that are useful for us but not for AI are things like investigating gaps in tokenization, hiding things from AI, and things that are hard to explain or judge, since we probably ought to trust AI researchers less than we do human researchers with regard to good faith.
That seems correct, but I think all of those are still useful to investigate with AI, despite the relatively higher bar.