It’s not entirely clear to me that the math works out for AIs being helpful on net relative to humans just doing the work themselves, given the supervision required and the trust and misalignment issues.
But on this question (for AIs that are just capable of “prosaic and relatively unenlightened ML research”), I’m making shot-in-the-dark guesses. It’s very unclear to me what is and isn’t possible.
I certainly agree it isn’t clear, just my current best guess.