Thank you! I have a feeling Sutton will succeed, without having to make too many huge architectural leaps—we already have steady progress in generalization, and extracting formulae that fit observations is getting better. It will probably come down to some embarrassing moment where a researcher says “well, what if we just try it like this?”
And, with that ‘generalized-concept-extractor’ in hand… we’ll find that we get better performance with the narrow AI that was AutoML’d into being in a few minutes. AGI research will grind to a halt as soon as it succeeds.
not sure I agree, but I love the post. what are your thoughts on this paper? https://www.semanticscholar.org/paper/The-Alberta-Plan-for-AI-Research-Sutton-Bowling/f3829d2f1de5c735c7767322bf742746dc682d4b