A better example would be e.g. Peter Norvig, whose credentials are vastly more impressive than yours (or, granted, than mine), and who thinks we need to get at least another couple of decades of progress under our belts before there will be any point in resuming attempts to work on AGI. (Even I’m not that pessimistic.)
If this means "until the theory and practice of machine learning is better developed, any attempt to build an AGI with existing tools will very probably fail," it's not unusually pessimistic at all. "An investment of $X in developing AI theory will do more to reduce the mean time to AI than $X on AGI projects using existing theory now" isn't so outlandish either. What was the context/cite?
I don't have the reference handy, but he wasn't saying we should spend 20 years of armchair thought developing AGI theory before we start writing any code (I'm sure he knows better than that); he was saying forget about AGI completely until we've got another 20 years of general technological progress under our belts.
Not general technological progress surely, but the theory and tools developed by working on particular machine learning problems and methodologies?
Those would seem likely to be helpful indeed. Better programming tools might also help, as would additional computing power (not so much because computing power is actually a limiting factor today, as because we tend to scale our intuition about available computing power to what we physically deal with on an everyday basis—which, for most of us, is a cheap desktop PC—and we tend to flinch away from designs whose projected requirements would exceed such a cheap PC; raising that baseline makes us less likely to flinch away from good designs).