I think it’s reasonable to question how relevant this achievement is to our estimates of the speed of GAI, and I appreciate the specification of what evidence Robin thinks would be useful for estimating this speed—I just don’t see its relevance.
“The best evidence regarding the need for complexity in strong broad systems is the actual complexity observed in such systems.”
There is an underlying question of how complex a system needs to be to exhibit GAI; a single example of evolved intelligence sets an upper limit, but is very weak evidence about the minimum. So my question for Robin is: what evidence can we look for about such a minimum?
(My suggestion: the success of “Narrow AI” at tasks like translation and writing seems like clear but weak evidence that many products of a moderately capable mind are achievable by relatively low-complexity AI with large datasets. Robin: do you agree? If not, what short-term successes or failures do you predict on the basis of your assumption that NAI successes aren’t leading to better GAI?)