Almost every flying machine innovator was quite public about his goal. And there were a lot of them. Still, a dark horse won.
Here, the situation is quite similar, except that a dark horse victory is not very likely.
If Google is unable to develop its Deep/Alpha product line into an effective AGI machine in less than 10 years, they are either utterly incompetent (which they aren’t) or this NN paradigm isn’t strong enough. Which sounds unlikely, too.
Others have an opportunity window of less than 10 years.
I am not too excited about the CPU/RAM requirements of this NN/ML style of racing. But they might be just good enough.
I think NNs are strong enough for ML; I just think that ML is the wrong paradigm. It is at best a partial answer: it does not capture a class of things that humans do that I think is important.
Mathematically, ML is trying to find a function from input to output. There are things we do in our language processing that do not fall into that. A couple of examples:
Attentional phrases: “This is important, pay attention.” This means that you should devote more mental energy to processing/learning whatever is happening around you. To learn to process this kind of phrase, you would have to be able to create a map from input to some form of attention control. This form of attention control has not been practised in ML; it is assumed that if data is being presented to the algorithm, it is important data.
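To make the gap concrete, here is a toy sketch (my own illustration, not any existing ML framework): a learner whose input stream can modulate its own learning strength, so the attentional phrase maps to control state rather than to an output.

```python
# Toy sketch (hypothetical): a learner where one kind of input does not
# produce an output at all -- it adjusts how strongly the *next* input
# is learned.  Standard supervised ML has no channel like this.

class AttentiveLearner:
    def __init__(self):
        self.memory = {}        # fact -> learned strength
        self.attention = 1.0    # multiplier applied to the next fact

    def observe(self, token):
        if token == "this is important, pay attention":
            # The input maps to attention control, not to an output.
            self.attention = 5.0
        else:
            self.memory[token] = self.memory.get(token, 0.0) + self.attention
            self.attention = 1.0  # attention decays back after one item

learner = AttentiveLearner()
learner.observe("the sky is blue")
learner.observe("this is important, pay attention")
learner.observe("the stove is hot")
# "the stove is hot" is now stored 5x as strongly as "the sky is blue"
```

The point of the sketch is only that the attentional phrase never appears in the input-to-output mapping; it rewires the learner’s state instead.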
Language about language: “The word for word in French is mot.” This changes not only the internal state, but also the mapping of input to internal state (and the mapping of input to the mapping of input to internal state). Processing it and phrases like it would allow you to process the phrase “le mot à mot en allemand est wort” (“the word for word in German is Wort”). It is akin to learning to compile down a new compiler.
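A toy sketch of that second example (entirely hypothetical, not a real NLP system): a processor whose inputs can rewrite its own input mapping, so that once enough French vocabulary has been learned, the French meta-sentence itself becomes understandable and teaches a German word in turn.

```python
import re

# Toy sketch (hypothetical): input that changes the input mapping itself.
# A meta-sentence of the form "the word for X in <language> is Y" teaches
# a new word; after enough French is known, the *French* meta-sentence
# can be understood too, and it teaches a German word.

vocab = {}  # foreign word -> English word

META = re.compile(r"the word for (\w+) in \w+ is (\w+)")

def process(sentence):
    # Step 1: map the input through the current lexicon.
    english = " ".join(vocab.get(w, w) for w in sentence.lower().split())
    # Step 2: if the mapped sentence is a meta-sentence, update the
    # lexicon -- the input has just changed the input mapping.
    m = META.match(english)
    if m:
        vocab[m.group(2)] = m.group(1)

process("the word for word in french is mot")   # learns mot -> word
# Assume these function words were learned earlier by the same mechanism:
vocab.update({"le": "the", "à": "for", "en": "in",
              "est": "is", "allemand": "german"})
process("le mot à mot en allemand est wort")    # now reads as "the word
# for word in german is wort", so it learns wort -> word.
```

The bootstrap step is the interesting part: the second `process` call only works because earlier inputs rewrote the very mapping it is processed through.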
You could maybe approximate both of these tasks with a crazy hotchpotch of ML systems. But I think that way is a blind alley.
Learning both of these abilities will have some ML involved. However, language is weird, and we have barely scratched the surface of how it interacts with learning.
I’d put some money on AGI being pretty different to current ML.
I’d put some money on AGI being pretty different to current ML.
Me too. It’s possible to go the NN/ML (a lot of acronyms and no good name) way, and I don’t think it’s a blind alley, but it is a long way. Not the most efficient use of computing resources, by far.
And yes, there are important problems where the NN approach is particularly clumsy.
Just give those NN guys a shot. Reality will decide.