I would say not many at all! They might be working on something they call AGI, but I think we need a change in viewpoint before we can start making progress towards the important general aspect of it.
I think the closest people are the transfer learning people. They are at least trying something different. I think we need to solve the resource allocation problem first, then we can layer ML/language inside it. Nothing is truly general; general intelligences can devote resources to solving different problems at different times, and can get knowledge about solving problems from other general intelligences.
Sometimes I think that there are fewer people who explicitly work on universal AGI than people who work on AI safety.
I’ve got an article brewing on the incentives for people not to work on AGI.
Company incentives:
They are making plenty of money with normal AI/dumb computers; there is no need to go fancy.
It is hard to monetise in the way companies are used to. There is no need for an upgrade cycle: the system maintains and upgrades itself. No expensive training is required either; it trains itself to understand its users. Sell a person an AGI and you never sell them software again, versus SaaS.
For internal software, companies optimise for simple software that people can understand and that many people can maintain. There is a high activation energy required to go from simple software that people maintain to a complex system that can maintain itself.
Legal minefield. Who has responsibility for an AGI's actions, the company or the user? This is solved if it can be sold as intelligence augmentation: shipped in a very raw state with little knowledge, and trained/given more responsibility by the user.
Programmer incentives:
Programmers don’t want to program themselves out of a job.
Programmers also optimize for simple things that they can maintain/understand.
I'm guessing that if it ever stops being easy to make money as a software company, the other incentives might get overridden.
The only real reason to make AGI is if you want to take over the world (or solve other big problems). And if you are serious about that, you will not put it on your web page. So we will almost never see credible claims of work on AGI, and especially not on self-improving superintelligence.
Exception: Schmidhuber
Exception: Goertzel, and just about every founder of the AI field, who worked on AI mainly as a way of understanding thought and building things like us.
Almost every flying machine innovator was quite public about his goal. And there were a lot of them. Still, a dark horse won.
Here, the situation is quite similar, except that a dark horse victory is not very likely.
If Google is unable to improve its Deep/Alpha product line into an effective AGI machine in less than 10 years, they are either utterly incompetent (which they aren't) or this NN paradigm isn't strong enough. Which sounds unlikely, too.
Others have an opportunity window of less than 10 years.
I am not too excited about the CPU/RAM requirements of this NN/ML style of racing. But it might be just good enough.
I think NNs are strong enough for ML; I just think that ML is the wrong paradigm. It is at best a partial answer: it does not capture a class of things that humans do which I think is important.
Mathematically, ML is trying to find a function from input to output. There are things we do in our language processing that do not fall into that frame. A couple of examples:
Attentional phrases: “This is important, pay attention” means that you should devote more mental energy to processing/learning whatever is happening around you. To learn to process this kind of phrase, you would have to be able to create a map from input to some form of attention control. This form of attention control has not been practised in ML; it is assumed that if data is being presented to the algorithm, it is important data.
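To make the contrast concrete, here is a minimal toy sketch (the class, the cue set and the numbers are all invented for illustration, not any existing library's API) of a learner where "pay attention" is treated as a control signal that scales subsequent updates, rather than as just another input/output pair to fit:

    # Toy sketch: a control phrase modulates how strongly the learner
    # updates on subsequent input, instead of being fitted as ordinary data.
    # AttentiveLearner, ATTENTION_CUES and the constants are hypothetical.
    ATTENTION_CUES = {"this is important, pay attention"}

    class AttentiveLearner:
        def __init__(self, base_lr=0.01, boost=10.0):
            self.weights = {}        # token -> weight; a trivial stand-in "model"
            self.base_lr = base_lr
            self.boost = boost
            self.attention = 1.0     # multiplier applied to future updates

        def observe(self, utterance, signal):
            text = utterance.lower().strip(" .,!")
            if text in ATTENTION_CUES:
                # Not another (input, output) pair: it changes how strongly
                # the following pairs are learned from.
                self.attention = self.boost
                return
            lr = self.base_lr * self.attention
            for token in text.split():
                w = self.weights.get(token, 0.0)
                self.weights[token] = w + lr * (signal - w)
            self.attention = 1.0     # attention decays back to baseline

    learner = AttentiveLearner()
    learner.observe("the meeting is at noon", signal=0.2)
    learner.observe("This is important, pay attention", signal=0.0)
    learner.observe("the deadline moved to friday", signal=1.0)  # updated ~10x harder

In standard supervised ML the third sentence would be weighted the same as the first; the point is that the cue maps input to learning control, not to output.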
Language about language: “The word for word in French is mot” changes not only the internal state, but also the mapping of input to internal state (and the mapping of input to that mapping of input to internal state). Processing it and other such phrases would allow you to process the phrase “le mot à mot en allemand est wort” (“the word for word in German is Wort”). It is akin to learning to compile down a new compiler.
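In the same spirit, here is a minimal sketch (class name, regex and behaviour all invented for illustration) of a system where a meta-linguistic sentence installs a new mapping rather than just a new fact. A fuller version would also let learned vocabulary generate new parsing rules, which is the bootstrapping step the compiler analogy points at:

    import re

    # Toy sketch: "The word for X in LANGUAGE is Y" rewires the lexicon that
    # later utterances are interpreted through, so it changes the
    # input-to-state mapping and not just the state. Entirely hypothetical.
    class MetaInterpreter:
        def __init__(self):
            self.lexicon = {}   # (language, english_word) -> foreign_word

        def process(self, utterance):
            m = re.match(r"the word for (\w+) in (\w+) is (\w+)", utterance.lower())
            if m:
                english_word, language, foreign_word = m.groups()
                self.lexicon[(language, english_word)] = foreign_word
                return f"learned: {english_word} -> {foreign_word} ({language})"
            return f"no rule for: {utterance}"

        def translate(self, english_word, language):
            return self.lexicon.get((language, english_word), "<unknown>")

    agent = MetaInterpreter()
    agent.process("The word for word in French is mot")
    print(agent.translate("word", "french"))   # mot

To process the French version of the same sentence, the English rule would first have to install enough French to add a French parsing rule, layer by layer, which is what makes this feel like compiling a new compiler rather than fitting one fixed function.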
You could maybe approximate both these tasks with a crazy hotchpotch of ML systems. But I think that way is a blind alley.
Learning both of these abilities will have some ML involved. However, language is weird and we have not scratched the surface of how it interacts with learning.
I’d put some money on AGI being pretty different to current ML.
Me too. It's possible to go the NN/ML (a lot of acronyms and no good name) way, and I don't think it's a blind alley, but it's a long way. Not the most efficient use of computing resources, by far.
And yes, there are important problems where the NN approach is particularly clumsy.
Just give those NN guys a shot. Reality will decide.