If I had to take a gamble on which organization was best primed to cause the Singularity, I’d probably also pick Google, mainly because they seem to be gathering the world’s best machine learning researchers together. They’ve already hired Geoffrey Hinton from Toronto and Andrew Ng from Stanford. Both of these researchers are considered among the foremost minds in machine learning, and both have been working on deep neural networks, which have shown a lot of promise on pattern-recognition problems like object recognition and speech recognition. Last I heard, Google managed to improve the performance of their speech recognition software by something like 20% by switching to such neural nets.
It’s my own opinion that machine learning is the key to AGI, because any general intelligence needs to be able to learn about things it hasn’t been programmed to know already. That adaptability, the ability to change parameters or code in response to new information, is, I think, an essential element that separates a mere optimization algorithm from something capable of developing general intelligence. Also, this is just a personal intuition, but I think that reasoning effectively about the world requires being able to semantically represent things like objects and concepts, which is something that artificial neural networks can hypothetically do, while things like expert systems tend to just shuffle symbols around syntactically without really understanding what they mean.
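To make the “change parameters to adapt to new information” idea concrete, here’s a toy sketch (my own illustration, not anything from Google’s systems): a single artificial neuron that learns the rule y = 2x + 1 purely from examples. The rule is never programmed in; the parameters w and b start out knowing nothing and adapt to the data via gradient descent.

```python
# Toy illustration of parameter adaptation: a single linear "neuron"
# learns y = 2x + 1 from examples alone. Nothing about the target
# function is hard-coded; only the update rule is.

w, b = 0.0, 0.0   # parameters start with no knowledge of the rule
lr = 0.05          # learning rate

# Training examples drawn from the (unknown to the learner) rule
data = [(x, 2 * x + 1) for x in [-2, -1, 0, 1, 2]]

for epoch in range(500):
    for x, y in data:
        pred = w * x + b
        err = pred - y       # prediction error on this example
        w -= lr * err * x    # adapt parameters toward the data
        b -= lr * err

print(round(w, 2), round(b, 2))  # ends up near 2.0 and 1.0
```

The point of the sketch is just that the same update rule would have learned y = −3x + 7 instead, had the data said so; the knowledge lives in the learned parameters, not the program.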
The bottom-up approach is also likely to reach superhuman intelligence sooner than a top-down approach to A.I., since all we have to do is scale up artificial neural networks that copy the human brain’s architecture, whereas it seems like a top-down approach will have to come up with a lot of new math before it can really make progress. But then, I’m a connectionist, so I’m kinda biased.
Perhaps one interesting thought is that if the first superintelligent A.I. is actually an artificial neural network, it’ll probably be more “human-like”, or at least more similar to an evolved intelligence, than if it were created by top-down A.I. I’m not saying that gets rid of the Orthogonality Thesis, but it might mean that a neural-network-based A.I. is more likely to land in the part of mindspace that humans tend to fall into, because of similar architectures of sentience. Maybe.