Turing completeness is definitely the wrong metric for determining whether a method is a path to AGI. My learning algorithm of “generate a random Turing machine, test it on the data, keep it if it does a better job than all the other Turing machines I’ve generated so far, repeat” is clearly Turing complete, and will eventually learn any computable process, but it’s very inefficient, and we shouldn’t expect AGI to be generated by that algorithm anytime in the near future.
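To make the inefficiency concrete, here’s a minimal sketch of that generate-test-keep loop. Everything in it is a hypothetical simplification: random arithmetic-expression programs stand in for random Turing machines (a real implementation would sample transition tables and cap execution steps to dodge non-halting machines), and the names `random_program` and `loss` are made up for illustration.

```python
import random

# Toy stand-in for "generate a random Turing machine": random arithmetic
# expressions over the input x. A real implementation would sample Turing
# machine transition tables and bound the step count to handle non-halting
# machines; this sketch only illustrates the generate-test-keep loop.

OPS = [lambda a, b: a + b, lambda a, b: a - b, lambda a, b: a * b]

def random_program(depth=3):
    """Return a random function x -> int built from ops and constants."""
    if depth == 0 or random.random() < 0.3:
        if random.random() < 0.5:
            return lambda x: x
        c = random.randint(-5, 5)
        return lambda x: c
    op = random.choice(OPS)
    left, right = random_program(depth - 1), random_program(depth - 1)
    return lambda x: op(left(x), right(x))

def loss(prog, data):
    """Squared error of the program's output against the data."""
    return sum((prog(x) - y) ** 2 for x, y in data)

# Target process to learn: y = 3x + 1.
data = [(x, 3 * x + 1) for x in range(-5, 6)]

best, best_loss = None, float("inf")
for _ in range(20000):  # generate, test, keep the best so far, repeat
    prog = random_program()
    l = loss(prog, data)
    if l < best_loss:
        best, best_loss = prog, l

print("best loss:", best_loss)  # often 0, but only after many wasted samples
```

Even on this trivially easy target the search wastes almost all of its samples, and the waste grows exponentially with program size, which is exactly why “eventually learns anything” says nothing about being a practical path to AGI.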
Similarly, neural networks with one hidden layer are universal function approximators, and yet modern methods use very deep neural networks with lots of internal structure (convolutions, recurrences) because they learn faster, even though a single hidden layer is enough in theory to accomplish the same tasks.
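For the “enough in theory” side, here’s a minimal sketch of a single-hidden-layer network fitting sin(x) by plain gradient descent. The width, learning rate, and step count below are illustrative guesses, not tuned values; the point is only that one tanh layer plus a linear readout suffices for this kind of fit.

```python
import numpy as np

# One hidden tanh layer with a linear output, trained on mean squared error.
rng = np.random.default_rng(0)
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
Y = np.sin(X)

H = 50  # hidden width (illustrative; accuracy demands grow the width fast)
W1 = rng.normal(0, 1, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 1, (H, 1)); b2 = np.zeros(1)

lr = 0.01
for step in range(5000):
    # forward pass: single hidden layer, linear readout
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - Y
    # backward pass: gradients of mean squared error
    dW2 = h.T @ err / len(X); db2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ dh / len(X); db1 = dh.mean(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final MSE:", float((err ** 2).mean()))
```

A shallow net like this gets there, just slowly and with ever wider layers as the target gets harder, which is the same universality-versus-efficiency gap as with the random Turing machines.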
I was thinking that current methods could produce AGI (because they’re Turing-complete), and since they’re apparently good at producing some algorithms, they might be reasonably good at producing AGI.
The 2nd part of that wasn’t explicit to me before your answer, so thank you :)