I hate the term “Neural Network”, as do many serious people working in the field.
There are Perceptrons which were inspired by neurons but are quite different. There are other related techniques that optimize in various ways. There are real neurons which are very complex and rather arbitrary. And then there is the greatly simplified Integrate and Fire (IF) abstraction of a neuron, often with Hebbian learning added.
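To make the distinction concrete, here is a minimal sketch of the Integrate and Fire abstraction mentioned above: the neuron accumulates input over time, leaks some of its potential each step, and fires when a threshold is crossed. The threshold and leak constants are illustrative, not from the post.

```python
def simulate_if_neuron(inputs, threshold=1.0, leak=0.9):
    """Leaky Integrate and Fire: accumulate weighted input over time,
    emit a spike (1) and reset when the potential crosses the threshold."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current  # leak, then integrate
        if potential >= threshold:
            spikes.append(1)   # fire
            potential = 0.0    # reset after the spike
        else:
            spikes.append(0)
    return spikes

print(simulate_if_neuron([0.4, 0.4, 0.4, 0.4, 0.0, 0.6, 0.6]))
```

Even this toy version hints at why IF simulations are expensive: the neuron must be stepped through time, spike by spike, rather than computing a single weighted sum as a perceptron does.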
Perceptrons solve practical problems, but are not the answer to everything as some would have you believe. There are new and powerful kernel methods that can automatically condition data, which extend perceptrons. There are many other algorithms such as learning hidden Markov models. IF neurons are used to try to understand brain functionality, but are not useful for solving real problems (far too computationally expensive for what they do).
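For contrast with the IF neuron, here is a sketch of the classic perceptron learning rule, training a single unit to compute logical AND. The learning rate and epoch count are arbitrary choices for this toy problem, not anything from the post.

```python
def train_perceptron(samples, epochs=10, lr=0.1):
    """samples: list of ((x1, x2), target) pairs with 0/1 targets.
    Classic perceptron rule: nudge weights by the error on each sample."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_samples)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in and_samples])  # learns AND
```

A single perceptron can only learn linearly separable functions like AND; it is the kernel methods and multi-layer extensions alluded to above that get around that limitation.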
Which one of these quite different technologies is being referred to as “Neural Network”?
The idea of wiring perceptrons back onto themselves with state is old. Perceptrons have been shown to be able to emulate just about any function, so yes, they would be Turing complete. Being able to learn meaningful weights for such “recurrent” networks is relatively recent (1990s?).
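The recurrent wiring itself is simple to sketch: the unit's previous output is fed back in as an extra input, giving the network state across time steps. The weights below are hand-picked for illustration, not learned; learning such weights is the hard part the post refers to.

```python
import math

def recurrent_step(x, prev_state, w_in=2.0, w_rec=2.0, bias=-1.0):
    """One step of a single recurrent unit: weighted sum of the current
    input and the fed-back previous output, squashed through a sigmoid."""
    total = w_in * x + w_rec * prev_state + bias
    return 1.0 / (1.0 + math.exp(-total))

def run_sequence(xs):
    state = 0.0
    outputs = []
    for x in xs:
        state = recurrent_step(x, state)  # output is fed back as state
        outputs.append(state)
    return outputs

# A single 1 early in the sequence keeps influencing later outputs
# through the feedback connection, even after the input goes to zero.
print(run_sequence([1, 0, 0]))
```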
Reviewers wanted for a new book: When Computers Can Really Think.
The book aims at a general audience, and does not simply assume that an AGI can be built. It differs from others by considering how natural selection would ultimately shape an AGI’s motivations. It argues against the Orthogonality Principle, suggesting instead that there is ultimately only one super goal, namely the need to exist. It also contains a semi-technical overview of artificial intelligence technologies for the non-expert/student.
An overview can be found at
www.ComputersThink.com
Please let me know if you would be interested in reviewing a late draft. Any feedback would be most welcome. Anthony@berglas.org