One obvious source if you haven’t already read it is Nick Bostrom’s Superintelligence. Bostrom addresses many of the issues that you list, e.g. an AI rewriting its own software, why an AI is likely to be software (and Bostrom discusses one or two non-software scenarios as well), etc. This book is quite informative and well worth reading, IMO.
Some of your questions are more fundamental than what is covered in Superintelligence. Specifically, to understand why “alphabetical letters invented thousands of years ago to express human sounds” are adequate for any computing task, including AI, you should explore the field of theoretical computer science, specifically automata and language theory. A classic book in that field is Hopcroft and Ullman’s Introduction to Automata Theory, Languages, and Computation (caution: don’t be fooled by the “cute” cover illustration; this book is tough to get through and assumes that the reader has a strong mathematics background). Also, you should consider reading books on the philosophy of mind – but I have not read enough in this area to make specific recommendations.
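To make the automata-theory point concrete: computation needs nothing more than a finite alphabet of symbols and rules for manipulating them. Here is a minimal sketch (my own illustration, not from any of the books above) of a deterministic finite automaton over the two-symbol alphabet {'0', '1'} that accepts exactly the binary strings whose value is divisible by 3 – the machine's entire "knowledge" is a symbol-indexed transition table:

```python
# DFA over the alphabet {'0', '1'} accepting binary strings divisible by 3.
# The state tracks (value read so far) mod 3; reading a bit b maps
# state s to (2*s + b) mod 3.
TRANSITIONS = {
    (0, '0'): 0, (0, '1'): 1,
    (1, '0'): 2, (1, '1'): 0,
    (2, '0'): 1, (2, '1'): 2,
}

def accepts(s):
    state = 0  # start state; also the sole accepting state
    for symbol in s:
        state = TRANSITIONS[(state, symbol)]
    return state == 0

print(accepts('110'))  # 6 is divisible by 3 -> True
print(accepts('101'))  # 5 is not -> False
```

Automata theory builds from machines like this one up to Turing machines, which is why arbitrary symbols – letters included – suffice for any computing task.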
To explore the question of “why do we think software will have a source code as we know it?” you will need to understand the role of software, machine language, the relationship between source code and machine language, and the role of compilers and interpreters. All of this is covered in a typical computer science curriculum. If you have a software background but have not studied these topics formally, a classic book on compilers and the translation of source code into machine code is Aho, Sethi and Ullman’s Compilers: Principles, Techniques, and Tools (the dragon book). The dragon book is quite good but has been around for quite a while; a CS professor or recent graduate may be able to recommend something newer.
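You can watch this source-to-lower-level translation happen directly. As a quick illustration (not from the dragon book), Python’s standard-library dis module shows the bytecode instructions that the interpreter actually executes after translating the human-readable source text:

```python
import dis

def add_one(x):
    return x + 1

# The source text "return x + 1" is translated into lower-level
# instructions (load x, load 1, add, return) before it ever runs.
dis.dis(add_one)
```

A compiler for a language like C performs the same kind of translation, but all the way down to the machine language of a physical CPU rather than to interpreter bytecode.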
An additional step would be to explore the current state-of-the-art of AI techniques – e.g. neural nets, Bayesian inference, etc. There are quite a few people on LW who can probably give you good recommendations in this area.
But current neural nets don’t have source code as we know it: after training, the intelligence is encoded implicitly in the weights, while the source code itself only specifies a network that, untrained, does nothing useful.
It is true that much of the intelligence in a neural network is stored implicitly in the weights. The same (or similar) can be said about many other machine-learning techniques. However, I don’t think that anything I said above indicated otherwise.
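A tiny sketch (my own, and deliberately simplified) makes the point about weights concrete: the code below defines a completely generic two-layer network, and whether it computes anything useful depends entirely on the numbers in the weight matrices. With the hand-picked weights shown, it computes XOR; with random weights, the identical source code computes nonsense:

```python
import numpy as np

def forward(x, W1, b1, W2, b2):
    # Generic two-layer network: this code is the same for ANY weights.
    h = np.maximum(0, x @ W1 + b1)   # ReLU hidden layer
    return (h @ W2 + b2) > 0         # thresholded output

# Hand-picked weights that happen to make the network compute XOR:
# h1 = relu(x1 + x2), h2 = relu(x1 + x2 - 1), output = h1 - 2*h2 - 0.5 > 0
W1 = np.array([[1.0, 1.0], [1.0, 1.0]])
b1 = np.array([0.0, -1.0])
W2 = np.array([[1.0], [-2.0]])
b2 = np.array([-0.5])

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    out = forward(np.array(x, dtype=float), W1, b1, W2, b2)
    print(x, bool(out[0]))  # [0,0] False, [0,1] True, [1,0] True, [1,1] False
```

The "source code as we know it" here is the forward function, a few lines of arithmetic; everything that makes the network behave like XOR lives in W1, b1, W2, b2 – which is the sense in which the intelligence of a trained net is stored in its weights rather than its code.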