There seem to be two problems, or components, of the singularity program that are interchanged or conflated. First, there is the goal of producing a GAI on roughly the order of human intelligence (e.g., something like the character Data from Star Trek). Second, there is the goal, or belief, that such a GAI will be strongly self-improving, to the extent that it reaches super-human intelligence.
It is unclear to me that achieving the first goal implies the second is also achievable, or even of a similar difficulty. For example, I am inclined to think that we humans constitute a sort of natural GAI, and yet, even if we fully understood the brain, it would not necessarily be clear how to optimize ourselves to super-human intelligence levels. As a crude analogy: just because a mechanic completely understands how a car works, it does not follow that he can build another car that is fundamentally superior.
Succinctly: why should we expect a computerized GAI to have a higher-order self-improvement capacity than we humans do? (I trust you will not trivialize the issue by saying, for example, that better memory and better speed equal better intelligence.)