The biggest problem here is that you start from the assumption that current neural net systems will eventually be made into AI systems while still having all the failings and limitations they have now. You then extrapolate massively from that assumption.
But there is absolutely no reason to believe that the evolutionary changes to NNs that are required to make them fully intelligent (AGI) will leave them with all the same characteristics they have now. There will be SO MANY changes that virtually nothing about the current systems will be true of those future systems.
Which renders your entire extrapolation moot.
OK, but does anything survive? How about the idea that
some systems will be opaque to human programmers...
...they will also be opaque to themselves...
...which will stymie recursive self-improvement.
Well, here is my thinking.
Neural net systems have one major advantage: they use massive weak-constraint relaxation (aka the wisdom of crowds) to do the spectacular things they do.
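To make "massive weak-constraint relaxation" concrete, here is a minimal toy sketch in the Hopfield/PDP style (my own illustration, not anyone's actual system): every weight encodes a soft constraint between two units, no single constraint is decisive, and the state is nudged, one unit at a time, toward the configuration that best satisfies all of them at once.

```python
# Toy weak-constraint relaxation: a Hopfield-style network where each
# symmetric weight is a soft constraint, and asynchronous unit flips
# drive the state toward lower "energy" (fewer weighted violations).
# All sizes and numbers here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

n = 16                                   # number of units
W = rng.normal(size=(n, n))              # random soft pairwise constraints
W = (W + W.T) / 2                        # constraints are symmetric
np.fill_diagonal(W, 0)                   # no self-constraints

state = rng.choice([-1.0, 1.0], size=n)  # random initial hypothesis

def energy(s):
    # Lower energy = the soft constraints are better satisfied overall.
    return -0.5 * s @ W @ s

for _ in range(500):
    i = rng.integers(n)                  # pick one unit at random
    candidate = state.copy()
    candidate[i] *= -1                   # tentatively flip it
    if energy(candidate) < energy(state):
        state = candidate                # keep the flip if it helps

print("settled state:", state)
print("final energy:", energy(state))
```

The "wisdom of crowds" point is visible in the code: no single weight determines the outcome; the settled state is a compromise among all the constraints simultaneously.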
But they have a cluster of disadvantages, all related to their inability to do symbolic, structured cognition. These have been known for a long time: Donald Norman, for example, wrote down a list of such issues in his chapter at the end of the two PDP volumes (McClelland and Rumelhart, 1987).
But here's the thing: most of the suggested ways to solve this problem (including the one I use) involve keeping the massive weak-constraint relaxation, throwing away all the irrelevant assumptions, and introducing new features to get the structured, symbolic stuff. And that revision process generally leaves you with hybrid systems in which all the important stuff is NO LONGER particularly opaque. The weak-constraint aspects can be handled without forcing (too much) opaqueness into the system.
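To illustrate the hybrid shape I mean, here is a hypothetical sketch (the names and the rule are invented for illustration; this is not a description of any particular system): the opaque, subsymbolic part is demoted to supplying soft scores, while the structured part is ordinary, inspectable symbolic machinery, so the decision trace stays legible.

```python
# Toy hybrid: an opaque scorer proposes candidates with soft preferences,
# and an explicit, human-readable symbolic rule filters and ranks them.
# Everything needed to explain the final choice lives in this layer.
from dataclasses import dataclass

@dataclass
class Candidate:
    symbol: str
    score: float   # soft preference from the opaque subsymbolic part

def satisfies_rule(c: Candidate) -> bool:
    # A hard, inspectable constraint: the "structured symbolic stuff".
    return c.symbol.isupper()

candidates = [Candidate("A", 0.9), Candidate("b", 0.95), Candidate("C", 0.4)]

legal = [c for c in candidates if satisfies_rule(c)]   # rule filters first
chosen = max(legal, key=lambda c: c.score)             # soft scores rank

print("chosen:", chosen)   # "b" loses despite the highest score
```

The point of the pattern is that the reasons for the system's choice can be read off the symbolic layer, even though the scores feeding it came from an opaque process.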
Are there ways to develop neural nets that keep them totally opaque while solving all the issues that stand between the current state of the art and AGI? Probably. Well, certainly there is one: whole brain emulation gives you opaqueness by the bucketload. But I think those approaches are the exception rather than the rule.
So the short answer to your question is: the opaqueness, at least, will not survive.
Where can I read about this?
Well, the critical point is whether NNs are currently on a track to AGI. If they are not, then one cannot extrapolate anything. Compare: steam engine technology was never going to turn into AGI either, so how would it look if someone wrote about the characteristics of steam engines and tried to predict the future of AGI from those characteristics?
My own research (which started with NNs, but tried to find ways to make them useful for AGI) is already well beyond the point where the statements you make about NNs have any relevance. Never mind what will be happening in 5, 10, or 20 years.
It looks like you are on track to a hard takeoff, but from other domains I know that people tend to overestimate their achievements by 10-100x, so I have to be a bit sceptical. NNs are much closer to AGI than steam engines, anyway.
I agree that NNs will eventually evolve into something else, and that this will end the NN age, which in my opinion may last 10-20 years but could be as short as 5. After the NN age ends, most of these assumptions should be revisited; but for now, it looks like we are living in such an age.