OK, but does anything survive? How about the idea that
Some systems will be opaque to human programmers
...they will also be opaque to themselves
...which will stymie recursive self-improvement.
Well, here is my thinking.
Neural net systems have one major advantage: they use massive weak-constraint relaxation (aka the wisdom of crowds) to do the spectacular things they do.
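To make "weak-constraint relaxation" a little more concrete, here is a deliberately tiny, purely illustrative sketch in Python (a Hopfield-style settling process of the sort described in the PDP books, with made-up patterns; it is not meant to stand in for my system or anyone else's). Every pairwise weight is a weak constraint, and the answer emerges from all of them relaxing together rather than from any single decisive rule:

```python
# Toy illustration only: a Hopfield-style network in which every symmetric
# weight is a weak constraint ("these two units tend to agree/disagree").
# No single constraint decides anything; the state settles into whichever
# pattern satisfies the most constraints at once.
import numpy as np

rng = np.random.default_rng(0)

# Two stored patterns over 8 bipolar (+1/-1) units (made-up data).
patterns = np.array([
    [ 1,  1,  1,  1, -1, -1, -1, -1],
    [ 1, -1,  1, -1,  1, -1,  1, -1],
])

# Hebbian weights: accumulate the pairwise constraints implied by the patterns.
W = patterns.T @ patterns
np.fill_diagonal(W, 0)

# Start from a corrupted copy of the first pattern (two units flipped).
state = patterns[0].copy()
state[[1, 5]] *= -1

# Asynchronous relaxation: each unit adopts whichever sign best satisfies the
# weighted "votes" of all the other units -- the wisdom-of-crowds step.
for _ in range(5):
    for i in rng.permutation(len(state)):
        state[i] = 1 if W[i] @ state >= 0 else -1

print(state)        # settles back onto the first stored pattern
print(patterns[0])
```

The only point of the toy is that the computation is spread across many soft constraints, none of which is individually responsible for the outcome.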
But they have a cluster of disadvantages, all related to their inability to do symbolic, structured cognition. These have been known for a long time: Donald Norman, for example, wrote down a list of them in his chapter at the end of the two PDP volumes (McClelland and Rumelhart, 1987).
But here’s the thing: most of the suggested ways to solve this problem (including the one I use) involve keeping the massive weak-constraint relaxation, throwing away the irrelevant assumptions, and introducing new features to get the structured symbolic stuff. And that revision process generally leaves you with hybrid systems in which all the important stuff is NO LONGER particularly opaque. The weak-constraint aspects can be done without forcing (too much) opaqueness into the system.
Are there ways to develop neural nets that cause them to stay totally opaque, while solving all the issues that stand between the current state of the art and AGI? Probably. Well, certainly there is one: whole brain emulation gives you opaqueness by the bucketload. But I think approaches like that are the exception rather than the rule.
So the short answer to your question is: the opaqueness, at least, will not survive.
Where can I read about this?