“Humans will be able to keep up with AGI by using AGI’s advancements themselves.”
Response: Human engineers cannot take a powerful algorithm from AI and implement it in their own neurobiology. Moreover, once an AGI is improving its own intelligence, it’s not clear that it would share the ‘secrets’ of these improvements with humans.
Why not? I can think of a couple possible explanations, but none that hold up to scrutiny. I’m probably missing something.
Humans can’t alter their neural structure.
This strikes me as unlikely. It is possible to create a circuit diagram of a neural structure, and likewise possible to electrically stimulate a neuron. By extension, it should be possible to replace a neuron with a circuit that behaves identically to that neuron, and such a circuit could certainly be altered. This may not be the most straightforward way of doing things, but it does mean that altering neural structure is possible in principle.
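For what it's worth, treating a neuron as a circuit is already standard practice in computational neuroscience. Below is a minimal sketch in Python of the leaky integrate-and-fire model, which describes a neuron as an RC circuit with a firing threshold (the parameter values are illustrative, not fitted to any real cell). The point is just that every "structural" property of this model neuron, its time constant, threshold, and so on, is an editable parameter, which is exactly what the biological original doesn't expose.

```python
# Minimal leaky integrate-and-fire neuron: an RC circuit with a threshold.
# Parameter values are illustrative only (roughly mV / ms / MOhm / nA scale).
def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_threshold=-50.0, v_reset=-65.0, resistance=10.0):
    """Return the membrane voltage trace and spike times for an input current."""
    v = v_rest
    voltages, spike_times = [], []
    for step, i_in in enumerate(input_current):
        # RC dynamics: voltage leaks back toward rest, driven by the input.
        v += (-(v - v_rest) + resistance * i_in) * (dt / tau)
        if v >= v_threshold:          # threshold crossed: fire and reset
            spike_times.append(step * dt)
            v = v_reset
        voltages.append(v)
    return voltages, spike_times

# Drive the "neuron" with a constant 2.0 nA current for 100 ms of simulated time.
_, spikes = simulate_lif([2.0] * 1000)
print(f"{len(spikes)} spikes in 100 ms")
```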
People would be unable to understand the developments the AI makes, and therefore could not implement them.
The first part of this is probably true. I can't understand how some programs I wrote a year ago work, but I can still see what they do. By the same token, I can't see why useful developments couldn't be implemented even if we didn't understand how they worked.
People are very good at using tools like dictionaries, computer programs, and even other people to produce more useful output than they could produce on their own. You can see this by looking at any intelligence test: how much better could someone do on a vocabulary test with access to a dictionary? Similarly, calculators trivialize arithmetic tests, and a computer with MATLAB would allow far superior performance on current math/logic tests. This is because the human-tool system can outperform the human alone. Two people working together can likewise outperform one person. So tools of intelligence ≤ human can be implemented as extensions of the human mind, and I don't see any reason this rule would fail for a tool of higher intelligence than its user.
This isn’t directly related to engineering, but consider the narrow domain of medicine. You have human doctors, who go to medical school, see patients one at a time, and so on.
Then you have something like Doctor Watson, one of IBM's goals for the technology they showcased in the Jeopardy! match. By processing human speech and test data, it could diagnose diseases on timescales comparable to a human doctor's, but with the benefit of seeing every patient in the country, or the world. With access to that much data, it could gain experience far more quickly and would know what to look for to catch rare cases. (Indeed, it would probably notice many connections and correlations that current doctors miss.)
The algorithms Watson uses wouldn't be useful to a human doctor; what doctors learn in medical school is more appropriate for them. And the advantages Watson has, the ability to accrue experience at a far faster rate and to interact with its memory on a far more sophisticated level, aren't really things humans can learn.
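As a toy illustration of that data-scale advantage (this is emphatically not Watson's actual algorithm; the sketch below is just frequency counting over simulated cases, with made-up disease names and frequencies), consider a diagnostic rule trained on different caseloads. At one doctor's caseload the rare disease has usually never been seen at all; at a country's caseload it is trivial to recognize:

```python
import random

# Hypothetical disease frequencies; "rare_syndrome" is one case in a thousand.
DISEASES = {"flu": 0.97, "measles": 0.029, "rare_syndrome": 0.001}

def draw_case():
    """One simulated (symptom, disease) pair; only the rare syndrome shows the rash."""
    disease = random.choices(list(DISEASES), weights=DISEASES.values())[0]
    return ("spotted_rash" if disease == "rare_syndrome" else "fever"), disease

def train(n_cases):
    """Tally which diseases produced which symptoms across n_cases patients."""
    counts = {}
    for _ in range(n_cases):
        symptom, disease = draw_case()
        counts.setdefault(symptom, {}).setdefault(disease, 0)
        counts[symptom][disease] += 1
    return counts

def diagnose(counts, symptom):
    """Name the disease most often seen with this symptom, if it was seen at all."""
    seen = counts.get(symptom)
    return max(seen, key=seen.get) if seen else "no idea"

# A small human-scale caseload vs. Watson-scale: the small model has
# usually never encountered the rash, the large one always has.
for caseload in (100, 1_000_000):
    print(caseload, "->", diagnose(train(caseload), "spotted_rash"))
```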
In creative fields, it seems likely that human/tool hybrids will outperform tools alone, and that’s the interesting case for intelligence explosion. (Algorithmic music generation seems to generally be paired with a human curator who chooses the more interesting bits to save.) Many fields are not creative fields, though.
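That division of labor is easy to sketch. Below, the machine half is a deliberately dumb melody generator (a random walk over a scale; any real generator would be far richer) and the human half is just a keep/discard prompt. The structure, cheap generation plus human curation, is the interesting part:

```python
import random

SCALE = ["C", "D", "E", "F", "G", "A", "B"]  # C major, one octave

def generate_melody(length=8):
    """The machine half: a random walk over the scale, moving at most two degrees."""
    idx = random.randrange(len(SCALE))
    melody = []
    for _ in range(length):
        melody.append(SCALE[idx])
        idx = max(0, min(len(SCALE) - 1, idx + random.randint(-2, 2)))
    return melody

def curate(candidates):
    """The human half: inspect each candidate and keep only the interesting ones."""
    kept = []
    for melody in candidates:
        if input(" ".join(melody) + "  keep? [y/n] ").strip().lower() == "y":
            kept.append(melody)
    return kept

saved = curate(generate_melody() for _ in range(5))
print(f"curator saved {len(saved)} of 5 melodies")
```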