Why do you think this, and on a related note why do you think AIs without X will stop functioning/hit a ceiling (in the sense of: what is the causal mechanism)?
Taking a wild guess I’d say…
Starting from my assumption that concept-free general intelligence is impossible, the implication is that there would be some minimal initial set of concepts that must be built in to any AGI.
This minimal set of concepts would imply some necessary cognitive biases/heuristics (because the very definition of a ‘concept’ implies a particular grouping or clustering of data, i.e., an initial ‘bias’), which in turn amounts to some necessary starting values (a ‘bias’ is, in a sense, a type of value judgement).
The same set of heuristics/biases (values) involved in taking actions in the world would also be involved in managing (reorganizing) the internal representational system of the AIs. If that reorganization is not performed in a self-consistent fashion, the AIs stop functioning. Remember: we are talking about a closed loop here: the heuristics/biases used to reorganize the representational system must themselves be fully represented in that system.
Therefore, the causal mechanism that stops the uAIs would be the eventual breakdown of their representational systems as the need for ever more new concepts arises: the inconsistent and/or incomplete initial heuristics/biases used to manage those representational systems eventually fail to maintain the closed loop.
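A toy sketch of the closed-loop failure mode (purely illustrative, not anyone's actual architecture): concepts are named sets of references to other concepts, the reorganization rule is itself one of the represented concepts, and new concepts keep forcing merges of old ones. If a merge rewrites every reference, the loop stays closed; if it doesn't, dangling references accumulate until even the reorganizer's own representation is corrupted.

```python
def reorganize(system, old, new, consistent):
    """Merge concept `old` into `new`. A consistent merge rewrites every
    reference to `old`; an inconsistent one leaves them dangling."""
    system[new] = system.pop(old)
    if consistent:
        for refs in system.values():
            if old in refs:
                refs.discard(old)
                refs.add(new)

def dangling(system):
    """References pointing at concepts the system no longer contains."""
    return {r for refs in system.values() for r in refs if r not in system}

def run(consistent):
    # Minimal initial concept set; the reorganization rule ("reorganizer")
    # is itself represented in the system -- the closed loop.
    system = {f"c{i}": {f"c{(i + 1) % 4}"} for i in range(4)}
    system["reorganizer"] = {"c0"}
    # The need for new concepts forces repeated reorganization.
    for i in range(4):
        reorganize(system, f"c{i}", f"n{i}", consistent)
    return dangling(system)

print(run(consistent=True))   # set() -- the loop stays closed
print(run(consistent=False))  # {'c0', 'c1', 'c2', 'c3'} -- dangling refs,
                              # including the reorganizer's own: breakdown
```

The point of the sketch is only the asymmetry: the same merge operation, applied with or without self-consistency, either preserves or destroys the system's ability to keep representing its own management rule.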
Advanced hard math for all this to follow….