The human algorithm is a very general problem-solving algorithm. If we can solve problems with an algorithm that looks very similar to the human algorithm, at a similar level of competence, then that is evidence about the generality of our algorithm: we should expect algorithms that closely resemble the human algorithm to be similarly general.
You’ll have to unpack what you mean by “algorithmic similarity”; if I presented you with a universal learning algorithm that looked very different from the human universal learning algorithm, would you say the two algorithms were similar?
What makes two algorithms similar?
I think algorithmic equivalence is unnecessary. We do not want algorithmic equivalence; what we actually want is functional equivalence. If the human universal learning algorithm were random, and implementing it did not produce a human engine of cognition, then you would not want to implement it. Algorithmic equivalence is a subordinate desire to functional equivalence; it is desired only insofar as it produces functional equivalence.
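To make the distinction concrete, here is a minimal sketch (in Python, chosen only for illustration) of two sorting routines that are functionally equivalent, in that they produce identical outputs for identical inputs, while being algorithmically very different:

```python
# Two sorting routines: functionally equivalent (same input -> same output),
# yet algorithmically very different (comparison-based vs. distribution-based).

def insertion_sort(xs):
    """O(n^2) comparison sort: shift each element leftward into place."""
    out = list(xs)
    for i in range(1, len(out)):
        j = i
        while j > 0 and out[j - 1] > out[j]:
            out[j - 1], out[j] = out[j], out[j - 1]
            j -= 1
    return out

def counting_sort(xs):
    """O(n + k) distribution sort over integers: never compares two elements."""
    if not xs:
        return []
    lo, hi = min(xs), max(xs)
    counts = [0] * (hi - lo + 1)
    for x in xs:
        counts[x - lo] += 1
    return [v + lo for v, c in enumerate(counts) for _ in range(c)]

data = [5, 3, 8, 3, 1]
assert insertion_sort(data) == counting_sort(data) == [1, 3, 3, 5, 8]
```

Judging either routine purely by its input-output behaviour, you cannot tell them apart, which is exactly the sense in which functional equivalence, rather than algorithmic equivalence, is the criterion that matters here.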
The true desire is functional equivalence (which is itself subordinate to Human Level Machine Intelligence).
If we have functional equivalence, there is no need to criticise a lack of algorithmic equivalence; and if we do not have functional equivalence, algorithmic equivalence is still not what we actually desire, since it is merely subordinate to functional equivalence. Either way, algorithmic equivalence is not the right target of criticism.
If modern ML were pursuing algorithmic equivalence as a route to functional equivalence, then, and only then, would it make sense to criticise a lack of algorithmic equivalence. However, it has not been my impression that this is the path AI is taking.
Machine natural language processing is not algorithmically equivalent to human natural language processing.
Computer vision is not algorithmically equivalent to human vision.
Computer speech recognition is not algorithmically equivalent to human speech recognition.
AlphaGo is not algorithmically equivalent to how humans play Go.
I cannot recall a single significant, concrete machine learning achievement in a specific problem domain that was made by attaining algorithmic equivalence with the human engine of cognition. If you can name three examples, I will update in favour of algorithmic equivalence.
Neural networks are human-inspired, but the algorithms those networks actually run are not algorithmically equivalent to the algorithms we run.