I think thought is irrelevant.
Suppose we have two systems A and B.
For all inputs, A and B produce the same corresponding outputs.
A and B have the same domain and the same image.
Then it doesn’t matter if the algorithms of A and the algorithms of B are different; as far as I’m concerned, A and B are functionally equivalent.
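A minimal sketch in Python of what I mean (the function names are mine, for illustration): two implementations that use different algorithms but have identical input/output behaviour over a shared domain.

```python
def sum_to_n_loop(n: int) -> int:
    """Sum 1..n by iterating: O(n) additions."""
    total = 0
    for k in range(1, n + 1):
        total += k
    return total

def sum_to_n_closed(n: int) -> int:
    """Sum 1..n with the closed-form n(n+1)/2: O(1) arithmetic."""
    return n * (n + 1) // 2

# Same domain, same outputs for every input checked, so by the
# definition above the two systems are functionally equivalent,
# even though their algorithms are clearly different.
assert all(sum_to_n_loop(n) == sum_to_n_closed(n) for n in range(1000))
```

Any test that only inspects outputs cannot distinguish the two.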
@SarahConstantin I think the intricacies of human thought are irrelevant.
I don’t think we need machines to approximate human thought.
If you want machines to reach human level intelligence, then you need them to approximate human behaviour.
I don’t think the fact that machines don’t think the way we do is at all relevant to human level machine intelligence.
It seems like a non-issue.
I think the difference in internal algorithmic process is largely irrelevant.
Having matching algorithmic processes is one way to get two systems to produce the same output for all inputs, but it’s not the only way.
For any function f, it is possible to construct an infinite number of functions f’_i that are functionally equivalent to f, but possess different algorithmic processes.
f’_0(x) = f(x) + 0.
f’_i(x) = f’_{i-1}(x) + i - i. (i > 0, i in N).
The cardinality of the set of such functions is the same as the cardinality of the set of natural numbers.
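The f’_i construction above can be written out directly (a sketch; the base function f is an arbitrary stand-in):

```python
def f(x):
    return 3 * x + 2  # an arbitrary base function, standing in for any f

def make_variant(i, prev):
    # f'_i(x) = f'_{i-1}(x) + i - i: a genuinely different computation
    # (it performs an extra addition and subtraction) with identical outputs.
    return lambda x: prev(x) + i - i

variants = [lambda x: f(x) + 0]  # f'_0
for i in range(1, 100):
    variants.append(make_variant(i, variants[-1]))

# Every variant is functionally equivalent to f, and the construction
# extends to any i in N: one distinct algorithm per natural number.
assert all(g(5) == f(5) for g in variants)
```

Each f’_i does strictly more work than the last, yet none of that work is visible from the outside.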
What we really want is functional equivalence.
We do not need algorithmic equivalence (where algorithmic equivalence is when the algorithms match exactly).
We care about the output, and not the processing.
Algorithmic equivalence is strictly stronger than functional equivalence: the algorithmically equivalent pairs of systems form a proper subset of the functionally equivalent pairs.
If your only criticism of B is that B is not algorithmically equivalent to A even though it is functionally equivalent to A, then I would say that you’re missing the point.
I think criticising algorithmic equivalence is missing the point. If B is functionally equivalent to A, then you don’t need to criticise algorithmic equivalence.
If B is not functionally equivalent to A, then you still don’t need to criticise algorithmic equivalence; you can directly criticise the lack of functional equivalence.
In short, there’s no need to criticise algorithmic equivalence?
Unless algorithmic equivalence is pursued as the path to functional equivalence.
I was not under the impression that algorithmic equivalence was what is necessarily being pursued.
(Admittedly, neural networks are based on pursuing algorithmically similar architectures.)
Computer vision is not algorithmically equivalent to human vision.
I think the same is true for speech recognition as well.
I don’t think you can prove that algorithmic equivalence is necessary.
I think that’s a fucking insane thing to prove.
I mean, name three significant ML achievements that are algorithmically equivalent to their human counterparts.
I do think algorithmic similarity matters:
The human algorithm is a very general problem-solving algorithm. If we can solve problems with an algorithm that looks very similar to the human algorithm, at a similar level of competence to the human algorithm, then this is evidence about the generalizability of our algorithm, since we should expect algorithms that are very similar to the human algorithm to also have similar generality.
You’ll have to decompose what you mean by “algorithmic similarity”; if I presented you a universal learning algorithm that looked very different from the human universal learning algorithm, would you say that the two algorithms were similar?
What makes two algorithms similar?
I think algorithmic equivalence is unnecessary. We do not want algorithmic equivalence; what we actually want is functional equivalence. If the human universal learning algorithm were random, and implementing it did not produce a human engine of cognition, then you would not want to implement it. Algorithmic equivalence is a subordinate desire to functional equivalence; it is desired only insofar as it produces functional equivalence.
The true desire is functional equivalence (which is itself subordinate to Human Level Machine Intelligence).
If we have functional equivalence, we do not need to criticise algorithmic equivalence; and if we do not have functional equivalence, we still do not need to criticise algorithmic equivalence, because algorithmic equivalence is not what we actually desire. It is a subordinate desire to functional equivalence.
If modern ML were pursuing algorithmic equivalence as a route to functional equivalence, then, and only then, would it make sense to criticise a lack of algorithmic equivalence. However, it has not been my impression that this is the path AI is taking.
Machine natural language processing is not algorithmically equivalent to human natural language processing.
Computer vision is not algorithmically equivalent to human vision.
Computer speech recognition is not algorithmically equivalent to human speech recognition.
AlphaGo is not algorithmically equivalent to human Go.
I cannot recall a single significant, concrete machine learning achievement in a specific problem domain made by gaining algorithmic equivalence with the human engine of cognition. If you can name three examples, I would update in favour of algorithmic equivalence.
Neural networks are human inspired, but the actual algorithms those neural networks run are not algorithmically equivalent to the algorithms we run.