But there’s something else, which is a very finite legible learning algorithm that can automatically find all those things
Is there? I see a lot of talk about brain algorithms here, but I have never seen one stated... made “legible”.
—the object-level stuff and the thinking strategies at all levels. The genome builds such an algorithm into the human brain
Does it? Rationalists like to applaud such claims, but I have never seen the proof.
And it seems to work!
Does it? Even if we could answer every question we have ever posed, we could still have fundamental limitations. If you did have a fundamental cognitive deficit that prevents you from understanding some specific X, how would you know? You need to be able to conceive of X before conceiving that you don’t understand X. It would be like the visual blind spot... which you cannot see!
And then I’m guessing your response would be something like: there isn’t just one optimal “legible learning algorithm” as distinct from the stuff that it’s supposed to be learning. And if so, sure
So why bring it up?
there isn’t just one optimal “legible learning algorithm”
Optimality, in the sense of doing things efficiently, isn’t the issue; the issue is not being able to do certain things at all.
I think this is very related to the idea in Bayesian rationality that priors don’t really matter once you make enough observations.
The idea is wrong. Hypotheses matter, because if you haven’t formulated the right hypothesis, no amount of data will confirm it. Worrying only about the weighting of priors is playing on easy mode, because it assumes the hypothesis space is already covered. Fundamental cognitive limitations could manifest as the inability to form certain hypotheses. How many hypotheses can a chimp form? You could show a chimp all the evidence in the world, and it’s not going to hypothesize general relativity.
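To make that concrete, here is a minimal sketch (my own illustration, not anything from the thread, with arbitrary numbers): a Bayesian learner whose hypothesis space simply doesn’t contain the truth. It updates forever and ends up maximally confident in the least-wrong hypothesis it was able to state.

```python
# Minimal sketch: Bayesian updating over a hypothesis space that excludes the truth.
# Data come from a coin with P(heads) = 0.7, but the learner can only formulate
# the hypotheses P(heads) = 0.1 and P(heads) = 0.5. No amount of data makes it
# adopt the true value; the posterior just piles onto the least-wrong option.

import random

random.seed(0)

hypotheses = {"p=0.1": 0.1, "p=0.5": 0.5}    # the learner's whole hypothesis space
posterior = {name: 0.5 for name in hypotheses}  # uniform prior over that space

true_p = 0.7                                  # the truth, absent from the space

for _ in range(10_000):
    heads = random.random() < true_p
    # Bayes: multiply each weight by the likelihood of the observation, renormalise
    for name, p in hypotheses.items():
        posterior[name] *= p if heads else (1 - p)
    total = sum(posterior.values())
    posterior = {name: w / total for name, w in posterior.items()}

print(posterior)   # ~{'p=0.1': 0.0, 'p=0.5': 1.0} — fully confident, still wrong
```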
Rationalists always want to reply that Solomonoff inductors avoid the problem on the basis that SIs consider “every” “hypothesis”… but they don’t, in several ways. It’s not just that they are uncomputable; it’s also that it’s not known that every hypothesis can be expressed as a programme. The ability to range over a complete space does not equate to the ability to range over Everything.
Here’s an example: If you’ve seen a pattern “A then B then C” recur 10 times in a row, you will start unconsciously expecting AB to be followed by C. But “should” you expect AB to be followed by C after seeing ABC only 2 times? Or what if you’ve seen the pattern ABC recur 72 times in a row, but then saw AB(not C) twice? What “should” a learning algorithm expect in those cases? You can imagine a continuous family of learning algorithms, that operate on the same underlying principles.
A set of underlying principles is a limitation. SIs are limited to computability and to predicting a sequence of observations. You’re writing as if something like predicting the next observation is the only problem of interest, but we don’t know that Everything fits that pattern. The fact that Bayes and Solomonoff work that way is of no help, as shown above.
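For concreteness, here is a minimal sketch (my own, with an arbitrary pseudocount standing in for the “dial”) of what the quoted continuous family of learning algorithms looks like. Every member of it is doing the same thing: weighing counts to predict the next symbol. Turning the dial changes the weighting, not the kind of question the algorithm can pose, which is exactly the sense in which a fixed set of underlying principles is a limitation.

```python
# Minimal sketch of a family of count-based predictors differing only in a
# pseudocount alpha. Each answers the same question: given the counts so far,
# how strongly should the context AB be expected to be followed by C?

def p_c_after_ab(times_abc: int, times_ab_not_c: int, alpha: float) -> float:
    """P(next symbol is C | context AB), smoothed by pseudocount alpha."""
    return (times_abc + alpha) / (times_abc + times_ab_not_c + 2 * alpha)

for alpha in (0.1, 1.0, 10.0):
    print(
        f"alpha={alpha:5}:",
        f"after 2 ABCs -> {p_c_after_ab(2, 0, alpha):.2f},",
        f"after 72 ABCs then 2 AB(not C) -> {p_c_after_ab(72, 2, alpha):.2f}",
    )
```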
But within this range, I acknowledge that it’s true that some of them will be able to learn different object-level areas of math a bit faster or slower, in a complicated way, for example.
But you haven’t shown that efficiency differences are the only problem. The nonexistence of fundamental no-go areas certainly doesn’t follow from the existence of efficiency differences.
, it can still figure things out with superhuman speed and competence across the board
The definition of superintelligence means that “across the board” is the range of things humans do, so if there is something humans can’t do at all, an ASI is not definitionally required to be able to do it.
By the same token, nobody ever found the truly optimal hyperparameters for AlphaZero, if those even exist, but AlphaZero was still radically superhuman
The existence of superhuman performance in some areas doesn’t prove adequate performance in all areas, so it is basically irrelevant to the original question, which is whether humans have fundamental limitations.
@Mateusz Bagiński

OP discusses maths from a realist perspective. If you approach it as a human construction, the problem about maths is considerably weakened... but the wider problem remains, because we don’t know that maths is Everything.
this is conflating the reason for why one knows/believes P versus the reason for why P,
Of course, that only makes sense assuming realism.
@Kaarel

You are understating your own case, because there is a difference between mere infinity and All Kinds of Everything. An infinite collection of one kind of thing can be relatively tractable.