Well, I have no knowledge of the Sylow theorems, but it seems likely that if any system can efficiently generate short, ingenious, idea-based proofs, it must have something analogous to a mathematician’s understanding.
Or at least have a mechanism for formulating and relating concepts, which, to me (admittedly a layman), sounds like the main challenge for AGI.
I suppose the key scenario I can imagine right now, where an AI can conduct arbitrary human-level mathematical reasoning but cannot be called a fully general intelligence, is one where there is some major persistent difficulty in transferring the ability to reason about the purely conceptual world of mathematics to the physically real world one is embedded within. In particular (inspired by Eliezer’s comments on AIXI), I could imagine difficulty with the problem of locating oneself within that world and distinguishing oneself from one’s surroundings.
“If people do not believe that mathematics is simple, it is only because they do not realize how complicated life is.”—John von Neumann.
See also the quotation in this comment.
Ha, I’ve seen that quote before, good point! (and likewise in the linked comment) I suppose one reason to think that mathematical reasoning is close to AGI is that it seems similar to programming. And if an AI can program, that seems significant.
Maybe a case could be made that the key difficulty in programming will turn out to be in formulating what program to write. I’m not sure what the analogue is in mathematics. Generally it’s pretty easy to formally state a theorem to prove, even if you have no idea how to prove it, right?
If so, that might lend support for the argument that automated general mathematical reasoning is still a ways off from AGI.
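The "easy to state, hard to prove" point can be made concrete with a proof assistant. For instance, in Lean (sketched here assuming Mathlib-style notation for ℕ), one can state something like Fermat's Last Theorem in a few lines and leave the proof entirely open with `sorry`:

```lean
-- The statement fits in two lines; the proof took mathematicians centuries.
-- `sorry` is a placeholder acknowledging that no proof is supplied.
theorem fermat_last (n : ℕ) (hn : 2 < n) :
    ¬ ∃ a b c : ℕ, 0 < a ∧ 0 < b ∧ 0 < c ∧ a ^ n + b ^ n = c ^ n := by
  sorry
```

The gap between typing the statement and discharging the `sorry` is exactly the gap between formulating a problem and solving it.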
The mathematical counterpart may be of the “recognizing important concepts and asking good questions” variety. A friend of mine has an idea of how to formalize the notion of an “important concept” in a mathematical field, and its possible relevance to AI, but at the moment it’s all very vague speculation :-).
It’s pretty easy for a human of significantly above average intelligence. That doesn’t imply easy for an average human or an AI.