I’d probably allow something like the synthetic data generation used for AlphaGeometry (Fig. 3), except carried out in base ZFC and with very little human math given away inside the deduction engine.
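Concretely, the loop I have in mind is something like the following sketch, where a toy propositional rule set stands in for AlphaGeometry’s geometric deduction engine (everything here, the rules, atoms, and function names, is illustrative rather than anything from the actual paper):

```python
import random

# Hypothetical toy rule set (antecedents, consequent) standing in for a real
# deduction engine; in the real thing these would be ZFC inference steps,
# not propositional Horn clauses.
RULES = [
    ({"p", "q"}, "r"),
    ({"r"}, "s"),
    ({"q", "s"}, "t"),
]
ATOMS = ["p", "q", "r", "s", "t"]

def deduce(premises):
    """Forward-chain to closure, recording how each new fact was derived."""
    known = set(premises)
    provenance = {}  # fact -> set of antecedent facts used to derive it
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in RULES:
            if consequent not in known and antecedents <= known:
                known.add(consequent)
                provenance[consequent] = antecedents
                changed = True
    return known, provenance

def traceback(fact, provenance, premises):
    """Extract the premises and proof steps actually supporting `fact`."""
    if fact in premises:
        return {fact}, []
    used, steps = set(), []
    for parent in provenance[fact]:
        parent_used, parent_steps = traceback(parent, provenance, premises)
        used |= parent_used
        steps += parent_steps
    return used, steps + [(provenance[fact], fact)]

def sample_examples(n_rounds):
    """Sample random premises, deduce, and emit (premises, fact, proof) data."""
    examples = []
    for _ in range(n_rounds):
        premises = set(random.sample(ATOMS, 2))
        known, provenance = deduce(premises)
        for fact in known - premises:
            used, proof = traceback(fact, provenance, premises)
            examples.append((frozenset(used), fact, proof))
    return examples

for premises, fact, proof in sample_examples(5):
    print(sorted(premises), "⊢", fact, "via", proof)
```

The point being: the (premises, statement, proof) triples come from blind sampling plus mechanical closure and traceback, so little human mathematical content leaks in beyond the rule set itself.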
IIUC yeah, that definitely seems fair; I’d probably also allow various other substantial “quasi-mathematical meta-ideas” to seep in, e.g. other tricks for self-generating a curriculum of training data.
But I wouldn’t be surprised if >20% of the people on LW who think A[G/S]I happens in ~2–3 years thought that my thing could totally happen in 2025 if the labs were aiming for it (though they might not expect the labs to aim for it), with your things plausibly happening later.
Mhm, that seems quite plausible, yeah, and that does make me want to use your thing as a go-to example.
Whether such a system would prove Cantor’s theorem (stated in base ZFC)? (Imo, this would still be pretty crazy to see.)
This one I feel a lot less confident of, though I could plausibly get more confident if I thought about the proof in more detail.
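For concreteness, here’s roughly what the target looks like when formalized: a minimal Lean 4 sketch of the diagonal argument, assuming Mathlib’s `Set` API (this is one rendering, not a literal base-ZFC encoding of the kind we’re discussing):

```lean
import Mathlib.Data.Set.Basic

-- Cantor's theorem: no function from α to its powerset is surjective.
theorem cantor {α : Type} (f : α → Set α) : ¬ Function.Surjective f := by
  intro hf
  -- Diagonal set: the elements not contained in their own image.
  obtain ⟨a, ha⟩ := hf {x | x ∉ f x}
  by_cases h : a ∈ f a
  · -- If a ∈ f a, then by ha, a lies in the diagonal set, i.e. a ∉ f a.
    have h' : a ∉ f a := by rw [ha] at h; exact h
    exact h' h
  · -- If a ∉ f a, then a is in the diagonal set, which ha says equals f a.
    have h' : a ∈ f a := by rw [ha]; exact h
    exact h h'
```

If I recall correctly, Mathlib already has this as `Function.cantor_surjective`; the proof is tiny once the statement lives in a rich library, which is part of why “prove it in base ZFC with very little given away” is a genuinely different target.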
Part of the spirit here, for me, is something like: yes, AIs will do very impressive things on “highly algebraic” problems / parts of problems. (See “Algebraicness”.) One of the harder things for AIs is, poetically speaking, “self-constructing their life-world”, or in other words “coming up with lots of concepts to understand the material they’re dealing with, then transitioning so that the material they’re dealing with is those new concepts, and so on”. For any given math problem, I could be mistaken about how algebraic it is (or how much of its difficulty for humans is due to the algebraic parts), and about how much conceptual progress you have to make before the remaining work is just algebraic. I assume that human math is a big mix of algebraic and non-algebraic stuff. So I’d be really surprised if an AlphaMath could reinvent most of the definitions that we use, but I’m a lot less sure about a smaller subset, because I’m less sure whether that subset has a surprisingly small non-algebraic part. (I think that someone with a lot more feel for math in general, and formal proofs in particular, could plausibly call this stuff in advance significantly better than my pretty weak heuristic of “it’s hard to do all of a wide variety of problems”.)