Yeah, it’s less that I’d bet it works now; it’s more that whenever it DOES start working, it seems likely it’d be through this mechanism.
⚖ If Thane Ruthenis thinks there are AI tools that can meaningfully help with Math by this point, did they first have a noticeable period (> 1 month) where it was easier to get work out of them via working in lean-or-similar? (Raymond Arnold: 25% & 60%)
(I had a bit of an epistemic rollercoaster making this prediction. I updated toward “by the time someone makes an actually worthwhile Math AI, even if Lean was an important part of its training process, it’s probably not that hard to do additional fine-tuning that gets it to output stuff in a more standard mathy format.” But, then, it seemed like it was still going to be important to quickly check the output wasn’t blatantly broken as part of the process.)
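(To make the “lean-or-similar” bit concrete: the appeal of Lean-style output is that it’s machine-checkable, so “quickly check it isn’t blatantly broken” means running the proof checker rather than auditing prose. A minimal, purely illustrative sketch in vanilla Lean 4, not anything from the prediction itself:)

```lean
-- Toy statement whose correctness the Lean kernel verifies mechanically.
-- `Nat.add_comm` is a core-library lemma; if an AI emitted its work in this
-- form, a blatantly broken step would simply fail to type-check.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```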