I think this lacks justification for why the entire approach is a good idea. Improving mathematical accuracy in LLMs seems like a net negative to me, for the same reason that generic capability improvements are a net negative.