Holding that beliefs are for true things means that you do not believe things because they are useful, believe things because they sound nice, or believe things because you prefer them to be true. You believe things that are true (or at least that you believe to be true, which is often the best we can get!).

This is maybe a subtle objection, but I disagree with the implicit rejection of utility in favor of truth being set up here. Truth is very attractive to us, and I think this attraction runs deep for reasons that don't much matter here; I'll just say I suspect it's because we're fundamentally prediction error minimizers (with some homeostatic feedback loops thrown in for survival and reproduction purposes). But if I had to justify why truth is important, I would say it's because it's useful. If truth were somehow not causally upstream of making accurate predictions about the world (or maybe that's just what truth means), I don't think I would care about it, because making accurate predictions about the world is extremely useful for getting everything else I care about done.

Yes, there is a danger that befalls some people when they prize utility too far above truth: it biases them in subtle and gross ways that lead them astray and actually work against them, making them less able to serve their own purposes when they're not looking. But there are similar dangers when people pursue truth at the expense of usefulness, mostly in the form of opportunity costs. I think we all at some point must learn to prize truth over motivated reasoning and preferences, for example, but I also think we must learn to prize the utility of truth over truth itself, lest we be enthralled by the Beast of Scrupulosity.

I’m somewhat sympathetic to this. You probably don’t need, prior to working on AI safety, to already be familiar with the wide variety of mathematics used in ML, by MIRI, etc. To be specific, I wouldn’t be much concerned if you didn’t know category theory, more than basic linear algebra, how to solve differential equations, how to integrate probability distributions, or even multivariate calculus prior to starting on AI safety work. But I would be concerned if you didn’t have deep experience with writing mathematical proofs beyond high school geometry (although I hear these days they teach geometry differently than I learned it, which was by re-deriving everything in Elements): say, the kind of experience you would get from studying graduate-level algebra, topology, measure theory, combinatorics, etc.

This might also be a bit of motivated reasoning on my part, echoing Dagon’s comments, since I never learned category theory in school and haven’t gone back to study it for lack of any specific need. But my experience has been that having solid foundations in mathematical reasoning and proof writing is what’s most valuable. The rest can, as you say, be learned lazily: your needs will become apparent, and you’ll have enough mathematical fluency to find and pursue whatever fields of mathematics you discover you need to know.