My actual claim is that learning becomes not just linearly or quadratically harder (e.g. when you have to spend, say, an extra hour on learning the same amount of new material compared to what you used to need), but exponentially harder (e.g. when you have to spend, say, twice as much time/effort as before on learning the same amount of new material).
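To make the two growth regimes concrete, here is a toy sketch. The 1-hour baseline and the doubling factor are illustrative assumptions of mine, not claims about real learning:

```python
# Toy comparison of the two growth models described above. The 1-hour
# baseline and the doubling factor are illustrative assumptions, not
# measurements of real learning.

def linear_cost(n, base=1.0, extra=1.0):
    """Hours for the n-th unit of material if each unit adds a fixed extra hour."""
    return base + extra * (n - 1)

def exponential_cost(n, base=1.0, factor=2.0):
    """Hours for the n-th unit if each unit costs `factor` times the previous one."""
    return base * factor ** (n - 1)

# By unit 10, the linear learner needs 10 hours for that unit;
# the exponential learner needs 512.
for n in (1, 5, 10):
    print(n, linear_cost(n), exponential_cost(n))
```

The point is that under the exponential model the per-unit cost itself compounds, so the gap is not a constant factor but one that keeps widening.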
This is an interesting claim, but I’m not sure it matches my own subjective experience. Though I also haven’t dug deeply into math, so maybe it’s more true there; it seems to me like this could vary by field, where some fields are more “broad” while others are “deep”.
And looking around, e.g. this page suggests that at least four different kinds of formulas are used for modeling learning speed, apparently depending on the domain. The first one is the “diminishing returns” curve, which sounds similar to your model:
But it also includes curves such as the s-curve (where initial progress is slow until a breakthrough lets you pick things up faster, until you reach a plateau) and the complex curve (with several plateaus and breakthroughs).
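These curve families can be sketched with simple closed forms. These are common textbook representatives I’m supplying for illustration, not necessarily the exact formulas the linked page uses: exponential saturation for diminishing returns, a logistic for the s-curve, and a sum of logistics for the multi-plateau complex curve:

```python
import math

def diminishing_returns(t, k=0.5):
    """Fast early gains that level off: proficiency = 1 - e^(-k*t)."""
    return 1 - math.exp(-k * t)

def s_curve(t, k=1.0, t0=5.0):
    """Slow start, breakthrough around t0, then plateau: a logistic curve."""
    return 1 / (1 + math.exp(-k * (t - t0)))

def complex_curve(t):
    """Several plateaus and breakthroughs: a sum of two logistics."""
    return 0.5 * s_curve(t, k=2.0, t0=3.0) + 0.5 * s_curve(t, k=2.0, t0=8.0)
```

Note that only the diminishing-returns curve implies learning gets monotonically harder from the start; the other shapes have stretches where it gets easier.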
Why do I consider this apparent phenomenon of inefficient learning important? One example I see somewhat often is someone saying “I believe in AI alignment research and I want to contribute directly, and while I am not that great at math, I can put in the effort and get good.” Sadly, that is not the case. Because learning is asymptotically inefficient, you will run out of time, money and patience long before you get to the level where you can understand, let alone do, the relevant research: it’s not a matter of working 10 times harder, it’s a matter of needing longer than the age of the universe, because your personal exponent eventually gets that much steeper than that of someone with a natural aptitude for math.
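A back-of-the-envelope check on that timescale, under the same illustrative assumptions as the toy model above (1-hour baseline, per-unit cost doubling): cumulative time after n units is 2^n − 1 hours, and the universe is only about 1.2 × 10^14 hours old, so the total overtakes it after a few dozen units.

```python
# Back-of-envelope: with per-unit cost doubling from a 1-hour baseline, how
# many "units" of material until cumulative study time exceeds the age of
# the universe (~13.8 billion years)? Parameters are illustrative.

AGE_OF_UNIVERSE_HOURS = 13.8e9 * 365.25 * 24  # roughly 1.2e14 hours

def units_until(limit_hours):
    """Smallest n such that the cumulative cost 2^n - 1 exceeds the limit."""
    total, n = 0.0, 0
    while total <= limit_hours:
        n += 1
        total += 2 ** (n - 1)  # the n-th unit costs 2^(n-1) hours
    return n

print(units_until(AGE_OF_UNIVERSE_HOURS))  # 47
```

Of course a doubling per unit is an extreme exponent; the qualitative point is just that any exponent at all eventually dominates, which a polynomial slowdown never does.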
This claim seems to run counter to the occasionally-encountered observation that highly talented math students may at some point be outperformed by less talented ones: the highly talented hit their wall later, without having picked up the patience and skills for dealing with it, whereas the less talented are used to slow but steady progress by then. E.g. Terry Tao mentions it:
Of course, even if one dismisses the notion of genius, it is still the case that at any given point in time, some mathematicians are faster, more experienced, more knowledgeable, more efficient, more careful, or more creative than others. This does not imply, though, that only the “best” mathematicians should do mathematics; this is the common error of mistaking absolute advantage for comparative advantage. The number of interesting mathematical research areas and problems to work on is vast – far more than can be covered in detail just by the “best” mathematicians, and sometimes the set of tools or ideas that you have will find something that other good mathematicians have overlooked, especially given that even the greatest mathematicians still have weaknesses in some aspects of mathematical research. As long as you have education, interest, and a reasonable amount of talent, there will be some part of mathematics where you can make a solid and useful contribution. It might not be the most glamorous part of mathematics, but actually this tends to be a healthy thing; in many cases the mundane nuts-and-bolts of a subject turn out to actually be more important than any fancy applications. Also, it is necessary to “cut one’s teeth” on the non-glamorous parts of a field before one really has any chance at all to tackle the famous problems in the area; take a look at the early publications of any of today’s great mathematicians to see what I mean by this.
In some cases, an abundance of raw talent may end up (somewhat perversely) to actually be harmful for one’s long-term mathematical development; if solutions to problems come too easily, for instance, one may not put as much energy into working hard, asking dumb questions, or increasing one’s range, and thus may eventually cause one’s skills to stagnate. Also, if one is accustomed to easy success, one may not develop the patience necessary to deal with truly difficult problems (see also this talk by Peter Norvig for an analogous phenomenon in software engineering). Talent is important, of course; but how one develops and nurtures it is even more so.
My experience with math as an entire field is the opposite: it gets easier to learn more the more I know. I think the big question is how you draw the box. If you draw it just around probability theory or something, the claim seems more plausible (though I still doubt it).