Possibly tangential, but I have found that the “try it yourself before studying” method is a very effective way to learn about a problem/field. It also lends a gut-level insight which can be useful for original research later on, even if the original attempt doesn’t yield anything useful.
One example: my freshman year of college, I basically spent the whole month of winter break banging my head against 3-SAT, trying to find an efficient algorithm to solve it and also just generally playing with the problem. I knew it was NP-complete, but hadn’t studied related topics in any significant depth. Obviously I did not find an efficient algorithm, but that month was probably the most valuable-per-unit-time I’ve spent in terms of understanding complexity theory. Afterwards, when I properly studied the original NP-completeness proof for 3-SAT, reduction proofs, the polynomial hierarchy, etc., it was filled with moments of “oh yeah, I played with something like this, that’s a clever way to apply it”.
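(For anyone who wants to do the same kind of playing: the natural starting point is the obvious exponential-time baseline, which any clever algorithm has to beat. Here's a minimal brute-force sketch, using DIMACS-style literals; the function name and clause encoding are my own choices, not anything from a standard library.)

```python
from itertools import product

def solve_3sat(clauses, n_vars):
    """Brute-force search over all 2^n truth assignments.

    A clause is a tuple of nonzero ints (DIMACS-style):
    literal k means "variable k is true", -k means its negation.
    Returns a satisfying assignment as a tuple of bools, or None.
    """
    for bits in product([False, True], repeat=n_vars):
        # bits[k-1] is the truth value assigned to variable k
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return bits
    return None  # exhausted all 2^n assignments: unsatisfiable

# (x1 or x2 or not x3) and (not x1 or x3 or x2) and (not x2 or not x3 or x1)
clauses = [(1, 2, -3), (-1, 3, 2), (-2, -3, 1)]
print(solve_3sat(clauses, 3))
```

Even this toy version makes the core question of the field concrete: the loop is over 2^n assignments, and the whole P vs NP story is about whether anything fundamentally better than that loop exists.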
Better example: over the years, I’ve spent a huge amount of time building models of financial markets. At one point I noticed some structures had shown up in one model which looked an awful lot like utility functions, so I finally got around to properly studying Arrow & Debreu-style equilibrium models. Sure enough, I had derived most of it already. I even had some pieces which weren’t in the textbooks (pieces especially useful for financial markets). That also naturally led to reading up on more advanced economic theory (e.g. recursive macro), which I doubt I would have understood nearly as well if I hadn’t been running into the same ideas in the wild already.