i’ve noticed a life hyperparameter that affects learning quite substantially. i’d summarize it as “willingness to gloss over things that you’re confused about when learning something”. as an example, suppose you’re modifying some code and it seems to work, but you also see a warning from an unrelated part of the code that you didn’t expect. you could either try to understand exactly why it happened, or just sort of ignore it.
reasons to set it low:
each time your world model is confused, that’s an opportunity to get a little bit of signal to improve your world model. if you ignore these signals you increase the length of your feedback loop, and make it take longer to recover from incorrect models of the world.
in some domains, it’s very common for unexpected results to actually be a hint at a much bigger problem. for example, many bugs in ML experiments cause results that are only slightly weird, but if you tug on the thread of understanding why your results are slightly weird, this can cause lots of your experiments to unravel. doing so earlier rather than later can save a huge amount of time (a small sketch of what this can look like follows this list).
understanding things at least one level of abstraction down often lets you do things more effectively. otherwise, you have to constantly maintain a bunch of uncertainty about what will happen when you do any particular thing, and you’ll have a harder time thinking of creative solutions.
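(to make the ML-experiments point concrete: here’s a minimal sketch, in python, of the kind of cheap sanity checks that turn slightly-weird results into loud signals instead of things you quietly gloss over. the function name, thresholds, and metrics are invented for illustration, not taken from any particular codebase.)

```python
import math

def sanity_check_metrics(step, loss, grad_norm, val_acc, baseline_acc):
    """cheap checks that turn 'slightly weird' results into loud failures."""
    # a non-finite loss is almost always a real bug, not noise
    if not math.isfinite(loss):
        raise ValueError(f"step {step}: loss is {loss}, something is broken")
    # a near-zero gradient norm often means a detached graph or accidentally frozen params
    if grad_norm < 1e-8:
        print(f"step {step}: grad norm {grad_norm:.2e} is ~0, worth investigating")
    # results only slightly worse than baseline are exactly the kind of signal
    # that's tempting to gloss over; flag them explicitly instead
    if val_acc < baseline_acc - 0.005:
        print(f"step {step}: val acc {val_acc:.3f} is below baseline {baseline_acc:.3f}")
```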
reasons to set it high:
it’s easy to waste a lot of time trying to understand relatively minor things instead of understanding the big picture. often it’s more important to 80-20 by understanding the big picture and fill in the details when they become important (which they often only do in rare cases).
in some domains, we have no fucking idea why anything happens, so you have to be able to accept that we don’t know why things happen to be able to make progress
often, if e.g. you don’t quite get a claim that a paper is making, you could resolve your confusion just by reading a bit ahead. if you always try to fully understand everything before pressing on, you’ll find it very easy to get stuck before actually making it to the main point the paper is making.
there are very different optimal configurations for different kinds of domains. maybe the right approach is to be aware that this is an important hyperparameter, and to occasionally try going down some rabbit holes and see how much value it provides.
This seems to be related to Goldfish Reading, or maybe complementary to it. In Goldfish Reading one reads the same text multiple times, not trying to understand it all at once or remember everything, i.e., intentionally ignoring confusion, but in a structured form to avoid overload.
Yeah, this seems like a good idea for reading: it lets you get the best of both worlds. Though it works for reading mostly because rereading doesn’t take that much longer. It doesn’t translate as directly to, e.g., what to do when debugging code or running experiments.
I think it’s very important to keep track of what you don’t know. It can be useful not to try to get the best model when that’s not the bottleneck, but I think it’s always useful to explicitly keep track of which models you have developed to what extent.
The algorithm I have been using, where how much to understand is not a hyperparameter, is to just solve the actual problems I want to solve, and to always slightly overdo the learning, i.e. I always learn a bit more than necessary to solve whatever subproblem I am solving right now. E.g. I am just trying to make a simple server, and then I also learn about the protocol stack.
This has the advantage that I am always highly motivated to learn something, as the path to the problem on the graph of justifications is always pretty short. It also ensures that none of the things I learn are completely unrelated to the problem I am solving.
I am pretty sure this would not be the best algorithm if you had perfect control over your motivation, but given that you don’t, it is the best algorithm I have found so far.