Welcome to LessWrong! Feel free to introduce yourself in the welcome thread.
That is a very good summary and review for those who want to brush up on dynamic programming—it gives several example problems and cost functions to be minimized, and shows how the optimal substructure fits in.
I do have to say that the bit on the tradeoff between overfitting and accuracy is not terribly useful for those trying to understand such things. It is a cookbook method, with no justification for why these particular error weightings work well.
EDIT: Of course, almost any regularization will help compared to nothing, and it does show a nice way to do this with dynamic programming, which can greatly speed things up over naive implementations.
For anyone interested, here is a decent algorithm for getting the "correct" number of line segments in your piecewise linear regression.
http://www.cs.princeton.edu/~wayne/kleinberg-tardos/06dynamic-programming-2x2.pdf
Pages 5 and 6.
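For reference, the algorithm on those slides (segmented least squares) can be sketched roughly as follows. The function name and the choice of `C` as the per-segment penalty are my own conventions for illustration; the recurrence is OPT(j) = min over i of err(i, j) + C + OPT(i − 1), where err(i, j) is the squared error of the best single line through points i..j.

```python
def segmented_least_squares(xs, ys, C):
    """Sketch of the segmented least squares DP (Kleinberg & Tardos, ch. 6).

    xs, ys: points sorted by x-coordinate.
    C: penalty added per segment, trading accuracy against overfitting.
    Returns (total cost, list of (start, end) index pairs for each segment).
    """
    n = len(xs)

    # err[i][j] = sum of squared errors of the best-fit line through
    # points i..j (0-indexed, inclusive), via running sums in O(n^2).
    err = [[0.0] * n for _ in range(n)]
    for i in range(n):
        sx = sy = sxx = sxy = syy = 0.0
        for j in range(i, n):
            m = j - i + 1
            sx += xs[j]; sy += ys[j]
            sxx += xs[j] * xs[j]; sxy += xs[j] * ys[j]; syy += ys[j] * ys[j]
            den = m * sxx - sx * sx
            a = (m * sxy - sx * sy) / den if den != 0 else 0.0  # slope
            b = (sy - a * sx) / m                                # intercept
            # SSE expanded algebraically; clamp tiny negative rounding noise.
            err[i][j] = max(0.0, syy - 2 * a * sxy - 2 * b * sy
                            + a * a * sxx + 2 * a * b * sx + m * b * b)

    # OPT[j] = min cost of segmenting the first j points; OPT[0] = 0.
    OPT = [0.0] * (n + 1)
    cut = [0] * (n + 1)  # cut[j] = start (1-indexed) of the last segment
    for j in range(1, n + 1):
        best, arg = float('inf'), 0
        for i in range(1, j + 1):
            cost = err[i - 1][j - 1] + C + OPT[i - 1]
            if cost < best:
                best, arg = cost, i
        OPT[j], cut[j] = best, arg

    # Trace back the segment boundaries.
    segs, j = [], n
    while j > 0:
        i = cut[j]
        segs.append((i - 1, j - 1))
        j = i - 1
    segs.reverse()
    return OPT[n], segs
```

On tent-shaped data like `xs = [0,1,2,3,4,5]`, `ys = [0,1,2,3,2,1]` with `C = 1.0`, the DP should split into two perfectly fit segments for a total cost of 2C, rather than overfitting with many segments or underfitting with one. The penalty `C` is exactly the cookbook knob the slides leave unjustified: larger `C` means fewer segments.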
Ouch. Comic Sans.
Good cookbook, though.