But what I wanted to convey is that it feels like I’m supposed to learn something which is manifestly inferior, in its logical foundation, to what is already known and available.
I think it’s very useful, when you are a beginner, to be able to listen to someone with domain expertise telling you when you are wrong.
But then I’m allowed to ask “why?”, and if the answer is “because I say so”, then I feel pretty confident in dismissing the expert.
But that’s not even the stage I’m at. A book is not an interactive medium, so the exchange has gone like this:
book: Cross-validation!
me: “Gaaaak! That sounds totally wrong! Is there anyone who can explain to me either why this is right or, if it’s actually wrong, what the correct approach is?”
Also, although in this case there seems to be an available answer, I don’t think it makes sense to always expect that. Sometimes people find a technique that tends to work in practice and then only later come up with a theoretical explanation of why it works. If you happen to live in the period in between...
Heh! I’ve suddenly remembered that LW was founded precisely because the fields of AI and ML used too much frequentist (il)logic. The Sequences were supposed to restore sanity in the field. Anyway, the textbook you mentioned seems pretty cool, thank you very much!
I’m no expert at machine learning. However, as far as I remember, the point of doing cross-validation is to find out whether your model is robust.
Robustness is not a standard “Bayesian” concept. Maybe you don’t appreciate its value?
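For concreteness, here is a toy sketch of what cross-validation does: split the data into k folds, fit the model on k−1 of them, score it on the held-out fold, and rotate through all folds. Nothing below is from the textbook under discussion; the data and the trivial mean-predictor “model” are invented purely for illustration.

```python
import statistics

def k_fold_cv(data, k=5):
    """Estimate out-of-sample error of a mean predictor via k-fold CV.

    data: list of observed values. The "model" here simply predicts
    the training mean (the simplest possible estimator).
    Returns the average squared error over the held-out folds.
    """
    fold_errors = []
    for i in range(k):
        # Every k-th element starting at index i forms the held-out fold.
        test = data[i::k]
        train = [x for j, x in enumerate(data) if j % k != i]
        prediction = statistics.mean(train)  # "fit" the model on the training split
        fold_errors.append(
            statistics.mean((x - prediction) ** 2 for x in test)
        )
    return statistics.mean(fold_errors)

# Usage: a CV error much larger than the error on the data the model
# was fit to is the classic symptom of overfitting.
cv_error = k_fold_cv([2.0, 2.1, 1.9, 2.2, 2.0, 1.8, 2.1, 1.9, 2.3, 2.0])
```

The robustness point above is visible here: the score is always computed on data the model never saw during fitting, so a model that merely memorizes its training set gets no credit.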
I would appreciate it if there were an explanation of why something is done the way it is. Instead it’s all about learning the passwords. Maybe the main textbook in the field is just pedagogically bad; it wouldn’t be the first time.
Getting a deep understanding of a complex field like machine intelligence isn’t easy. You shouldn’t expect it to be easy, or something you can acquire in a few days.
This is probably very arrogant of me to say, but my advice would be: “Listen to the domain expert when he tells you what you should do… and then find a Bayesian and let them explain to you why that works.”
In my defense, this was my personal experience with statistics at school. I was very good at math in general, but statistics somehow didn’t “click”. I always had this feeling as if what was explained was built on some implicit assumptions that no one ever mentioned explicitly, so unlike with the rest of math, I had no choice here but to memorize that in situation x you should do y, because, uhm, that’s what my teachers told me to do.

More than ten years later, I read LW, and here I am told that yes, the statistics I was taught does have implicit assumptions, and suddenly it all makes sense. And it makes me very angry that no one told me this stuff at school.

I am a “deep learner” (this, not this), and I have a problem learning something when I am told how but can’t find out why. Most people probably don’t have a problem with this: they are told how, and they do it, and can be quite successful with it; and probably later they will also get an idea of why. But I need to understand the stuff from the very beginning, otherwise I can’t do it well. Telling me to trust a domain expert does not help; I may place high confidence in the how, but I still don’t know the why.
ChristianKI is not telling you to trust a domain expert, but rather to read or listen to the domain expert long enough to understand what they are saying (rather than instantly assuming they are wrong because they say something that seems to conflict with your preconceived notions).
I think if you were to read most machine learning books, you would get quite a lot of “why”. See this manuscript for instance. I don’t really see why you think that Bayesians have a monopoly on being able to explain things.
I think you make a mistake if you put a school teacher who doesn’t understand statistics on a deep level into the same category as academic machine learning experts who don’t happen to be “Bayesians”.
I’m still searching for an answer...
Try this paper or page 403 of this textbook.