reason? (I intuitively agree with you, just curious)
Here is one reason, but it’s up for debate:
Deep learning courses rush through logistic regression and usually just mention SVMs. Arguably, really understanding deep learning requires taking the time to deeply understand how these linear models work, both theoretically and practically, both on synthetic data and on high-dimensional, real-life data.
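For what it's worth, here is a minimal sketch of the kind of hands-on exercise I mean: logistic regression from scratch on synthetic data, in plain NumPy (all names and hyperparameters here are my own choices, not from any course):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: two well-separated Gaussian blobs in 2D
n = 200
X = np.vstack([rng.normal(-2, 1, (n, 2)), rng.normal(2, 1, (n, 2))])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Append a bias column so the intercept is just another weight
Xb = np.hstack([X, np.ones((2 * n, 1))])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient descent on the mean negative log-likelihood (cross-entropy)
w = np.zeros(3)
lr = 0.1
for _ in range(500):
    p = sigmoid(Xb @ w)
    grad = Xb.T @ (p - y) / len(y)  # gradient of the mean NLL w.r.t. w
    w -= lr * grad

acc = np.mean((sigmoid(Xb @ w) > 0.5) == y)
print(f"train accuracy: {acc:.2f}")
```

Working through why the gradient is `X.T @ (p - y)` (via maximum likelihood), then watching the same recipe struggle or shine on messier, higher-dimensional data, is exactly the kind of understanding that transfers to neural networks.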
More generally, there are a lot of machine learning concepts that deep learning courses don’t have enough time to introduce properly, so they just mention them, and you might get a mistaken impression about their relative importance.
Another related point: right now, machine learning competitions are dominated by gradient boosting, not deep learning. This says nothing about whether to start with deep learning, but it is a good argument against stopping there.
It depends on the competition. Every Kaggle image-related competition I have seen has been obliterated by deep neural networks.
I am a researcher, albeit a junior one, and I completely disagree. Knowing about linear and logistic regression is interesting because neural networks evolved from there, but it's something you can cover with a couple of videos, maybe another one on maximum likelihood, and you're done. I'm not sure why SVMs are that important.