Not saying you’re wrong or that your experience is invalid, but this does not match my experience trying to do applied ML work or ML research. (More experienced people, feel free to chime in and tell me how wrong I am...) Granted, I’m not that experienced, but my experience so far has been that most ideas I’ve had are either ideas that people have already had and tried, or actually published on.
Further, my sense is that multiple experienced researchers often have all had similar ideas and the one who succeeds is the one who overcomes some key obstacle that others failed to overcome previously, so it’s not just that I’m new/bad.
I do think this mostly applies to areas that ML researchers think are interesting and somewhat tractable. So, for example, I wouldn’t make this claim about safety research, and I don’t know how much it applies to gesture recognition in particular.
That said, since you’re claiming your tools are better, I will note that the ML community does seem open to switching tools in general, as evidenced by the (somewhat gradual) shifts from Theano to TensorFlow and PyTorch over the past few years.
Would you be willing to describe at least at a high level what these tools let you do?
Edit: I’m more skeptical of the object-level claims than the title claim. Assuming you’re using the classical definition of efficient market, I agree that “ML tooling” adoption doesn’t follow efficient market dynamics in at least one respect.
“ML tooling” adoption doesn’t follow efficient market dynamics in at least one respect.
This is exactly what I mean.
I think that the ML community is open to switching certain kinds of tools (including the examples you listed) but that other kinds of tools are so far off the community’s radar that data scientists aren’t even aware of their value. This is hard to explain without getting into specifics and I’m not ready to talk about the details yet.
Got it—I agree discussing further probably doesn’t make sense without a concrete thing to talk about.