Motivated by getting real-world results ≠ motivated by the status and power that often accrue from real-world results. The interestingness of problems does not exist in a vacuum outside of their relevance. Even in theoretical research, I think problems that lead towards resolving a major conjecture are more interesting, which could be construed as a payoff-based motivation.
I’m not super happy with my phrasing, and Ben’s “glory” mentioned in a reply indeed seems to capture it better.
The point you make about theoretical research agrees with what I’m pointing at—whether you perceive a problem as interesting or not is often related to the social context and potential payoff. What I’m specifically suggesting is that if you took this factor out of ML, it wouldn’t be much more interesting than many other fields with a similar balance of empirical and theoretical components.