From my perspective, the interesting parts are “getting computers to think and do stuff” and getting exciting results, which hinges on the possible payoff rather than on whether the problem itself is technically interesting. As such, the problems seem to be a mix of empirical research and math, maybe with some inspiration from neuroscience, and it seems unlikely to me that they’re intellectually substantially different from those of other fields with a similar profile. (I’m not a professional AI researcher, so maybe the substance of the problems changes once you reach a level high enough that I can’t fathom it.)
i mean like writing kernels or hill climbing training metrics is viscerally fun even separate from any of the status parts. i know because long before any of this ai safety stuff, before ai was such a big deal, i would do ML stuff literally purely for fun without getting paid or trying to achieve glorious results or even publishing it anywhere for anyone else to see.
Motivated by getting real-world results ≠ motivated by the status and power that often accrue from real-world results. The interestingness of problems does not exist in a vacuum outside of their relevance. Even in theoretical research, I think problems that lead towards resolving a major conjecture are more interesting, which could be construed as a payoff-based motivation.
I’m not super happy with my phrasing, and Ben’s “glory” mentioned in a reply indeed seems to capture it better.
The point you make about theoretical research agrees with what I’m pointing at: whether you perceive a problem as interesting is often related to the social context and potential payoff. What I’m specifically suggesting is that if you took this factor out of ML, it wouldn’t be much more interesting than many other fields with a similar balance of empirical and theoretical components.