Aren’t these basically mostly “works on capabilities because of status + power”?
(E.g. if you only care about challenging technical problems, you’ll just go do math)
I think of it as ‘glory’.
I think a better model is meaning (or self-actualization). There’s some meaning to be found in being a tragic hero racing to build AGI “”“safely””” who is killed by an unfair universe. Much less is to be found in being an unsuccessful policy advocate who tried and failed because the goal was politically intractable, which was obvious to everyone from the start.
I think most of the people involved like working with the smartest and most competent people alive today, on the hardest problems, in order to build a new general intelligence for the first time since the dawn of humanity, in exchange for massive amounts of money, prestige, fame, and power. This is what I refer to by ‘glory’.
I personally find the technical problems in capabilities more appealing than the ones in math, purely in terms of fun. They are simply different kinds of problems that appeal to different people.
From my perspective, the interesting parts are “getting computers to think and do stuff” and getting exciting results, which hinges on the possible payoff rather than whether the problem itself is technically interesting or not. As such, the problems seem to be a mix of empirical research and math, maybe with some inspiration from neuroscience, and it seems unlikely to me that they’re intellectually substantially different from other fields with a similar profile. (I’m not a professional AI researcher, so maybe the substance of the problems changes once you reach a high enough level that I can’t fathom.)
I mean that writing kernels or hill-climbing training metrics is viscerally fun, separate from any of the status parts. I know because long before any of this AI safety stuff, before AI was such a big deal, I would do ML work purely for fun, without getting paid, without trying to achieve glorious results, and without publishing it anywhere for anyone else to see.
Motivated by getting real-world results ≠ motivated by the status and power that often accrue from real-world results. The interestingness of problems does not exist in a vacuum outside of their relevance. Even in theoretical research, I think problems that lead towards resolving a major conjecture are more interesting, which could be construed as a payoff-based motivation.
I’m not super happy with my phrasing, and Ben’s “glory”, mentioned in a reply, does seem to capture it better.
The point you make about theoretical research agrees with what I’m pointing at—whether you perceive a problem as interesting or not is often related to the social context and potential payoff.
What I’m specifically suggesting is that if you took this factor out of ML, it wouldn’t be much more interesting than many other fields with a similar balance of empirical and theoretical components.