E.g. building good tooling for alignment research doesn’t require this at all.
What do you mean? Of course it does, or at least something close to it. If you don’t care about it, you’ll just take the highest-paying job, which will definitely not be building good tooling for alignment research! Motivation is a necessary component of doing good work, and if you aren’t motivated to do good work by my lights, then you won’t do good work, so good motivations are indeed necessary.
I think there exist people who don’t care a huge amount / feel relatively indifferent about X-risk, but with whom you can nonetheless form beneficial coalitions / make profitable transactions, useful for reducing X-risk. Building tools seems like one thing among many that can be contracted out.
“If they don’t care about X-risk, they must be maximally money-minded” seems fallacious: those are just two different motivations in the set of all motivations, and it’s possible to be neither. And many things can motivate someone to do good work: intrinsic pride in the work, intellectual curiosity, and so on.
“intrinsic pride in the work, intellectual curiosity”
I mean, both of these seem like they will be more easily achieved by helping build more powerful AI systems than by building good tooling for alignment research.
I’m not saying we can’t tolerate any diversity in why people want to work on AI Alignment, but this is an early-career training program with no accountability. Selecting and cultivating motivation is by far the best steering tool we have! We should expect that if we ignore it, people will largely follow incentive gradients, or do kind of random things by our lights.