"intrinsic pride in the work, intellectual curiosity"
I mean, both of these seem like they will be more easily achieved by helping build more powerful AI systems than by building good tooling for alignment research.
Like I am not saying we can't tolerate any diversity in why people want to work on AI Alignment, but this is an early-career training program with no accountability. Selecting and cultivating motivation is by far the best steering tool we have! We should expect that if we ignore it, people will largely follow incentive gradients, or do kind of random things by our lights.