My expectation is that if you do the Alignment Game Tree exercise (and maybe a few others like it) relatively early, generally study what seems useful from there, and update along the way as you learn more, you’ll end up reasonably differentiated from other researchers by default. On the other hand, if you find yourself literally only studying ML, that would be a clear sign that you should diversify more (and, I’d guess, an indicator that you haven’t gone very deep into the Game Tree).