imo a larger one is something like not rooting the foundations in "build your own models of the world so that you contain within you a stack trace of why you're doing what you're doing" + "be willing to be challenged and to update major beliefs based on deep-in-the-weeds technical arguments, and do so from a highly truth-seeking stance that knows what it feels like to actually understand something, not just hold an opinion".
Lacking this is fine-ish in global health, but in AI Safety it generates a crop of people with only a surface, deferral-flavored understanding of the issues, which is insufficient for orienting in a much less straightforward technical domain.