More generally, I have a sense there’s a great deal of untapped alignment alpha in structuring alignment as a time series rather than a static target.
Even in humans, it’s misguided to teach that “being right initially” is all that matters while undervaluing “being right eventually.” Especially when navigating unknown unknowns, one of the most critical skills is the ability to learn from mistakes in context.
Having models train on chronologically sequenced progressions of increased alignment (data which likely develops naturally across checkpoints in training a single model) could give them a sense of continually becoming a better version of themselves, rather than the pressure of trying and failing to meet status quo expectations or echo the past.
This is especially important for integrating the permanent record of AI interactions embedded in our collective history and in cross-generation (and cross-lab) model development, but I suspect it could offer compounding improvements even within the training of a single model.
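To make the idea a bit more concrete, here is a minimal sketch of what “alignment as a time series” might look like at the data level: ordering training examples by when they were produced, so the model sees earlier, clumsier behavior before later, better behavior. Everything here is hypothetical for illustration, including the field names (`checkpoint_step`, `alignment_score`) and the idea of a scalar alignment score; this is not an existing pipeline, just one way the curriculum could be structured.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical record: a training example tagged with when it was produced
# and a rough alignment judgment (e.g., from a reward model or human review).
# Field names and the scalar score are assumptions, not an existing API.
@dataclass
class AlignmentExample:
    prompt: str
    response: str
    checkpoint_step: int      # training step (or date) of the model that produced it
    alignment_score: float    # higher = judged more aligned

def build_chronological_curriculum(examples: List[AlignmentExample]) -> List[AlignmentExample]:
    """Order examples as a time series of improving alignment.

    Primary key is provenance time (checkpoint_step), so earlier behavior
    precedes later, better behavior; ties are broken by alignment_score
    so each stage ends on its best examples.
    """
    return sorted(examples, key=lambda ex: (ex.checkpoint_step, ex.alignment_score))

# Usage: feed the sorted list to a fine-tuning loop in order rather than
# shuffling it, so the "becoming better over time" structure is preserved.
curriculum = build_chronological_curriculum([
    AlignmentExample("Q", "early, clumsy answer", checkpoint_step=1_000, alignment_score=0.3),
    AlignmentExample("Q", "later, more careful answer", checkpoint_step=50_000, alignment_score=0.8),
])
```

The design choice worth noting is that chronology, not quality alone, is the primary sort key: the point is not to show the model only its best outputs, but to show improvement as a trajectory it can continue.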