(this question is self-downvoted to keep it at the bottom.)
develop alignment methods focused on the user's own morality, the kind that open-weights AI users would still want for their decensored models