This makes sense, but it seems to be a fundamental difficulty of the alignment problem itself, as opposed to a limitation of any particular system trying to solve it. If the language model is superintelligent and knows everything we know, I would expect it to be able to evaluate its own alignment research as well as, if not better than, we can. The problem is that it can't get feedback from empirical reality about whether its ideas actually work, given the difficulties of testing alignment proposals — not that it can't get feedback from another intelligent grader/assessor reasoning in a roughly a priori way.