Underelicitation assumes that a “maximum elicitation” exists, rather than a never-ending series of deeper and deeper layers of elicitation that could be discovered.
You’ve undoubtedly spent much more time thinking about this than I have, but I’m worried that attempts to maximise elicitation merely accelerate capabilities without substantially boosting safety.
Chris_Leong comments on AI #90: The Wall