Generally we're asking for an AI that doesn't give an unambiguously bad answer: if there's any way of revealing the facts under which we think a human would (defensibly) agree with the AI, then the answer probably isn't unambiguously bad, and we're fine with the AI giving it.
There are lots of possible concerns with that perspective; probably the easiest way to engage with them is to consider some concrete case in which a human might make different judgments, but where it’s catastrophic for our AI not to make the “correct” judgment. I’m not sure what kind of example you have in mind and I have somewhat different responses to different kinds of examples.
For example, note that ELK is never trying to answer any questions of the form "how good is this outcome?"; I certainly agree that there can also be ambiguity in questions like "did the diamond stay in the room?", but that's a fairly different situation. The most relevant sections are "narrow elicitation and why it might be sufficient," which gives a lot of examples of where we think we can and can't tolerate ambiguity, and, to a lesser extent, "avoiding subtle manipulation," which explains how you might get a good outcome despite tolerating such ambiguity. That said, there are still plenty of reasonable objections to both.
When you say “some case in which a human might make different judgments, but where it’s catastrophic for the AI not to make the correct judgment,” what I hear is “some case where humans would sometimes make catastrophic judgments.”
I think such cases exist, and they're a problem for the premise that some humans agreeing with an answer means it meets some bar of quality. Stumbling into such cases by chance might not be a dealbreaker, but there are reasons to expect optimization pressure pushing the plans an AI proposes toward the limits of human judgment.