Questions are tools to help answerers optimize utility

Epistemic & Scholarly Status: Fairly quickly written. I’m sure there’s better writing out there on the topic somewhere, but I haven’t found it so far. I have some confidence in the main point, but the lack of precise terminology around it makes it difficult to be concrete.

TLDR
The very act of asking a question rests on assumptions that break down once the answerer is capable enough. Questions stop making sense once the questioner has sufficient trust in the answerer. Past some threshold, the answerer will instead be trusted to reach out directly whenever appropriate. I think this insight can help shed light on a few dilemmas.


I’ve been doing some reflection on what it means to answer a question well.

Questions are often poorly specified or poorly chosen. A keen answerer should often not only give an answer but also provide a better question. But how far can this go? If the answerer could be more useful by ignoring the question altogether, should they? Perhaps there is some fundamental reason why we should want answerers to act as oracles instead of as more general information feeders.

My impression is that situations where we have incredibly intelligent agents doing nothing but answering questions are artificial and contrived. Below I attempt to clarify this.

Let’s define some terminology:

Asker: The agent asking the question.

Answerer: The agent answering the question. This could be the same entity as the asker, just later in time. “Agent” here just means “entity”; I’m not taking a stance on agents vs. tools.

Asked question: The original question that the asker asks.

Enlightened question: The question that the asker should have asked, if they were to have had more information and insight. This obviously changes depending on exactly how much more information and insight they have.

Ideal answer: The best attempt to directly answer a question. This could either be the asked question or an enlightened question. Answer quality is evaluated for how well it answers the question, not how well it helps the asker.

Ideal response: The best response the answerer could provide to the asker. This is not the same as the ideal answer. Response quality is evaluated by how well it helps the asker, not by how well it answers the question.

Utility: A representation of one’s preferences, in the sense of a utility function, not utilitarianism.
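One rough way to sharpen the answer/response distinction (this formalization is my own sketch; the symbols below aren’t standard): let $q$ be the asked question, $a$ a candidate answer, $r$ a candidate response, and $U$ the asker’s utility. Then:

$$\text{ideal answer}(q) = \operatorname*{arg\,max}_{a} \; \operatorname{AnswerQuality}(a, q)$$

$$\text{ideal response} = \operatorname*{arg\,max}_{r} \; \mathbb{E}\big[\,U \mid \text{the answerer sends } r\,\big]$$

The ideal answer is scored entirely against $q$; the ideal response treats $q$ as just one piece of evidence about what would actually help.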

Examples

Question: What’s the best way to arrive at my dentist appointment today?

The answer to the asked question could be,

Take Route 83 at 6:30pm

The answer to an enlightened question could be,

Your dentist is sick, so your appointment will definitely be cancelled

A good response that takes the question into account but doesn’t answer it might be,

It doesn’t really matter which route you take, but now that I know you’re concerned about the trip, I can tell you that the best way to save time today would be to order food from Sophia Gartener at 3:30. It will arrive at 5.

A good response that ignores the question entirely (or, more precisely, correctly doesn’t update on it) and simply optimizes the asker’s utility might be,

There’s a possible power outage happening in the next few days. I suggest borrowing a generator sometime tomorrow. I’ve left instructions for how to do so in an email.

The puzzle with the latter responses is that they seem like poor answers, even though they are helpful responses. The obvious move here is to flag that this is a very artificial scenario. In a more realistic case, the last response would have been given before the question was asked. The asker would learn to trust that the answerer would tell them everything useful before they even realized they needed to know it. They would likely either stop asking questions or ask very different sorts of questions.

The act of asking a question implies (it almost presupposes) an information asymmetry. The asker assumes that the answerer either lacks some information or hasn’t drawn attention to it. If the answerer actually does have this information (i.e., they can intuit what is valuable to the asker and when), then it wouldn’t make sense to ask the question. This is an instance of Grice’s maxim of relevance.

So, questions make sense only until answerers get good enough. This is a really high bar: being “good enough” would likely require tremendous predictive power and deep understanding of humans. The answerer would have to be much more capable in the given area than the asker for this to work.

Breakdown

If we break down the information conveyed by the above question, we can identify a more precise and empathetic response that a very smart being might give.

You’ve asked me how to get to your dentist appointment. This reveals to me the following information:
1. You are unsure about how to get to a dentist appointment.
2. You believe that the expected information value you can get to optimize your route is more valuable than the cost of asking the question.
3. You expect that I either couldn’t have predicted that this information would be valuable to you, or that I wouldn’t have told you unless asked.
4. You do not expect me to have much more valuable information I could relay to you at this time.

Human, I believe you have dramatically underestimated my abilities; you are severely incorrect about points 3 and 4. You have much to learn about how to interact with me.
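To put a hedged formalization on points 2–4 (my own notation, nothing canonical): the asker should ask $q$ only when the expected value of the information exceeds the cost of asking,

$$\mathbb{E}\big[\,U \mid \text{answer to } q\,\big] - \mathbb{E}\big[\,U \mid \text{no question asked}\,\big] > c_{\text{ask}}.$$

Points 3 and 4 amount to the asker believing the baseline term $\mathbb{E}[\,U \mid \text{no question asked}\,]$ is low, i.e. that the answerer wouldn’t have volunteered the information. A sufficiently capable answerer raises that baseline until the left-hand side drops below $c_{\text{ask}}$ for every question the asker can formulate, at which point asking stops paying for itself.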

Students and Professors

Another analogy is that of students and professors. Many students don’t ask their professors any questions, particularly in large classes. They expect the professors to lead them through all of the important information, and they expect that the professors are better informed about which information matters.

In many situations the asker is the one trying to be useful to the answerer, rather than the other way around. For example, the professor could ask the students questions to home in on what information might be most useful to them. I imagine that as this hypothetical empathetic professor improves along a few particular axes, they will be asked fewer questions and will ask more. In this latter case, the questions are mainly a form of elicitation, used to learn about the answerer.

Corrigibility

There could well be situations where answerers believe they could respond better with a non-answer, but the askers would prefer otherwise. This becomes an issue of corrigibility, and there could be a clear conflict between the two. I imagine these issues will represent a minority of the future uses of such systems, but those instances could be particularly important. This is a big rabbit hole that has been discussed in depth in posts on corrigibility and related topics, so I’ll leave it out of this post.

Takeaways

I think that:

  • Answerers should generally try to figure out the enlightened questions and answer those. This approach will usually be the best one for the asker’s utility.

  • If answerers can better help askers by ignoring the question and doing something else instead, they should do so. They should aim to give the ideal response, not the ideal answer.

  • However, in almost all cases today, the best response to attempt is the ideal answer. This is mainly because askers often have key information that isn’t accessible to answerers. When askers ask questions, they typically believe there’s sufficient benefit to having these particular questions answered, so humble answerers should usually trust them.

  • Once ideal responses become very different from ideal answers, people will stop asking questions. Questions primarily serve to help responses be more useful, so once that stops being true, questions will no longer be valuable.

Correspondingly, I imagine that as AGI gets close, people might ask fewer and fewer questions; instead, relevant information will increasingly be pushed to them. A really powerful oracle wouldn’t stay an oracle for long; it would quickly get turned into an information feed of some kind.
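To make the oracle-to-feed transition concrete, here is a toy sketch in Python. Everything in it is my own construction for illustration (the names ProactiveAssistant and expected_utility_gain, and all the numbers, are made up): an assistant that pushes any message whose expected benefit to the asker exceeds the cost of interrupting them, whether or not a question was ever asked.

```python
from dataclasses import dataclass

@dataclass
class Message:
    text: str
    expected_utility_gain: float  # answerer's estimate of how much this helps the asker

class ProactiveAssistant:
    """Toy model of an 'oracle' that pushes information instead of waiting for questions."""

    def __init__(self, attention_cost: float):
        # Utility cost of interrupting the asker with one message.
        self.attention_cost = attention_cost

    def push_worthy(self, candidates: list[Message]) -> list[Message]:
        # Push exactly those messages whose expected benefit beats the
        # interruption cost, most valuable first. No question is needed
        # to trigger any of these.
        worthy = [m for m in candidates if m.expected_utility_gain > self.attention_cost]
        return sorted(worthy, key=lambda m: m.expected_utility_gain, reverse=True)

    def respond(self, question: str, candidates: list[Message]) -> list[Message]:
        # A question is just one more piece of evidence about the asker's
        # situation. If the assistant already models the asker well, the
        # question changes little, and the response is the same feed it
        # would have pushed anyway.
        return self.push_worthy(candidates)

assistant = ProactiveAssistant(attention_cost=1.0)
candidates = [
    Message("Take Route 83 at 6:30pm", expected_utility_gain=0.5),
    Message("Your dentist is sick; the appointment will be cancelled", expected_utility_gain=3.0),
    Message("Possible power outage soon; borrow a generator tomorrow", expected_utility_gain=8.0),
]
for m in assistant.respond("What's the best way to get to my dentist?", candidates):
    print(m.text)
```

Note how the route answer never gets pushed: once the answerer models the asker this well, the direct answer to the asked question falls below the attention threshold, which is exactly why the asker would eventually stop asking.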


Thanks to Rohin Shah for discussion and comments on this piece