For me, the thing that distinguishes exposition from teaching is that in exposition one is supposed to produce some artifact that does all the work of explaining something, whereas in teaching one is allowed to jump in and e.g. answer questions or “correct course” based on student confusion. This ability to “use a knowledgeable human” in the course of explanation makes teaching a significantly easier problem (though still a very interesting one!). It also means, though, that scaling teaching would require scaling the creation of knowledgeable people, which is the very problem we are trying to solve. Can we make use of just one knowledgeable human, and somehow produce an artifact that can scalably “copy” this knowledge to other humans? -- that’s the exposition problem. (This framing is basically Bloom’s 2 sigma problem.)
Ah, I see! My immediate instinct is to say “okay, design a narrow AI to play the role of a teacher” but 1. a narrow AI may not be able to do well with that, though maybe a fine-tuned language model could after it becomes possible to guarantee truthfulness, and 2. that’s really not the point lol.
There is something to be said for interactivity though. In my experience, the best explanations I’ve seen have been explorable explanations, like the famous one about the evolution of cooperation. Perhaps we can look into what makes those good and how to design them more effectively.
Also, something like a market for explanations might be desirable. What you’d need is three kinds of actors: testers seeking people who possess a certain skill; students seeking to learn the skill; and explainers who produce explorable explanations that teach the skill. Testers reward the students who perform best at the skill, and students reward the explanations that seem to improve their success with testers the most. Somehow I feel like that could be massaged into a market where the best explanations command the highest value. (Failure mode: explainers bribe testers to design tests in such a way that students who learned from their explanations do best.)
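To make the incentive loop concrete, here’s a minimal toy simulation of that market, under some heavy simplifying assumptions: each explanation has a hidden “teaching quality”, students pick explanations in proportion to their current market value, testers score students as that quality plus noise, and each score nudges the chosen explanation’s value. All the names and parameters here are made up for illustration.

```python
import random

random.seed(0)

class Explanation:
    """A toy explanation with a hidden teaching quality and a public market value."""
    def __init__(self, name, quality):
        self.name = name
        self.quality = quality  # hidden: how well it actually teaches (0..1)
        self.value = 1.0        # public: the market's current estimate

def run_market(explanations, rounds=2000, lr=0.05):
    """Simulate the tester/student/explainer loop.

    Each round, a student chooses an explanation weighted by market value,
    a tester scores the student (true quality + measurement noise), and the
    student's "payment" moves the explanation's value toward the observed score.
    """
    for _ in range(rounds):
        weights = [e.value for e in explanations]
        chosen = random.choices(explanations, weights=weights)[0]
        # Tester scores the student's skill: quality of what they studied, plus noise.
        score = chosen.quality + random.gauss(0, 0.1)
        # Student rewards the explanation: exponential moving average toward the score.
        chosen.value += lr * (score - chosen.value)
    return explanations

explanations = run_market([
    Explanation("shallow tutorial", quality=0.3),
    Explanation("decent textbook", quality=0.6),
    Explanation("explorable explanation", quality=0.9),
])
best = max(explanations, key=lambda e: e.value)
print(best.name)
```

In this sketch, values converge toward the hidden qualities, so the best explanation ends up most valued. Note that it deliberately ignores the bribery failure mode above: testers here are honest noise-adding oracles, which is exactly the assumption a real market design would have to defend.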