Explaining is nothing but tracing out your own internal model’s inferential relationships between the concepts.
I disagree. Your internal model cannot be copied into anyone else’s head just by expounding it. To explain something successfully—that is, to get someone else to understand something—you have to take account of the state of the person you are explaining it to. An explanation that one person finds a model of clarity, another may find tedious and confusing. (I have seen both reactions to Eliezer’s article on Bayes’ theorem.)
When I am assisting students in a computer laboratory, and a student indicates they have a problem, the question I ask myself when I listen to them is “what information does this student need, and not have?” That is what I seek to provide, not a dump of my own thought processes around the subject.
I generally get favourable feedback, so I think I’m onto something here.
As a general rule, explanations share this property with software: until you have tried it and seen it work, you do not know that it works.
I agree with and practice all of that, so I was oversimplifying with the part you quoted. I should probably have said something more like,
“Explaining starts from tracing out your internal model’s inferential relationships between the concepts, and proceeds by finding how it can connect to—and if necessary, correct—the listener’s ontology.”