(This post was recently linked to by Kaj Sotala as a good explanation of “woo” topics, which is why I am commenting on it now.)
“Neo: What are you trying to tell me? That I can dodge bullets?
Morpheus: No, Neo. I’m trying to tell you that when you’re ready, you won’t have to.”
So, the thing about this quote is that it’s actually a great example of someone giving a willfully obscurantist, unhelpful, and untrue answer to a perfectly sensible question.
After all, Neo really does learn to dodge bullets. He does! Look, here is a video:
https://www.youtube.com/watch?v=Kc4cBiSXoCs
There’s Neo, dodging bullets like nobody’s business.
I’m not the first to notice that Morpheus seems to simply have been bad at explaining things. If he had genuinely been interested in answering Neo’s question, instead of trying to sound like a cryptic mystical mentor stereotype, mightn’t Morpheus have said something more like:
Morpheus: Yes, Neo. That is exactly what I’m telling you. You are going to learn how to dodge bullets. And you will gain a number of other very impressive superpowers—many of which will also be useful in combat. In fact, some of those superpowers may well obviate any need for you to dodge bullets! (For example, by telekinetically controlling the bullets so that they don’t even hit you, or by just destroying your enemies with a thought, etc.) But, of course, you will totally also be able to dodge bullets if that’s what you feel is appropriate in any given situation.
The point, if you like, is that if you’re asked to explain some “woo” or “mysticism” or whatever, and you find yourself sounding like Morpheus sounds in the movie, you’re doing it wrong.
I’m as sure as I can be that both of these have happened exactly as described, and that the people in them experienced exactly what’s described. … [two stories]
You say that you’re as sure as you can be that these things happened as you describe. Fair enough. The problem is that “as sure as you can be” is not, actually, very sure at all! In fact, given what you’ve described, you really have almost no reason to believe that these events happened as you say—and plenty of reason not to so believe. In other words, the evidence you have for believing that these events took place as described is extremely weak; and their prior probability is very, very low. Based on what you have told us, the correct epistemic state for you to be in, concerning these two events, is “they probably did not happen as I have here described them”.
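To see the quantitative shape of that claim, consider a worked example in odds form (the specific numbers here are mine, chosen purely for illustration; the post supplies none):

\[
\underbrace{\frac{P(H \mid E)}{P(\neg H \mid E)}}_{\text{posterior odds}}
\;=\;
\underbrace{\frac{P(E \mid H)}{P(E \mid \neg H)}}_{\text{likelihood ratio}}
\times
\underbrace{\frac{P(H)}{P(\neg H)}}_{\text{prior odds}}
\;\approx\; 10 \times 10^{-6} \;=\; 10^{-5}
\]

Here H is “the event happened exactly as described” and E is the sincere first-person report. Even granting that such testimony is ten times likelier if the event really happened, a one-in-a-million prior only rises to posterior odds of about 10⁻⁵. To reach even odds you would need a likelihood ratio on the order of a million, which “I’m as sure as I can be” does not come close to supplying.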
What is important to note, here, is that your evaluation of the state of the evidence in these cases, your conclusions about them, and your judgment of their value as illustrative cases for your post’s claims and general perspective, themselves all constitute Bayesian evidence—for the reader—toward evaluating this post, and the claims therein.
The point, if you like, is that if you’re asked to explain some “woo” or “mysticism” or whatever, and you find yourself sounding like Morpheus sounds in the movie, you’re doing it wrong.
One thing that’s interesting about explanations and tutoring is that often, when a student asks a question, two things come up: the answer to the question as they asked it, and the question they should have asked instead. On StackOverflow, this gets referred to as the XY problem. (The canonical example: someone asks how to grab the last three characters of a filename, when what they actually want is the file extension; a concrete sketch follows below.) The general advice there lines up with yours—both answer the question they asked, and point at how they could know whether or not it’s the right question—but it’s not obvious to me that this also applies in non-technical contexts, where these moments of frustration with one’s model or approach might be the primary opportunities for making progress on the Y problem. If that opportunity would evaporate when you presented a solution to X, then Morpheus’s strategy seems better.
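To make that canonical example concrete, here is a minimal sketch in Python (the filename and the specific variable names are my own illustration, not anything from the thread):

```python
import os

# X -- the question as asked: "How do I get the last three
# characters of a filename?"
filename = "archive.txt"
extension_x = filename[-3:]  # "txt"; silently wrong for "photo.jpeg" or "Makefile"

# Y -- the question they should have asked: "How do I get a file's extension?"
_, extension_y = os.path.splitext(filename)  # ".txt"; correct for any extension length
```

Answering X alone hands the questioner a brittle solution; the question under dispute is whether the good tutor answers X, answers Y, answers both, or, Morpheus-style, answers neither.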
… these moments of frustration with one’s model or approach might be the primary opportunities for making progress on the Y problem. If that opportunity would evaporate when you presented a solution to X, then Morpheus’s strategy seems better.
I am highly skeptical of such claims—but for the sake of argument, let’s grant the possibility, in full and without reservation.
Now suppose I ask you such an “X question”, and you do the usual thing, where you refuse to answer my question, or claim that you are answering it but actually give a Morpheus-style non-answer, etc., all because (so you claim) you’re trying to help me progress on the “Y problem”.
And suppose that actually there is an answer to my question, and you do know this answer, and thus you could, if you wanted to, simply answer my question (while knowing that ultimately, in some greater or more important sense, the answer will not help me).
And now suppose I say to you: “Yes, yes, I understand that you think answering my ‘X question’ won’t help me; I understand that you’re trying to help me solve some ‘Y problem’ that you think is actually my problem, or that you think is more important, etc., etc. I understand that you’re trying to be helpful. But I want you to answer my actual question anyway.”
If, in response to this, you still refuse to give a straight answer (again, recall that we’re assuming that you could easily answer my “X question”!), then I must conclude that you are—and I hope you’ll forgive the language—rather a huge asshole. Because that’s what it is, when you judge yourself to have more right to make decisions on my behalf than I do, in direct contravention to my explicitly stated choices, and when you so blatantly disregard my agency. After all, what else is it, to say: “No, for all your protestations, I simply know better than you what information you should acquire and when and how and in what order, and what your goals ought to be vis-a-vis your epistemic advancement; and I, and not you, have the right to decide what you ought to be told, and what you ought not be told; and your wishes in the matter are simply irrelevant.”
… the other possibility, of course, is that one of our assumptions fails to hold. (Perhaps the one about there actually being an answer to the “X question”, and you knowing the answer?)
In any case, that’s the dilemma: blatant disregard for agency and autonomy, or intellectual fraud. Either is entirely possible, a priori; in tech-oriented communities I have encountered quite a bit of the former, for example. Determining which of these is the case for the topic at hand is, I suppose, an exercise for the reader.
EDIT: What is also, in my experience, quite indicative, is whether the answer-giver admits that he could simply answer the given “X question”, and makes his refusal straightforward; or whether he gives (apparently-)obscurantist responses even to attempts to determine his policy.
For example, I have encountered situations where a technical “X question” was asked, and a knowledgeable respondent, upon being pressed by the questioner, said something along these lines: “Yes, indeed I could simply answer your question, which does, as stated, have a straightforward answer, which I know and could give you. But I will not do this; because if I do, then whatever task you’re trying to accomplish, you will mess up, and you will—believe me, newbie, I’ve seen this play out many times before!—you will be angry at me, and you’ll come back here and you’ll have bigger problems and pester me with more questions, and all this can be avoided if you accept my judgment and advice, as I am your superior in these matters and I am telling you what you need to know, not what you want to know.”
Well, fair enough. There’s still a good chance that this person is being a jerk, of course. (But perhaps understandably so? After all, among the ranks of clueless newbies there are very few who can take responsibility for their decisions, and who will know not to blame the honest question-answerer for their own shortsightedness…) But at least we (in the role of questioner) know where we stand! At least the respondent makes clear to us that he is outright refusing to give an answer—and we can judge him for his choices, fair and square.
But how do we judge a Morpheus-wannabe? Is he able to answer, but refuses? Or is he feeding us a line of mumbo-jumbo because there is no answer? Lack of clarity even at the conversational meta level—that is a very bad sign!
To be clear, I agree with this, which is a reason why I generally try to give the involved explanation that bridges to where (I think) the other person is; in my experience the opportunity rarely evaporates, and even if it does, a straightforward refusal like the one you mention seems more promising.
The point, if you like, is that if you’re asked to explain some “woo” or “mysticism” or whatever, and you find yourself sounding like Morpheus sounds in the movie, you’re doing it wrong.
In my opinion this is true of most mentors in fiction. The mentoring we see on screen tends to be shitty mentoring, presumably because the writers (or their bosses) believe that showing actual mentoring would lead to a less dramatic story.
So mentors in fiction should not be used as role models.