Well, turnabout is fair play. I’m not an old fart, but I’ve held a position whose occupants are known for pleading inability to convey their knowledge to the unwashed masses until those masses get roughly the same experience: that of a graduate student (in control systems), working on a problem at the boundary of current knowledge.
I was very interested in learning what it would be like to get to a state where I literally could not explain the problem I’m working on to people far outside my field (though of course that was not why I went to grad school).
And you know what? It never happened. To an intelligent person, I was always capable of bringing them up to speed on the problem I was working on and the related mathematical tools and formalisms. It certainly took some time, as I had to fill in the gaps in their mathematical background, but absolutely not on the order of years. Maybe an hour or two instead.
I might one day find strong enough evidence to reverse my position, but for now, as best I can tell, the excuse of “you have to gain years of experience to understand my position” has been so overused that it provides extremely weak evidence whenever someone offers such a self-assessment.
I think what’s happening is a combination of the “unwilling” and “unable” factors:
Unwilling: You take a deep hit to your status anytime you provide others with enough knowledge to obviate your sage wisdom.
Unable: If you haven’t gotten into the programmer’s mindset, you’re all too quick to assume any problem has to be done manually and can’t be converted into steps so simple that a machine could do it. “Nah nah, to play good chess requires special intuition, there’s no way you can just break it down into a rulebook.”
Many of the types of activities that people claim require years of experience to master and cannot be easily communicated are, however, also the very types of activities that we currently have no good idea how to program a computer to master.
I’m a little skeptical of appeals to experience, but it does appear that there are certain skills that humans can master through practice yet cannot easily explain in words. Over time, computers are chipping away at the boundary of problems that require special human intuition, but there still appear to be genuine skills that cannot currently be easily taught or expressed in software.
Perhaps chess was a bad example, since it’s not the human that contains Deep Blue’s rulebook. But this excuse is used for more than just difficult AI problems; it’s used to justify moral intuitions, research inscrutability (like I thought I would see in grad school), and professional skills (like medical diagnosis, where fairly simple expert systems beat out real doctors, and where image analysis actually follows extremely simple algorithms doctors didn’t know they were using).
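To make “fairly simple” concrete: one classic finding in clinical-judgment research is that even a unit-weight linear model, which just counts positive findings and compares the total to a threshold, can match or beat expert judgment. The sketch below illustrates the shape of such a rule; the feature names and threshold are invented for illustration, not a real clinical rule.

```python
# Sketch of a unit-weight linear model: count how many yes/no findings
# are present and compare the count to a threshold. The features and
# threshold here are hypothetical, chosen only to show the structure.

def simple_diagnosis(findings, threshold=3):
    """findings: dict mapping feature name -> bool (finding present?).
    Returns True when at least `threshold` findings are present."""
    score = sum(1 for present in findings.values() if present)
    return score >= threshold

# Hypothetical patient with three of four findings present.
patient = {"fever": True, "cough": True, "fatigue": True, "rash": False}
simple_diagnosis(patient)  # meets the threshold of 3
```

The point is not that this particular rule is good medicine, but that the entire “expert system” fits in a few lines once the relevant features are named.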
In the majority of cases, it’s a simple matter of refusal to do the introspection necessary to identify what it is you’re actually thinking, and/or an unwillingness to take the hit to status (either because you lose your mystique or because you open yourself to scrutiny you don’t feel you deserve). “You” in the general sense, of course.
To give a particularly apt example, I was a libertarian for a long time before I first encountered rational justifications for laws and taboos regarding sex. And it’s because their proponents aren’t doing the introspection necessary to find these more general arguments.
The very fact that people found my point on homosexuality insightful, and that Morendil had never encountered the crucial underlying issue, despite (what I find to be) a sincere commitment to understanding it, is proof that a huge class of people (on a major debate, no less), were unjustifiably appealing to this inability to explain. That needs to stop.
While I don’t disagree with your general point, I think there are many cases where it’s not just a failure to introspect that makes it difficult for people to explain insights or skills that come from experience. Introspection just doesn’t work very well in such cases.
Your medical examples highlight this. The fact that doctors have difficulty explaining exactly what they are doing when they perform a diagnosis is not fixed simply by a bit of introspection. Reverse engineering the process or independently developing an algorithm with comparable performance is not trivial.
IIRC the biggest barrier, by far, to algorithmizing the doctor-priest class’s intuitive methods of analyzing CAT scans, making diagnoses, etc. was resistance from the doctors themselves and their professional organizations, not the inherent difficulty of identifying the algorithm.
I accept that one might not be able to convey the experiences that lead to the insight, but one should be able to state the insight ex cathedra so that relevant counterarguments (those not dependent on having gone through the experience) can be identified.
A characteristic example from the general population might be, “Hey, until you’ve actually lived through a plane crash like I have, you’ll never understand why I’m so skeptical about aviation safety protocols.” Well, no. Such an experience might give me a fear of flying, but that fear would be irrational. The fact that someone can’t directly impress that terror upon you does not substantiate their conclusion, and so the experience differential is irrelevant.
Chinese Neo-Confucian philosopher Cheng Yi suggested not only that this sort of thinking was reasonable, but indeed that the “personal experience” sort of knowledge (sometimes “genuine knowledge”) is superior, especially for moral behavior.
::shrug::
Heh, yeah, that Cheng Yi sure missed the mark, eh?
Wait—what was your point, again? :-/
My response to your obvious question was the ::shrug::. I try to let loose my esoteric knowledge where it seems appropriate, even though I didn’t see much of a point this time.
Perhaps, “Even respected philosophers have gotten this wrong.”
Oh. Sorry, I misunderstood. Carry on! :-)
Mind you, I do suspect the point you raise, while valid enough to consider, is nowhere near most (intolerant) people’s true rejection of homosexuality.
Agree with you that “you’ll get around to my view” could often be a reflex defense disguising “I know that I’m right but I can’t be bothered to examine my real reasons”.
I’d also like to point out that many homosexuals wish to have children (through one form of reproduction or another)… At least today this is the case; I cannot say whether it has always been so.
However, you are correct. It wouldn’t matter, as most people’s objections to homosexuality are based upon fear and disgust. Pity that...
That’s often too long to be reasonable, of course.
In some contexts, yes; in others—like where they claim it’s extremely important for you to believe, and took them years to get there—it’s quite a high rate of return to convey that information to you in an hour, and eminently reasonable to do so.
Also, I strongly suspect that most people in research positions haven’t truly made their knowledge part of themselves, and so they couldn’t ground it in its ultimate purpose (i.e. show how it relates to the rest of the world and show relevance to a layperson) even if they were given infinite time.
(Yes, I know I link that article a lot, because it’s good.)
If I were wrong, you wouldn’t see people so often fumbling through their explanations of how to use calculus and statistics properly in their fields, and you’d see researchers more often breaking their problem down into a purely mathematical one and handing it off to the experts at that. I remember reading an article recently showing that ecologists have only just gotten around to using adjacency-matrix eigenvectors (i.e., eigenvector centrality, the idea behind PageRank) to identify crucial species in an ecosystem.
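The eigenvector method mentioned above is itself a good example of a transferable idea: rank each node by the dominant eigenvector of the graph’s adjacency matrix, computed by simple power iteration. Here is a minimal sketch; the four-species “food web” is entirely made up for illustration, and the `eigenvector_centrality` helper is my own naming, not from the article.

```python
# Minimal sketch: rank nodes of a graph by the dominant eigenvector of
# its adjacency matrix (eigenvector centrality, the idea PageRank
# refines). The graph is a dict of node -> list of neighbours.

def eigenvector_centrality(adj, iterations=100):
    """Power iteration on (A + I); the identity shift avoids
    oscillation on bipartite graphs and leaves the dominant
    eigenvector unchanged. Returns a node -> score dict."""
    nodes = list(adj)
    score = {n: 1.0 for n in nodes}
    for _ in range(iterations):
        # Multiply by (A + I): each node keeps its own score plus
        # the scores of its neighbours.
        new = {n: score[n] + sum(score[m] for m in adj[n]) for n in nodes}
        norm = sum(new.values()) or 1.0
        score = {n: v / norm for n, v in new.items()}
    return score

# Hypothetical 4-species chain: grass - rabbit - fox - hawk.
web = {
    "grass":  ["rabbit"],
    "rabbit": ["grass", "fox"],
    "fox":    ["rabbit", "hawk"],
    "hawk":   ["fox"],
}
scores = eigenvector_centrality(web)
# The species in the middle of the chain score highest, i.e. their
# removal would be most disruptive.
```

Nothing here requires ecological training; the whole method reduces to repeated matrix-vector multiplication, which is exactly the point about handing problems to the mathematical experts.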