“Because it’s easy for us, it might be tempting to think it’s an inherently easy task, one that shouldn’t require hardly any brain matter to perform.”
This is a very general lesson, the depth and applicability of which can scarcely be overstated. A few thoughts:
1) In its more banal form it plagues us as the Curse of Knowledge. I’m an English teacher in South Korea, and despite six months on the job I have to constantly remind myself that just because it’s easy for me to say “rollerskating lollipops” doesn’t mean it’s inherently easy. Because of my English fluency it’s literally a challenge to speak slowly enough that my less advanced students can hear each letter. This is, I think, one of the rationality lessons that gets baked into you by time spent immersed in another culture.
2) Far be it from me to speculate confidently on AI in present company, but it seems to me that one of the principal hurdles in AI development was and is actually appreciating how complicated human functioning is. The current work of the SIAI on FAI appears to this outsider to be an extension of this half-century-long program. Human value, like human speech and human object recognition and everything else, is far more complicated than it seems introspectively. Better to get a grasp on our own goal architecture before we create an Indifferent Alien God with a human-incompatible ontology that re-appropriates our matter to tile the solar system in a computational substrate.
3) To veer down a tangent a bit, a similar phenomenon is at play in various religious arguments with which I’m familiar. People like William Lane Craig sneak in the enormously complex hypothesis of “God”, claiming that it is the best explanation for the facts and dressing it up with a lot of hand-waving about “Occam’s razor” and “not multiplying entities beyond necessity”. Elsewhere on this blog Eliezer pointed out in a different context that one concept isn’t simpler by virtue of being more familiar. Humans don’t think smoothly about calculus and cosmology the same way they do about conscious agents, so “GOD” just feels like the most parsimonious explanation.
We must always remember that concepts and actions are not necessarily simple just because we all use them naturally and fluently. Our brains have evolved to enable just such feats.
Succeeded at the first attempt. :-) (Now, “red lorry, yellow lorry”—that’s hard.)
You have an easy job compared to Koreans whose language doesn’t have distinct phonemes for /r/ and /l/.
Yeah, but my native language’s /r/ is not quite the same as English /r/. (I have accidentally used the former when saying “where is” and been misunderstood by native speakers as saying “what is” as a result.)