I’m familiar with the idea of cognitive predispositions, i.e., it is ‘easier’ to learn to fear a snake than to learn to fear a butterfly, and easier to learn to recognize a face than to recognize a word, so we’re not blank slates in any absolute sense. Still, I have trouble connecting this kind of biological predisposition with an upper bound on abstract intelligence—my naive intuition is that once you leave the realm of things that humans are pre-programmed to learn, all learning tasks of a given complexity are more or less equally difficult.
I’m sure there are biological limits on, e.g., how many thoughts I can keep in my head at once, or how many data points I can memorize in an hour, but I’m not sure what evidence there is that anyone has ever come close to bumping up against even one of these limits, let alone all of them. I like Socrates, but he didn’t have access to, e.g., modern mnemonics, or timed academic competitions, and so on.
Also, it seems likely that if you or I started to improve our thinking now, and worked at it diligently for, say, 20 hours a week, we would start to benefit from cyborg-style advances before we ran into hard limits on biological intelligence. E.g., Google already lets you outsource a fair amount of vocab-style memorization; Yelp has a Monocle feature that superimposes nearby restaurants over your ordinary view of the street; Wolfram Alpha solves general systems of complex equations expressed in more or less arbitrary notation; so, barring a general collapse, it should only be a few years before we get practical technology for doing a whole lot of ‘thinking’ with our silicon accessories, even without any breakthroughs in mind-machine interfaces.