Could a digital intelligence be bad at math?

One of the enduring traits that I see in most characterizations of artificial intelligences is the idea that an AI would have all of the skills that computers have. It’s often taken for granted that a general artificial intelligence would be able to perfectly recall information, instantly multiply and divide five-digit numbers, and handily defeat Garry Kasparov at chess. For whatever reason, the capabilities of a digital intelligence are always seen as encompassing the entire current skill set of digital machines.

But this belief is profoundly strange. Consider how much humans struggle to learn arithmetic. Basic arithmetic is really simple: you can build a bare-bones electronic calculator/arithmetic logic unit on a breadboard in a weekend. Yet humans commonly spend years learning how to perform those same simple operations. And the mental arithmetic equipment humans assemble at the end of all this is still relatively terrible: slow, labor-intensive, and prone to frequent mistakes.
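To make that concrete, here is a toy Python sketch of the same full-adder logic you would wire onto that breadboard: a handful of boolean gates is all that binary addition requires. The function names and bit conventions are mine, purely for illustration.

```python
# Toy sketch: binary addition out of nothing but AND/OR/XOR gates,
# the same logic a breadboard adder implements in hardware.

def full_adder(a, b, carry_in):
    """One-bit full adder built from basic gates."""
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

def ripple_carry_add(x_bits, y_bits):
    """Add two equal-length little-endian bit lists, one gate stage per bit."""
    carry, result = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    result.append(carry)
    return result

# 6 + 3 = 9: bits are little-endian, so 6 is [0, 1, 1] and 3 is [1, 1, 0]
print(ripple_carry_add([0, 1, 1], [1, 1, 0]))  # [1, 0, 0, 1], i.e. 9
```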

It is not totally clear why humans are this bad at math. It is almost certainly unrelated to brains computing with neurons instead of transistors. Based on personal experience and a cursory literature review, counting seems to rely primarily on identifying repeated structures in something like a linked list, and the result seems to be stored as verbal memory. When we first learn the most basic arithmetic, we rely on visual pattern matching, and as we do more math, the basic operations get stored in a look-up table in verbal memory. This is an absolutely bonkers way to implement arithmetic.
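As a loose caricature of that difference, the sketch below adds numbers the way the paragraph above describes: it consults a memorized table of one-digit facts and chains them together with a slow, column-by-column carrying procedure, instead of computing directly the way hardware does. The details are mine and purely illustrative, not a model of actual cognition.

```python
# Caricature of "human-style" addition: a memorized table of one-digit
# facts (the look-up table in verbal memory), glued together by an
# explicit, step-by-step carrying procedure.

MEMORIZED_SUMS = {(a, b): a + b for a in range(10) for b in range(10)}

def human_style_add(x, y):
    """Column-by-column addition using only memorized one-digit facts."""
    xs = [int(d) for d in reversed(str(x))]
    ys = [int(d) for d in reversed(str(y))]
    carry, digits = 0, []
    for i in range(max(len(xs), len(ys))):
        a = xs[i] if i < len(xs) else 0
        b = ys[i] if i < len(ys) else 0
        column = MEMORIZED_SUMS[(a, b)] + carry   # recall a drilled fact
        digits.append(column % 10)                # write down one digit
        carry = column // 10                      # remember the carry
    if carry:
        digits.append(carry)
    return int("".join(str(d) for d in reversed(digits)))

print(human_style_add(487, 2958))  # 3445, via many small memorized steps
```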

While humans may be generally intelligent, that general intelligence seems to be accomplished using some fairly inelegant kludges. We seem to have a preferred framework for understanding built on our visual and verbal systems, and we tend to shoehorn everything else into that framework. But there’s nothing uniquely human about that problem. It seems to be characteristic of learning algorithms in general: if our artificial learner started off by acquiring skills unrelated to math, it might learn arithmetic via a similarly convoluted process. Current digital machines do arithmetic extremely efficiently, but a digital mind that has to learn those patterns may arrive at a solution as slow and convoluted as the one humans rely on.
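To make that last point a little more concrete, here is a deliberately crude, purely hypothetical sketch of one way a learner can end up with such a kludge: it memorizes the examples it was drilled on instead of extracting the carrying algorithm, and falls back on laborious counting for anything outside its experience. Nothing here is meant as a model of any real learning system.

```python
# Hypothetical sketch of a learner that never extracts the addition
# algorithm: it stores drilled facts verbatim and otherwise counts.

class RoteLearner:
    def __init__(self):
        self.memorized = {}            # (x, y) -> answer, like drilled facts

    def study(self, x, y):
        self.memorized[(x, y)] = x + y

    def add(self, x, y):
        if (x, y) in self.memorized:   # fast path: a fact it happens to know
            return self.memorized[(x, y)]
        total = x                      # slow path: count up one at a time
        for _ in range(y):
            total += 1
        return total

learner = RoteLearner()
for a in range(10):
    for b in range(10):
        learner.study(a, b)            # it has only drilled one-digit sums

print(learner.add(3, 4))       # 7, by instant recall
print(learner.add(487, 2958))  # 3445, but via thousands of tiny steps
```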