[Question] Why so much variance in human intelligence?

Epistemic status: Practising thinking aloud. There might be an important question here, but I might be making a simple error.

There is a lot of variance in general competence between species. Here is the standard Bostrom/Yudkowsky graph to display this notion.

There’s a sense that while some mice are more genetically fit than others, they’re broadly all just mice, bound within a relatively narrow range of competence. Chimps should not be worried about most mice, in the short or long term, but they also shouldn’t worry especially about peak mice—there’s no incredibly strong or cunning mouse they ought to look out for.

However, my intuition is very different for humans. While I understand that humans are all broadly similar, and that a single human cannot have a complex adaptation that is not universal [1], I also believe that humans differ massively in cognitive capacities, in ways that can lead to major disparities in general competence. The difference between someone who understands calculus and someone who does not is the difference between someone who can build a rocket and someone who cannot. And I’ve tried to teach people that kind of math, and sometimes succeeded, and sometimes failed to even teach basic fractions.

I can try to operationalise my hypothesis: if average human intelligence in present-day society were lowered to an IQ of 75, that society could not build rockets or do much other engineering and science.

(Sidenote: I think the hope of iterated amplification is that this is false. That if I have enough humans with hard limits on how much thinking they can do, stacking lots of them can still produce all the intellectual progress we’re going to need. My initial thought is that this doesn’t make sense, because there are many intellectual feats, like writing a book or coming up with special relativity, that I generally expect individuals (situated within a conducive culture and institutions) to be much better at than groups of individuals (e.g. companies).

This is also my understanding of Eliezer’s critique: while it’s possible to get humans with hard limits on cognition to make mathematical progress, it’s by running an algorithm on them that they don’t understand, not one that they do understand; and only if they understand it do you get nice properties about them being aligned in the way you might feel many humans are today.

It’s likely I’m wrong about the motivation behind iterated amplification, though.)

This hypothesis doesn’t imply that someone who can do successful abstract reasoning is strictly more competent than a whole society of people who cannot. The Secret of Our Success describes how smart modern individuals stranded in forests failed to develop basic food-preparation techniques that other, primitive cultures had developed.

I’m saying that a culture with no people who can do calculus will in the long run score basically zero against the accomplishments of a culture with people who can.

One question is why we’re in a culture so precariously balanced on this split between “can take off to the stars” and “mostly cannot”. An idea I’ve heard is that a culture that is easily able to reach technological maturity will arise later than one that is barely able to, because evolution works over much longer timescales than culture and technological innovation. As such, if you observe yourself to be in a culture that is able to reach technological maturity, you’re probably “the stupidest such culture that could get there, because if it could be done at a stupider level then it would’ve happened there first.”

As such, we’re a species where, if we try as hard as we can, taking brains optimised for social coordination and making them do math, we can just about reach technological maturity (i.e. build nanotech, AI, etc.).

That may be true, but the question I want to ask is: what is it about humans, culture, and brains that allows for such high variance within our species, when the same isn’t true of mice and chimps? Something about this is still confusing to me. Like, if it is the case that some humans are able to do great feats of engineering, such as building rockets that land, and some aren’t, what’s the difference between these humans that causes such massive differences in outcome? Because, as above, it’s not some big complex genetic adaptation that some have and some don’t. I think we’re all running pretty similar genetic code.

Is there some simple amount of working memory that’s required to do complex recursion? Like, do 6 working-memory slots make things way harder than 7?

I can imagine that there are many hacks, and not a single thing. I’m reminded of the story of Richard Feynman learning to count time, where he’d practise being able to count out a whole minute. He’d do it while doing the laundry, while cooking breakfast, and so on. He later met the mathematician John Tukey, who could do the same, but they had some fierce disagreements. Tukey said you couldn’t do it while reading the newspaper, and Feynman said he could. Feynman said you couldn’t do it while having a conversation, and Tukey said he could. They then both surprised each other by doing exactly what they said they could.

It turned out Feynman was hearing numbers being spoken, whereas Tukey was visualising the numbers ticking over. So Feynman could still read at the same time, and Tukey could still listen and talk.

The idea here is that if you’re unable to use one type of cognitive resource, you may make up for it with another. This is probably analogous to the trade-offs between space and time in computational complexity.
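To make the analogy concrete, here’s a toy illustration of a space/time trade-off (my own example, not from the post): two ways to compute Fibonacci numbers, one that spends extra time recomputing subproblems and one that spends extra memory caching them.

```python
from functools import lru_cache

def fib_recompute(n: int) -> int:
    """Space-light, time-heavy: recompute every subproblem from scratch.

    Uses almost no extra memory, but takes exponential time.
    """
    if n < 2:
        return n
    return fib_recompute(n - 1) + fib_recompute(n - 2)

@lru_cache(maxsize=None)
def fib_cached(n: int) -> int:
    """Time-light, space-heavy: store every subproblem result.

    Takes linear time, at the cost of a linear-size cache.
    """
    if n < 2:
        return n
    return fib_cached(n - 1) + fib_cached(n - 2)

# Both strategies reach the same answer by different resource budgets.
assert fib_recompute(20) == fib_cached(20) == 6765
```

The point is just that the same output can be bought with different mixes of resources, much as Feynman and Tukey bought “counting a minute” with different mixes of auditory and visual machinery.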

So I can imagine different humans finding different hacky ways to build up the skill of doing very abstract truth-tracking thinking. Perhaps you have a little less working memory than average, but you have a great capacity for visualisation, and primarily work in areas that lend themselves to geometric/spatial thinking. Or perhaps your culture is especially conducive to abstract thought in some way.

But even if this is right, I’m interested in the details of what the key variables actually are.

What are your thoughts?

[1] Note: humans can lack important pieces of machinery.