Orthogonality

TL;DR: I used to think the best way to get really good at a skill $s_i$ was to specialize by investing lots of time $t_i$ into $s_i$. I was wrong. Investing lots of time into $s_i$ works only as a first-order approximation. Once $t_i$ becomes large, investing time into some other skill $s_{j \ne i}$ produces greater real-world performance at $s_i$ than continued investment in $s_i$ does.


I like to think of intelligence as a vector $\vec{s}$ where each component $s_i$ is a skill level in a different skill. I think of general intelligence as the Euclidean norm $|\vec{s}| = \sqrt{\sum_i s_i^2}$.
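
A minimal sketch of this model in Python, with made-up skill names and levels:

```python
import math

# Hypothetical skill vector; the names and numbers are invented.
skills = {"math": 7.0, "writing": 5.0, "music": 2.0, "cooking": 1.0}

# General intelligence as the Euclidean norm of the skill vector.
g = math.sqrt(sum(level ** 2 for level in skills.values()))
print(f"g = {g:.2f}")  # g = 8.89
```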

I use the Euclidean norm instead of the straight sum because generality of experience equals generality of transference. Suppose you are exposed to a novel situation requiring skill $s_i$. You have no experience at $s_i$, so you must borrow from your most similar skill $s_j$. The wider a variety of skills you have, the more similar your most similar skill $s_j$ will be to $s_i$.
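
Here is a toy illustration of that transference story, in which skills occupy a hypothetical feature space and a novel task borrows from the nearest existing skill, discounted by distance. Every feature, level, and skill name below is invented for illustration:

```python
# Toy transference model: (feature vector, skill level) per skill.
skills = {
    "piano":   ((0.9, 0.1), 6.0),
    "writing": ((0.2, 0.8), 5.0),
    "chess":   ((0.6, 0.5), 3.0),
}

def borrowed_performance(novel):
    """Find the nearest existing skill and borrow its level, discounted by distance."""
    def dist(features):
        return sum((a - b) ** 2 for a, b in zip(features, novel)) ** 0.5
    name, (features, level) = min(skills.items(), key=lambda kv: dist(kv[1][0]))
    return name, level / (1 + dist(features))

# A novel task that sits close to "piano" in feature space:
print(borrowed_performance((0.8, 0.2)))  # ('piano', ~5.26)
```

The more skills in the dictionary, the closer the nearest one tends to sit to any novel point, which is the sense in which breadth buys transference.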

The best way to increase your general intelligence is to invest time into your weakest skill $s_w$. If the time $t_b$ invested in your strongest skill $s_b$ is already high, then investments in $t_w$ can also increase the real-world performance of your strongest skill $s_b$ faster than investments in $t_b$ can.
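
One way this can work out, assuming a made-up growth law $s_i = \ln(1 + t_i)$ (the post does not specify one): the marginal gain to the norm from an hour on skill $i$ is $\frac{\partial |\vec{s}|}{\partial t_i} = \frac{s_i}{|\vec{s}|} \cdot \frac{1}{1 + t_i}$, and the $\frac{1}{1 + t_i}$ factor makes under-invested skills the better buy:

```python
import math

def norm_gain_per_hour(times, i):
    """Marginal increase in |s| from one more hour on skill i,
    under the assumed growth law s_i = ln(1 + t_i)."""
    s = [math.log(1 + t) for t in times]
    norm = math.sqrt(sum(x * x for x in s))
    return (s[i] / norm) * (1 / (1 + times[i]))

times = [1000.0, 5.0]  # hours already invested: strong skill, weak skill
print(norm_gain_per_hour(times, 0))  # ~0.00097 per hour on the strong skill
print(norm_gain_per_hour(times, 1))  # ~0.042   per hour on the weak skill
```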

Suppose you want to increase $p_i$, your real-world performance at $s_i$. Investing time $t_i$ into $s_i$ always increases $p_i$. But eventually you will hit diminishing returns: for every $\epsilon > 0$ there exists a $\tau$ such that if $t_i > \tau$ then $\frac{dp_i}{dt_i} < \epsilon$.
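
To make that concrete, suppose (purely as a hypothetical growth law, nothing in the argument depends on it) that performance grows like $p_i(t_i) = \sqrt{t_i}$. Then

$$\frac{dp_i}{dt_i} = \frac{1}{2\sqrt{t_i}}, \qquad \text{so for any } \epsilon > 0, \quad t_i > \tau = \frac{1}{4\epsilon^2} \implies \frac{dp_i}{dt_i} < \epsilon.$$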

Here’s where things get interesting. “All non-trivial abstractions, to some degree, are leaky” and a system is only as secure as its weakest link; cracking a system tends to happen on an overlooked layer of abstraction. All real-world applications of a skill are non-trivial abstractions. Therefore performance in one skill occasionally leaks over to improve performance in adjacent skills. Your real-world performance at $s_i$ leaks over from adjacent skills on the rungs above and below $s_i$ on the ladder of abstraction.

These adjacent skills increase your real-world performance $p_i$ by a quantity independent of $t_i$. Since $\lim_{t_i \to \infty} \frac{dp_i}{dt_i} = 0$, there will inevitably come a time when increasing $t_i$ increases $p_i$ less than increasing some $t_{j \ne i}$ does.
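
A numeric sketch of that crossover, again assuming logarithmic skill growth and a constant leakage coefficient (both my assumptions, not the post’s): with $p_1 = s_1 + \ell \, s_2$, direct investment wins early, but once $t_1$ is large enough the leaked gain from $t_2$ dominates.

```python
LEAK = 0.1  # fraction of an adjacent skill that leaks into p_1 (invented)

def p1_gains(t1, t2):
    """Marginal gains to p_1 = s_1 + LEAK * s_2, with s_i = ln(1 + t_i),
    so dp1/dt1 = 1/(1 + t1) and dp1/dt2 = LEAK/(1 + t2)."""
    return 1 / (1 + t1), LEAK / (1 + t2)

for t1 in (10, 100, 1000):
    direct, leaked = p1_gains(t1, t2=10)
    better = "t_1" if direct > leaked else "t_2"
    print(f"t_1={t1:4}: dp1/dt1={direct:.4f}  dp1/dt2={leaked:.4f}  -> invest in {better}")
```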

It follows that the number of avocations correlates positively with winning Nobel Prizes, despite the time these hobbies take away from one’s specialization.

When I want to improve my ability to write machine learning algorithms, my first instinct is to study machine learning. But in practice, it’s often more profitable to do something seemingly unrelated, like learning about music theory. I find it hard to follow this strategy because it is so counterintuitive.