No Really, Why Aren’t Rationalists Winning?

Reply to Extreme Rationality: It’s Not That Great; Extreme Rationality: It Could Be Great; The Craft and the Community; and Why Don’t Rationalists Win?

I’m going to say something which might be extremely obvious in hindsight:

If LessWrong had originally been targeted at and introduced to an audience of competent businesspeople and self-improvement and health buffs instead of an audience of STEM specialists and Harry Potter fans, things would have been drastically different. Rationalists would be winning.

Right now, rationalists aren’t winning. Rationality helps us choose which charities to donate to, and as Scott Alexander pointed out in 2009, it provides clarity-of-mind benefits. However, as he also pointed out in the same article, rationality doesn’t seem to be helping us win in our individual careers or in the interpersonal/social areas of life.

It’s been nearly ten years since then, and I have yet to see any sign that this has changed. I considered the possibility that I simply hadn’t heard about other rationalists’ practical successes, either because I didn’t become a rationalist until around 2015 or because no one was talking about them. Then I realized that was silly: if rationalists had started winning, at least one person would have posted about it here on lesswrong.com. I recently spoke to Scott Alexander, and he said he still agreed with everything he said in his article.

So rationalists aren’t winning. Why not? The Bayesian Conspiracy podcast (if I recall correctly) proposed the following explanation in one of its episodes: rationality can only help us improve a limited amount relative to where we started out. On this view, people who start out at a lower level of life success/cognitive functioning/talent cannot outperform non-rationalists who start out at a sufficiently high level.

This argument is fundamentally a cop-out. When others win in places where we fail, it makes sense to ask, “How? What knowledge, skills, qualities, or experience do they have that we don’t? And how might we obtain the same?” To say that others are simply more innately talented than we are, and leave it at that, doesn’t explain the mechanism behind their hypothesized greater rate of improvement after learning rationality. It tells us why but not how. And if there were such a mechanism, couldn’t we replicate it and improve more anyway?

So why aren’t we winning? What’s the actual mechanism behind our failure?

It’s because we lack some of the skills we need to win, not because we don’t want to win, and not because we’re lazy.

Rationalists are very good at epistemic rationality. But there’s this thing we’ve been referring to as “instrumental rationality” that we’re not so good at. I wouldn’t say it’s just one thing, though. Instrumental rationality seems like many different arts that we’re lumping together.

It’s more than that, though. We’re not just lumping together many different arts of rationality. As anyone who’s read the sequence A Human’s Guide to Words would know, categorization and labeling are not neutral actions for a human. By classifying all rationality as one of two types, epistemic or instrumental, we limit our thinking about rationality. As a result of this classification, we fail to acknowledge the true shape of rationality’s similarity cluster.

The cluster’s true shape is that of instrumental rationality: the art of winning, a.k.a. achieving your values. All rationality is instrumental, and epistemic rationality is merely one example of it. The art of epistemic rationality is how you achieve the value of truth. Up until now, “instrumental rationality” has been a catch-all term for the arts of winning at every other value.

While achieving the value of truth is extremely useful for achieving every other value, truth is still only one value among many. The skills needed to achieve other values are not the same as the skills needed to achieve the value of truth. That is to say, epistemic rationality comprises the skill sets useful for obtaining truth, and “instrumental rationality” comprises all the other skill sets.

Truth is a precious and valuable thing. It’s just not enough by itself to win in other areas of life.

That might seem obvious at face value. However, I’m not sure we understand that on a gut level.

I have the impression that many of us assume that so long as we have enough truth, everything else will simply fall into place: that we’ll do everything else right automatically, without needing to really develop or practice any other skills.

Perhaps that would be the case with enough computing power. An artificial superintelligence could play baseball extremely well with the following method:

1. Use math to calculate where the particles in the bat, the ball, the air, and all the players are moving.

2. Predict which particles have to be moved to and from what positions in order to cause a chain reaction that results in the goal state. In this case, the goal state would be a particle configuration that humans would identify as a won game of baseball.

3. Move the key particles to the key positions. If you fail to reach the goal state, adjust your priors accordingly and repeat the process.
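To make that loop concrete at a human scale, here is a minimal toy sketch in Python. It is my own illustration, not anything from the posts above: an agent whose model of the “physics” of its throw starts out wrong predicts, acts, and updates its priors until it reaches the goal state. Every name and number in it is invented.

```python
import math
import random

# Toy version of the predict-act-update loop above, scaled down to
# something a human-sized program can run. The agent throws at a target,
# but the "physics" of its throw (a calibration factor) is unknown to it.
TRUE_CALIBRATION = 0.55  # the hidden physics the agent must infer
TARGET = 35.0            # goal state: land the throw at this distance
TOLERANCE = 1.0          # how close counts as reaching the goal

# Discrete prior over the unknown calibration factor, initially uniform.
hypotheses = [h / 100 for h in range(40, 101)]
prior = {h: 1.0 / len(hypotheses) for h in hypotheses}

def likelihood(observed, power, h, noise=0.5):
    """How probable the observed distance is if hypothesis h were true."""
    predicted = power * h
    return math.exp(-((observed - predicted) ** 2) / (2 * noise ** 2))

def posterior_mean(dist):
    return sum(h * p for h, p in dist.items())

for attempt in range(1, 21):
    # 1. Predict: choose the action the current model says reaches the goal.
    best_guess = posterior_mean(prior)
    power = TARGET / best_guess
    # 2. Act, and observe what the world actually does.
    observed = power * TRUE_CALIBRATION + random.gauss(0, 0.5)
    if abs(observed - TARGET) < TOLERANCE:
        print(f"Goal state reached on attempt {attempt}")
        break
    # 3. Missed: adjust the priors accordingly and repeat the process.
    unnormalized = {h: p * likelihood(observed, power, h) for h, p in prior.items()}
    total = sum(unnormalized.values())
    prior = {h: p / total for h, p in unnormalized.items()}
```

The toy matters only for its shape: with enough computing power, the same predict-act-update loop could in principle be run directly at the particle level.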

An artificial superintelligence could perhaps navigate relationships, or discover important scientific truths, or do really anything else, all by this same method, provided it had enough time and computing power.

But humans are not artificial superintelligences. Our brains compress information into caches for easier storage. We will not succeed at life just by understanding particle physics, no matter how much reductionism we do. As humans, our beliefs are organized into categorical levels. Even if we know that reality itself is really all just one level, our brains don’t have the space to contain enough particle-level knowledge to succeed at life (assuming that particles really are the base level, but we’ll leave that aside for now). We need that knowledge compressed into different categorical levels, or we can’t use it.

This includes procedural knowledge like “how many particles need to be moved to and from what positions to win a game of baseball”. If our brains were big enough to hold that knowledge, then all we would need to do to win is obtain it and output the correct choice.

For an artificial superintelligence, having enough relevant knowledge would be all it needs to make optimal decisions according to its values.

For a human, given the limits of human brains, having enough relevant knowledge isn’t the only thing needed to make better decisions. Having more knowledge can be extremely useful for achieving one’s other goals besides knowledge for knowledge’s sake, but only if one has the motivation, skills, and experience to leverage that knowledge.

Current rationalists are really good at obtaining knowledge, at least when we manage to apply ourselves. But we’re failing to leverage that knowledge. For instance, we ought to be dominating prediction markets and stock markets and producing a disproportionately high number of superforecasters, to the point where other people notice and take an interest in how we managed to achieve such a thing.

In fact, betting in prediction markets and stock markets provides an external criterion for measuring epistemic rationality, just as martial arts success can be measured by the external criterion of hitting your opponent.
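If we wanted to keep that score explicitly, one standard external criterion from the forecasting literature is the Brier score: the mean squared difference between the probabilities you assigned and what actually happened. Here is a minimal sketch in Python; the track record in it is made up for illustration.

```python
def brier_score(forecasts):
    """forecasts: (probability_assigned, outcome) pairs, where outcome is
    1 if the event happened and 0 if it didn't. Lower is better; always
    answering 50% scores exactly 0.25."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# A made-up track record of three resolved bets:
track_record = [
    (0.9, 1),  # said 90%, it happened
    (0.7, 0),  # said 70%, it didn't
    (0.2, 0),  # said 20%, it didn't
]
print(brier_score(track_record))  # 0.18, i.e. better than chance
```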

So why haven’t we been dominating prediction and stock markets? Why aren’t we dominating them right now?

In my own case, I’m still an undergraduate college student living largely off of my parents’ income. I can’t afford to bet on things, since I don’t have enough money of my own, and my income is so irregular and hard to predict that budgeting is difficult. I would also need to explain the expense to my mother if I started betting. If I did have more money of my own, though, I definitely would spend some of it on this. Do a lot of other people here have such extenuating circumstances? Somehow that would feel like too much of a coincidence.

It’s more likely that many of us haven’t learned the instrumental skills needed to get ourselves to go out and bet. Such skills might include time management, to set aside time to bet, or interpersonal/communication skills, to make sure the terms of the bets are clear and that we’re only betting against those who will abide by the terms once they’re set.

Prediction markets and stock markets aren’t the only opportunities that rationalists are failing to take advantage of. For example, our community almost entirely neglects public relations, despite its potential to significantly increase staff and funds for the causes we care about by raising the sanity waterline. We need better interpersonal/communication skills for interacting with the general public, and we need to learn to be more pragmatic so that we can actually get ourselves to do that instead of succumbing to an irrational, deep-seated fear of appearing cultish.

Competent businesspeople and self-improvement and health buffs do have those skills. We don’t. That’s why we’re not winning.

In short, we need arts of rationality for the pursuit of values beyond mere truth. One of my friends who has read the Sequences has spent years beginning to map out those other arts, and he recently presented his work to me. It’s really interesting. I hope you find it useful.

(Note: Said friend will be introducing himself here and writing a sequence about his work later. When he does, I will add the links here.)