# Two Truths and a Lie

Response to Man-with-a-hammer syndrome.

It’s been claimed that there is no way to spot Affective Death Spirals, or cultish obsession with the One Big Idea of Everything. I’d like to posit a simple way to spot such errors, with the caveat that it may not work for every case.

There’s an old game called Two Truths and a Lie. I’d bet almost everyone’s heard of it, but I’ll summarize it just in case. A person makes three statements, and the other players must guess which of those statements is false. The statement-maker gets points for fooling people; people get points for not being fooled. That’s it. I’d like to propose a rationalist’s version of this game that should serve as a nifty check on certain Affective Death Spirals, runaway Theory-Of-Everythings, and Perfectly General Explanations. It’s almost as simple.

Say you have a theory about human behaviour. Get a friend to do a little research and assert three factual claims about how people behave that your theory would realistically apply to. At least one of these claims must be false. See if you can explain every claim using your theory before learning which one’s false.

If you can come up with a convincing explanation for all three statements, you must be very cautious when using your One Theory. If it can explain falsehoods, there’s a very high risk you’re going to use it to justify whatever prior beliefs you have. Even worse, you may use it to infer facts about the world, even though it is clearly not consistent enough to do so reliably. You must exercise the utmost caution in applying your One Theory, if not abandon reliance on it altogether. If, on the other hand, you can’t come up with a convincing way to explain some of the statements, and those turn out to be the false ones, then there’s at least a chance you’re on to something.

Come to think of it, this is an excellent challenge to any proponent of a Big Idea. Give them three facts, some of which are false, and see if their Idea can discriminate. Just remember to be ruthless when they get it wrong; it doesn’t prove their idea is totally wrong, only that reliance upon it would be.
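The logic of the test can be sketched as a toy simulation. Everything here is hypothetical: a “vacuous” theory that can explain any statement carries no information about which one is the lie, so it can only guess, while a theory with genuine discriminating power (the 90% hit rate below is an arbitrary assumption) spots the lie far more often than chance.

```python
import random

def play_round(detect_lie):
    """One round: three claims, one false; the theory tries to spot the lie."""
    lie = random.randrange(3)          # which of the three claims is false
    return detect_lie(lie) == lie

def vacuous_theory(lie):
    """A theory that can 'explain' anything gains no information: it guesses."""
    return random.randrange(3)

def sharp_theory(lie):
    """A theory with real predictive power flags the false claim 90% of the time."""
    return lie if random.random() < 0.9 else random.randrange(3)

random.seed(0)
n = 100_000
for name, theory in [("vacuous", vacuous_theory), ("sharp", sharp_theory)]:
    wins = sum(play_round(theory) for _ in range(n))
    print(f"{name}: spots the lie {wins / n:.2f} of the time")
```

The vacuous theory converges to the 1-in-3 chance rate; passing the test consistently is evidence of the discriminating power the post is asking for.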

Edited to clarify: My argument is not that one should simply abandon a theory altogether. In some cases abandonment may be justified: if all the theory has going for it is its predictive power, and you show it lacks that, toss it. But in the case of broad, complex theories that actually can explain many divergent outcomes, this exercise should teach you not to rely on that theory as a means of inference. Yes, you should believe in evolution. No, you shouldn’t make broad inferences about human behaviour without any data just because they are consistent with evolution, unless your application of the theory of evolution is so precise and well-informed that you can consistently pass the Two-Truths-and-a-Lie Test.

• It’s an interesting experiment, and probably a good exercise under controlled conditions for teaching people about falsificationism, but real theories are too complex, and theories about human behavior are way too complex.

Take the “slam dunk” theory of evolution. If “Some people and animals are homosexual” was in there, I’d pick that as the lie without even looking at the other two (well, if I didn’t already know). There are some okay explanations of how homosexuality might fit into evolution, but they’re not the sort of thing most people would start thinking about unless they already knew homosexuality existed.

(Another example: plate tectonics and “Hawaii, right smack in the middle of a huge plate, is full of volcanoes”.)

• Take the “slam dunk” theory of evolution. If “Some people and animals are homosexual” was in there, I’d pick that as the lie without even looking at the other two (well, if I didn’t already know).

A rationalist ends up being wrong sometimes, and can only hope for well-calibrated probabilities. I think that, in the absence of observation, this is the sort of prediction that most human-level intelligences would end up getting wrong, and I wouldn’t necessarily assume they were making any errors of rationality in doing so, but rather hitting the 1 out of 20 occasions when a 5% probability occurs.

• it doesn’t prove their idea is totally wrong, only that reliance upon it would be.

As that bit shows, I agree completely. But while evolution is correct, you can’t use it to go around making broad factual inferences. While you should believe in evolution, you shouldn’t go around making statements like, “There are no homosexuals,” or “Every behaviour is adaptive in a fairly obvious way,” just because your theory predicts it. This exercise properly demonstrates that while the theory is true in a general sense, broad inferences based on a simplistic model of it are not appropriate.

• But evolution really does make homosexuality less likely to occur. If given a set of biological statements like “some animals are homosexual” together with the theory of evolution, you will be able to get many more true/false labelings correct than if you did not have the theory of evolution. Sure, you’ll get that one wrong, but you’ll still get a lot more right than you otherwise would. (I read part of a book, in fact, whose title I can’t remember although I just tried awhile to look it up, about evolution, from a professor who teaches evolution, and the thesis was that armed only with the theory of evolution, you can correctly answer a large number of biological questions without knowing anything about the species involved.)

With complex theories and complex truths, you get statistical predictive value, rather than perfection. That doesn’t mean that testing your theories on real data (the basic idea behind this post) is a bad thing! It just means you need a larger data set.
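The “larger data set” point can be made quantitative with a quick binomial calculation. The 70% accuracy figure below is an arbitrary stand-in for a theory with real but imperfect predictive power: on three claims, blind guessing matches its typical score half the time; on fifty, essentially never.

```python
from math import comb

def p_chance_matches(n, k, p=0.5):
    """Probability that blind true/false guessing gets at least k of n labels right."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# A hypothetical theory that labels claims correctly 70% of the time:
for n in (3, 10, 50):
    k = round(0.7 * n)  # the theory's typical score on n claims
    print(f"n={n}: chance of luck matching {k}/{n} = {p_chance_matches(n, k):.4f}")
```

So a statistically useful theory can easily fail a three-statement round, yet still be clearly distinguishable from guessing once the test is long enough.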

• Also: “the human eye sees objects in incredible detail, but a third of people’s eyes can’t effectively see stuff when it’s a few feet away”. Wtf.

Anyone got any insight about eyes or homos?

• AFAIK, myopia seems to be caused, at least in part, by spending a lot of time focusing on close objects (such as books, computer screens, blackboards, walls, etc.); it’s the result of another mismatch between the environment we live in and our genes. (Although it’s fairly easily corrected, so there’s not really any selection pressure against it these days.)

• According to the studies referenced by the Wikipedia article, this is disputed and even if true would be, at most, a contributing factor active only in some of the cases. Even with no “near-work” many people would be myopic.

• According to the WP article’s section on epidemiology, possibly more than half of all people have a very weak form of myopia (0.5 to 1 diopters). The general prevalence (as much as a third of the population for significant myopia) is much bigger than could be explained solely by the proposed correlations (genetic or environmental).

To me this high prevalence and smooth distribution (in degree of myopia) suggests that it should just be treated as a weakness or a disease. We shouldn’t act surprised that such exist. It doesn’t even mean that it’s not selected against, as CronoDAS suggested (that would only be true within the last 50-100 years). Just that the selection isn’t strong enough and hasn’t been going on long enough to eliminate myopia. (With 30-50% prevalence, it would take quite strong selection effects.)

Why are you surprised that such defects exist? The average human body has lots of various defects. Compare: “many humans are physically incapable of the exertions required by the life of a professional Roman-era soldier, and couldn’t be trained for it no matter how much they tried.”

Maybe we should be surprised that so few defects exist, or maybe we shouldn’t be surprised at all—how can you tell?

• The two factors this suggests to me, over that time period, are “increase in TV watching among young children” and “change in diet toward highly processed foods high in carbohydrates”. This hypothesis would also predict the finding that myopia increased faster among blacks than among whites, since these two factors have been stronger in poorer urban areas than in wealthier or more rural ones.

Hypotheses aside, good find!

• change in diet toward highly processed foods high in carbohydrates

Has this happened since 1970?

(The article suggests “computers and handheld devices.”)

• It didn’t begin then, but it certainly continued to shift in that direction. IIRC from The Omnivore’s Dilemma, it was under Nixon that massive corn subsidies began and vast corn surpluses became the norm, which led to a frenzy of new, cheap high-fructose-corn-syrup-based products as well as the use of corn for cow feed (which, since cows can’t digest corn effectively, led to a whole array of antibiotics and additives as the cheap solution).

Upshot: I’d expect that the diet changes in the 1970s through 1990s were quite substantial, that e.g. sodas became even cheaper and more ubiquitous, etc.

• The surprise is that an incredibly highly selection-optimized trait isn’t selection-optimized to work at all in a surprising fraction of people (including myself). So many bits of optimization pressure exerted, only to choke on the last few.

• Well then it’s not all that highly selection-optimized. The reality is that many people do have poor eyesight and they do survive and reproduce. Why do you expect stronger selection than is in fact the case?

• Look, for thousands of generations, natural selection applied its limited quantity of optimization pressure toward refining the eye. But now it’s at a point where natural selection only needs a few more bits of optimization to effect a huge vision improvement by turning a great-but-broken eye into a great eye.

The fact that most people have fantastic vision shows that this trait is high utility for natural selection to optimize. So it’s astounding that natural selection doesn’t think it’s worth selecting for working fantastic eyes over broken fantastic eyes, when that selection only takes a few bits to make. Natural selection has already proved its willingness to spend way more bits on way less profound vision improvements, get it?

As Eliezer pointed out, the modern prevalence of bad vision is probably due to developmental factors specific to the modern world.

• Just because you can imagine a better eye, doesn’t mean that evolution will select for it. Evolution only selects for things that help the organisms it’s acting on produce children and grandchildren, and it seems at least plausible to me that perfect eyesight isn’t in that category, in humans. Even before we invented glasses, living in groups would have allowed us to assign the individuals with the best eyesight to do the tasks that required it, leaving those with a tendency toward nearsightedness to do less demanding tasks and still contribute to the tribe and win mates. In fact, in such a scenario it may even be plausible for nearsightedness to be selected for: It seems to me that someone assigned to fishing or planting would be less likely to be eaten by a tiger than someone assigned to hunting.

• First of all, I’m not “imagining a better eye”; by “fantastic eye” I mean the eye that natural selection spent 10,000 bits of optimization to create. Natural selection spent 10,000 bits for 10 units of eye goodness, then left 1/3 of us with a 5 bit optimization shortage that reduces our eye goodness by 3 units.

So I’m saying, if natural selection thought a unit of eye goodness is worth 1,000 bits, up to 10 units, why in modern humans doesn’t it purchase 3 whole units for only 5 bits—the same 3 units it previously purchased for 3,333 bits?

I am aware of your general point that natural selection doesn’t always evolve things toward cool engineering accomplishments, but your just-so story about potential advantages of nearsightedness doesn’t reduce my surprise.

Your strength as a rationalist is to be more confused by fiction than by reality. Making up a story to explain the facts in retrospect is not a reliable algorithm for guessing the causal structure of eye-goodness and its consequences. So don’t increase the posterior probability of observing the data as if your story were evidence for it—stay confused.

• So I’m saying, if natural selection thought a unit of eye goodness is worth 1,000 bits, up to 10 units, why in modern humans doesn’t it purchase 3 whole units for only 5 bits—the same 3 units it previously purchased for 3,333 bits?

Perhaps, in the current environment, those 3 units aren’t worth 5 bits, even though at one point they were worth 3,333 bits. (Evolution thoroughly ignores the sunk cost fallacy.)

This suggestion doesn’t preclude other hypotheses; in fact, I’m not even intending to suggest that it’s a particularly likely scenario—hence my use of the word plausible rather than anything more enthusiastic. But it is a plausible one, which you appeared to be vigorously denying was even possible earlier. Disregarding hypotheses for no good reason isn’t particularly good rationality, either.

• Why are you surprised that such defects exist?

A priori, I wouldn’t have expected such a high-resolution retina to evolve in the first place, if the lens in front of it wouldn’t have allowed one to take full advantage of it anyway. So I would have expected the resolving power of the lens to roughly match the resolution of the retina. (Well, oversampling can prevent moiré effects, but how likely was that to be an issue in the EEA?)

• That may be diet, not evolutionary equilibrium.

• I like the cuteness of turning an old parlor game into a theory-test. But I suspect a more direct and effective test would be to take one true fact, invert it, and then ask your test subject which statement fits their theory better. (I always try to do that to myself when I’m fitting my own pet theory to a new fact I’ve just heard, but it’s hard once I already know which one is true.)

Other advantages of this test over the original one proposed in the post: (1) You don’t have to go to the trouble of thinking up fake data (a problematic endeavor, because there is some art to coming up with a realistic-sounding false fact—and also because you actually have to do some research to make sure that you didn’t generate a true fact by accident). (2) Your test subject only has a 1 in 2 shot at guessing right by chance, as opposed to a 2 in 3 shot.
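Taking the commenter’s chance-success rates at face value (2 in 3 for the original design, 1 in 2 for the inverted-fact variant), a one-line calculation shows how many independent rounds are needed before passing every round by pure luck drops below, say, 5%.

```python
from math import ceil, log

def rounds_to_filter_luck(p_chance, alpha=0.05):
    """Rounds needed before passing every round by luck alone drops below alpha."""
    return ceil(log(alpha) / log(p_chance))

print(rounds_to_filter_luck(2 / 3))  # original design: 8 rounds
print(rounds_to_filter_luck(1 / 2))  # inverted-fact variant: 5 rounds
```

Either design filters luck with repetition; the two-option variant just gets there faster.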

• I think you oversell the usefulness of this test, both because of how hard it is to make predictions about unrepeatable “experiments” that don’t include value-judgments and because of how easy it is to game the statements—imagine:

(A) the false statement being selected to be false for extraneous reasons, and (B) the proponent of the Big Idea arguing (A) when it isn’t true.

Let’s say my friend and I are doing this test. His Big Idea is signaling; my task is to construct three statements.

1) Men who want to mate spend a lot of money. (Signaling resources!)
2) Women who want to mate volunteer. (Signaling nurturing!)
3) Children often share with each other, unprompted, while young. (Signaling cooperation to parents!)

Well, obviously #3 isn’t right because of other concerns—it turns out competing for, and hoarding, resources has been evolutionarily more successful than signaling social fitness. Does that mean signaling as an idea isn’t useful? No; it wrongly explained (3) for a valid reason. (3) is false for reasons unrelated to signaling.

• Psychohistorian doesn’t say the idea isn’t useful, just that reliance on it is incorrect. If the theory is “people mostly do stuff because of signalling”, honestly, that’s a pretty crappy theory. Once Signalling Guy fails this test, he should take that as a sign to go back and refine the theory, perhaps to

“People do stuff because of signalling when the benefit of the signal, in the environment of evolutionary adaptation, was worth more than its cost.”

This means that making predictions requires estimating the cost and benefit of the behavior in advance, which requires a lot more data and computation, but that’s what makes the theory a useful predictor instead of just another bogus Big Idea.

Not to point fingers at Freakonomics fans (not least because I’m guilty of this myself in party conversation) but it’s real easy to look at a behavior that doesn’t seem to make sense otherwise and say “oh, duh, signalling”. The key is that the behavior doesn’t make sense otherwise: it’s costly, and that’s an indication that, if people are doing it, there’s a benefit you’re not seeing. That technique may be helpful for explaining, but it’s not helpful for predicting since, as you pointed out, it can explain anything if there’s not enough cost/benefit information to rule it out.

• it’s real easy to look at a behavior that doesn’t seem to make sense otherwise and say “oh, duh, signalling”. The key is that the behavior doesn’t make sense otherwise: it’s costly, and that’s an indication that, if people are doing it, there’s a benefit you’re not seeing.

People do all sorts of insane things for reasons other than signaling, though. Like because their parents did it, or because the behavior was rewarded at some point.

Of course, signaling behavior is often rewarded, due to it being successful signaling… which means it might be more accurate to say that people do things because they’ve been rewarded at some point for doing them, and it just so happens that signaling behavior is often rewarded.

(Which is just the sort of detail we would want to see from a good theory of signaling—or anything else about human behavior.)

Unfortunately, the search for a Big Idea in human behavior is kind of dangerous. Not just because a big-enough idea gets close to being tautological, but also because it’s a bad idea to assume that people are sane or do things for sane reasons!

If you view people as stupid robots that latch onto and imitate the first patterns they see that produce some sort of reward (as well as freezing out anything that produces pain early on) and then stubbornly refuse to change despite all reason, then that’s definitely a Big Idea enough to explain nearly everything important about human behavior.

We just don’t like that idea because it’s not beautiful and elegant, the way Big Ideas like evolution and relativity are.

(It’s also not the sort of idea we’re looking for, because we want Big Ideas about psychology to help us bypass any need to understand individual human beings and their tortured histories, or even look at what their current programming is. Unfortunately, this is like expecting a Theory of Computing to let us equally predict obscure problems in Vista and OS X, without ever looking at the source code or development history of either one.)

• (It’s also not the sort of idea we’re looking for, because we want Big Ideas about psychology to help us bypass any need to understand individual human beings and their tortured histories, or even look at what their current programming is. Unfortunately, this is like expecting a Theory of Computing to let us equally predict obscure problems in Vista and OS X, without ever looking at the source code or development history of either one.)

So do you think e.g. overcoming akrasia necessitates understanding your self-programming via a set of decent algorithms for doing so (e.g. what Less Wrong is for epistemic rationality) that allow you to figure out for yourself whatever problems you may have? That would be a little worrying insofar as something like akrasia might be similar to a blue screen of death in your Theory of Computing example: a common failure mode resulting from any number of different problems that can only be resolved by the application of high-level learned algorithms that most people simply don’t have and never bother to find, and those who do find are unable to succinctly express in such a way as to be memetically fit.

On top of that, similar to how most people never notice that they’re horrible epistemic rationalists and that there is a higher standard to which they could aspire, most good epistemic rationalists themselves may at least notice that they’re sub-par along many dimensions of instrumental rationality and yet completely fail to be motivated to do anything about it: they pride themselves on being correct, not being successful, in the same way most people pride themselves on their success and not their correctness (by gerrymandering their definition of correctness to be success, like rationalists may gerrymander their definition of success to be correctness, resulting in both of them losing by either succeeding at the wrong things or failing to succeed at the right things).

• So do you think e.g. overcoming akrasia necessitates understanding your self-programming via a set of decent algorithms for doing so (e.g. what Less Wrong is for epistemic rationality) that allow you to figure out for yourself whatever problems you may have?

Yes; see here for why.

Btw, it would be more accurate to speak of “akrasias” as individual occurrences, rather than “akrasia” as a non-countable. One can overcome an akrasia, but not “akrasia” in some general sense.

they pride themselves on being correct, not being successful

Yep, major failure mode. Been there, done that. ;-)

• Btw, it would be more accurate to speak of “akrasias” as individual occurrences, rather than “akrasia” as a non-countable. One can overcome an akrasia, but not “akrasia” in some general sense.

I bet you think the war on terror is a badly framed concept.

• I’d like to see this expanded into a post.

• Signaling behavior is often rewarded, due to it being successful signaling… which means it might be more accurate to say that people do things because they’ve been rewarded at some point for doing them, and it just so happens that signaling behavior is often rewarded.

The evolutionary/signaling explanation is distinct from the rewards/conditioning explanation, because the former says that people are predisposed to engage in behaviors that were good signaling in the ancestral environment whether or not they are rewarded today.

• The evolutionary/signaling explanation is distinct from the rewards/conditioning explanation, because the former says that people are predisposed to engage in behaviors that were good signaling in the ancestral environment whether or not they are rewarded today.

As a practical matter of evolution, signal-detection has to evolve before signal-generation, or there’s no benefit to generating the signal. And evolution likes to reuse existing machinery, e.g. reinforcement.

In practice, human beings also seem to have some sort of “sociometer”, a sense of “how other people probably see me”, so signaling behavior can be reinforcing even without others’ direct interaction.

It’s very unparsimonious to assume that specific human signaling behaviors are inborn, given that there are such an incredible number of such behaviors in use. Much easier to assume that signal detection and self-reflection add up to standard reinforcement, as signal-detection and self-reflection are independently useful, while standalone signaling behaviors are not.

• As a practical matter of evolution, signal-detection has to evolve before signal-generation, or there’s no benefit to generating the signal

Er?
This seems to preclude cases where pre-existing behaviors are co-opted as signals.
Did you mean to preclude such cases?

• This seems to preclude cases where pre-existing behaviors are co-opted as signals. Did you mean to preclude such cases?

Bleah. I notice that I am confused. Or at least, confusing. ;-)

What I was trying to say was that there’s no reason to fake (or enhance) a characteristic or behavior until after it’s being evaluated by others. So the evolutionary process is:

1. There’s some difference between individuals that provides useful information

2. A detector evolves to exploit this information

3. Selection pressure causes faking of the signal

This process is also repeated in memetic form, as well as genetic form. People do a behavior for some reason, people learn to use it to evaluate, and then other people learn to game the signal.

• Ah, gotcha. Yes, that makes sense.

• It is very unparsimonious to assume that specific human signaling behaviors are inborn, given that there are such an incredible number of such behaviors in use.

I agree that the vast majority of specific human behaviors, signaling or otherwise, are learned, not inborn, as an Occam prior would suggest. That does not, however, mean that all signaling behaviors are learned. Many animals have instinctual mating rituals, and it would be quite surprising if the evolutionary pressures that enable these to develop in other species were entirely absent in humans.

Much easier to assume that signal detection and self-reflection add up to standard reinforcement, as signal-detection and self-reflection are independently useful, while standalone signaling behaviors are not.

I would expect signaling to show up both in reinforced behaviors and in the rewards themselves (the feeling of having signaled a given trait could feel rewarding). Again, most are probably behaviors that have been rewarded or learned memetically, but given the large and diverse set of signaling behaviors, the more complex explanation probably applies to some (but not most) of them.

• People do all sorts of insane things for reasons other than signaling, though. Like because their parents did it, or because the behavior was rewarded at some point.

Minor quibble: the conscious reasons for someone’s actions may not be signaling, but that may be little more than a rationalization for an unconsciously motivated attempt to signal some quality. Mating is filled with such signalling. While most people probably have some vague idea about sending the right signals to the opposite (or same) sex, few people realize that they are subconsciously sending and responding to signals. All they notice are their feelings.

• Minor quibble: the conscious reasons for someone’s actions may not be signaling, but that may be little more than a rationalization for an unconsciously motivated attempt to signal some quality.

If you read the rest of the comment to which you are replying, I pointed out that it’s effectively best to assume that nobody knows why they’re doing anything, and that we’re simply doing what’s been rewarded.

That some of those things that are rewarded can be classed as “signaling” may actually have less to do (evolutionarily) with the person exhibiting the behavior, and more to do with the person(s) rewarding or demonstrating those behaviors.

IOW, we may not have an instinct to “signal”, but only to imitate what we see others responding to, and to do more of what gets appropriate responses. That would allow our motivation to be far less conscious, for one thing.

(Somewhat-unrelated point: the most annoying thing about trying to study human motivation is the implicit assumption we have that people should know why they do things. But when viewed from an ev. psych perspective, it makes more sense to ask why there is any reason for us to know anything about our own motivations at all. We don’t expect other animals to have insight into their own motivation, so why would we expect that, at 5% difference from a chimpanzee, we should automatically know everything about our own motivations? It’s absurd.)

• I’m not sure that the class of all actions that are motivated by signaling is the same as (or a subset of) the class of all actions that are rewarded. At least, if by rewarded, you mean something other than the rewards of pleasure and pain that the brain gives.

• This activity doesn’t sound terribly promising to me, but it DOES sound like the sort of thing that is easy enough to try that people should try it rather than just criticizing it. One of the great insights of the Enlightenment is that math-guys are more averse to doing actual experiments to test their ideas than they should be if they are trying to understand the world and thus to win.

Points for pjeby’s comment once again, by the way.

• Merely stylistic, I think, but I’d avoid “It has been said that...”. Aside from being inappropriate passive voice (Who said it?), it has that weird feel of invoking ancient wisdom. That’s only cute when Eliezer does it.

• I don’t think it’s anywhere near as easy or as effective as you seem to suggest. In the land of theories so informal that there is a nontrivial question of whether they are trivial, one can get a hint of which statements are correct by other means and then rationalize these hints using the Great Theory. If it’s not possible to tell which statements are correct, one can argue that the questions are too tough, and any theory has limits to its application.

• Try this with Natural Selection and you will find that it can explain just about any animal behavior, even fake animal behavior. What should be the takeaway lesson from this?

• ...even fake animal behavior.

I didn’t understand Psychohistorian’s post as suggesting that we should make up fictional data—for then of course it may be no surprise that the given theory would have to bend in order to accommodate it. Rather, we should take real data, which is not explained by the theory (but which is understood in light of some different theory), and see just how easily the advocate can stretch his explanation to accommodate it. Does he/she notice the stretch? Can he/she resolve the difference between that data and the others?

What should be the takeaway lesson from this?

People get into man-with-a-hammer mode with evolutionary explanations. A lot. Because of the nature of evolutionary biology, sometimes they just reason like, “I can imagine what advantages this feature could have conferred in the past. Thus, …”. And yes, a lot of the time what you get is ad hoc crap.

• But what if we don’t know which data is actually explained by the theory or not? That will make it hard to come up with “real data, which is not explained by the theory”.

• Rather, we should take real data...

Not quite. The idea is to see if the theory can convincingly explain fake data. If it can, it doesn’t mean the theory is wrong; it just means your capacity to infer things from it is limited. Natural selection is interesting and useful, but it is not a reliable predictor in many cases. You routinely see people say, “The market must do X because of Y.” If they could say basically the exact same thing about ~X, then it’s a fake explanation; their theory really doesn’t tell them what the market will do. If a theory can convincingly explain false data, you’ve got to be very cautious in using it to make predictions or claims about the world.

Conversely, theories with extremely high predictive power will consistently pass the test. If you used facts drawn from physics or chemistry, a competent test-taker should always spot the false data, because our theories in physics and chemistry mostly have extremely precise predictive power.

• That one ought not to attempt utterly unsupported natural-selection-based inferences in the domain of animal behavior, but rather limit them to the domain of physical characteristics.

• Do you have any examples in mind? It seems to me that only a misunderstanding of natural selection could explain fake animal behavior.

• It seems that it is almost as easy to come up with a Natural Selection story explaining why a bird in a certain environment would move slowly, stealthily, and have feathers colored much like the ground as it is to explain why a bird moves quickly, calls loudly, and is brightly colored. The ability to explain animal behavior and physical characteristics using Natural Selection seems in large part up to the creativity of the person doing the explaining.

• Birds, which fly, or which descend from birds that flew, are often brightly colored. Animals that don’t fly are seldom brightly colored. A major exception is animals or insects that are poisonous. Natural selection handles this pretty well.

• Natural selection doesn’t explain why or predict that a bird might have detrimental traits such as bright coloring that can betray it to predators. Darwin invented a whole other selective mechanism to explain the appearance of such traits—sexual selection, later elaborated into the Handicap principle. Sexually selected traits are necessarily historically contingent, but you can’t just explain away any hereditary handicap as a product of sexual selection: the theory makes the nontrivial prediction that mate selection will depend on such traits.

• Sexual selection is just a type of natural selection, not a different mechanism. Just look at genes and be done with it.

• I wish I could upvote this comment twice.

• Why? I didn’t really feel like trying to win over Michael Vassar, but since you feel so strongly about it, I should point out that biologists do find it useful to distinguish between “ecological selection” and “sexual selection”.

• For an analogy, consider the fact that mathematicians also find it useful to distinguish between “squares” and “rectangles”—but they nevertheless correctly insist that all squares are in fact rectangles.

The problem here isn’t that “sexual selection” isn’t a useful concept on its own; the problem is the failure to appreciate how abstract the concept of “natural selection” is.

I have a similar feeling, ultimately, about the opposition between “natural selection” and “artificial selection”, even though that contrast is perhaps more pedagogically useful.

• The problem here isn’t that “sexual selection” isn’t a useful concept on its own; the problem is the failure to appreciate how abstract the concept of “natural selection” is.

I think there’s a substantive dispute here, not merely semantics. The original complaint was that Natural Selection was an unconstrained theory; the point of my comment was that in specific cases, the actual operating selective mechanisms obey specific constraints. The more abstract a concept is (in OO terms, the higher in the class hierarchy), the fewer constraints it obeys. Saying that natural selection is an abstract concept that encompasses a variety of specific mechanisms is all well and good, but you can’t instantiate an abstract class.

• Sexually selected traits are necessarily historically contingent, but you can’t just explain away any hereditary handicap as a product of sexual selection: the theory makes the nontrivial prediction that mate selection will depend on such traits.

Hmm. Generalization: a theory that concentrates probability mass in a high-dimensional space might not do so in a lower-dimensional projection. This seems important, but maybe only because I find false claims of nonfalsifiability/lack of predictive power very annoying.

• I’m having trouble seeing the relation between your comment and mine, but I’m intrigued and ~~wish to subscribe to your newsletter~~ would like to see it spelled out a bit.

• I expect that, in practice, the theory holders will demur and decline to offer answers. They will instead say, “Those aren’t the kinds of questions that my theory is designed to answer. My theory answers questions like . . .”. They will then suggest questions that are either vague, impossible to answer empirically, or whose answers they already know.

That’s my theory, and I’m stickin’ to it :).

• Well, at least you know you can start ignoring them if they say that …

• This is why I am not a libertarian. I believe in liberty in general, but it is just one more tool for getting what we want, and should be treated accordingly.

But you need liberty to have wants.

No. Thou art physics, remember? Postulating magical preferences that materialize from the ether is silly. Your wants are shaped by forces beyond your control. Balking at a few more restrictions in cases where there is clearly a net good is silly.

• No, you shouldn’t make broad inferences about human behaviour without any data just because they are consistent with evolution, unless your application of the theory of evolution is so precise and well-informed that you can consistently pass the Two-Truths-and-a-Lie Test.

This sentence could have beneficially ended after “behavior,” “data,” or “evolution.” The last clause seems to be begging the question—why am I assuming the Two-Truths-and-a-Lie Test is so valuable? Shouldn’t the test itself be put to some kind of test to prove its worth?

• I think that it should be tested on our currently known theories, but I do think it will probably perform quite well. This is on the basis that it’s analogically similar to cross-validation, in the way that Occam’s Razor is similar to the information criteria (Akaike, Bayesian, Minimum Description Length, etc.) used in statistics.

I think that, in some sense, it’s the porting over of a statistical idea to the evaluation of general hypotheses.
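To make the analogy concrete, here is a toy sketch (mine, not from the thread; the claim pool and both “theories” are hypothetical stand-ins). A theory is modeled as a plausibility function over claims, and it “guesses” the lie as whichever claim it finds least plausible. A theory that rates everything plausible can only find the lie at chance; one that actually discriminates catches it every time:

```python
import random

random.seed(0)  # reproducible runs

def run_test(theory, claims, n_rounds=1000):
    """Score a plausibility function on Two Truths and a Lie.

    Each round presents two true claims and one lie; the theory
    guesses the lie as the claim it rates least plausible.
    Returns the fraction of rounds where the guess was right.
    """
    true_claims = [c for c, is_true in claims if is_true]
    false_claims = [c for c, is_true in claims if not is_true]
    correct = 0
    for _ in range(n_rounds):
        trio = random.sample(true_claims, 2) + [random.choice(false_claims)]
        lie = trio[2]
        random.shuffle(trio)
        correct += min(trio, key=theory) == lie
    return correct / n_rounds

# Hypothetical claim pool: even-numbered claims are the "true" ones.
claims = [("claim%d" % i, i % 2 == 0) for i in range(10)]

# A One Theory that explains everything rates every claim equally plausible.
explains_everything = lambda c: 1.0
# A discriminating theory rates the false claims as implausible.
discriminating = lambda c: 1.0 if int(c[5:]) % 2 == 0 else 0.1

print(run_test(explains_everything, claims))  # hovers around 1/3, i.e. chance
print(run_test(discriminating, claims))       # 1.0
```

The design choice worth noticing: the “explain everything” theory is not penalized for being wrong about the lie, only for being unable to tell the claims apart, which is exactly the failure mode the post describes.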

• OK, so my favorite man-with-a-hammer du jour is the “everyone does everything for selfish reasons” view of the world. If you give money to charity, you do it for the fuzzy feeling, not because you are altruistic.

What would you propose as the three factual claims to test this? I’m having a hard time figuring out any that would be a useful discriminant.

Thinking about this a bit, it seems most useful to assert negative factual claims, i.e., “X never happens.”

• OK, so my favorite man-with-a-hammer du jour is the “everyone does everything for selfish reasons” view of the world. If you give money to charity, you do it for the fuzzy feeling, not because you are altruistic.

That’s not a disagreement about the nature of the world; it’s a disagreement about the meaning of the word “altruistic”.

• A modification for this game: come up with a list of 3 (or more) propositions, each of which is independently randomly true or false. This way the predictive theory can fail or succeed in a more obvious way.

• I think this is cross-validation for tests. There have been several posts on Occam’s Razor as a way to find correct theories, but this is the first I have seen on cross-validation.

In machine learning and statistics, a researcher often is trying to find a good predictor for some data, and often has some “training data” which can be used to select the predictor from a class of potential predictors. Often more than one predictor performs well on the training data, so the question is how else one can choose an appropriate predictor.

One way to handle the problem is to use only a class of “simple predictors” (I’m fudging details!) and then use the best one: that’s Occam’s razor. Theorists like this approach and usually attach the word “information” to it. The other, “practitioner” approach is to use a bigger class of predictors, where you tune some of the parameters on one part of the data and tune other parameters (often hyper-parameters, if you know the jargon) on a separate part of the data. That’s the cross-validation approach.
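As a hedged illustration of those two routes (the data, the degree range, and the AIC-style penalty are my own toy choices, not anything from the thread), here is a sketch that selects a polynomial degree both ways: once by penalizing training fit by parameter count, and once by scoring on held-out data:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 40)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.2, x.size)  # noisy toy data

# One random split: 30 training points, 10 held-out points.
idx = rng.permutation(x.size)
train_idx, hold_idx = idx[:30], idx[30:]

aic_scores, holdout_scores = {}, {}
for deg in range(1, 10):
    coeffs = np.polyfit(x[train_idx], y[train_idx], deg)
    train_resid = y[train_idx] - np.polyval(coeffs, x[train_idx])
    train_rss = float(train_resid @ train_resid)

    # "Occam" route: penalize training fit by parameter count (AIC-style).
    n = train_idx.size
    aic_scores[deg] = n * np.log(train_rss / n) + 2 * (deg + 1)

    # "Practitioner" route: score the same fit on the held-out points.
    hold_resid = y[hold_idx] - np.polyval(coeffs, x[hold_idx])
    holdout_scores[deg] = float(hold_resid @ hold_resid)

best_by_aic = min(aic_scores, key=aic_scores.get)
best_by_holdout = min(holdout_scores, key=holdout_scores.get)
print(best_by_aic, best_by_holdout)
```

Both scores exist to stop the unconstrained answer (“always pick the highest degree, it fits training best”), which is the statistical analogue of a theory that can explain anything.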

There are some results on the asymptotic equivalence of the two approaches. But what’s cool about this post is that I think it offers a way to apply cross-validation to an area where I have never heard it discussed (I think, in part, because it’s the method of the practitioner and not so much the theorist—there are exceptions, of course!)

• This reminds me of David Deutsch’s talk “A new way to explain explanation” (oddly enough, also reposted here).