# Why Bayes? A Wise Ruling

Why is Bayes’ Rule useful? Most explanations of Bayes explain the how of Bayes: they take a well-posed mathematical problem and convert given numbers to desired numbers. While Bayes is useful for calculating hard-to-estimate numbers from easy-to-estimate numbers, the quantitative use of Bayes requires the qualitative use of Bayes, which is noticing that such a problem exists. When you have a hard-to-estimate number that you could figure out from easy-to-estimate numbers, then you want to use Bayes. This mental process of testing beliefs and searching for easy experiments is the heart of practical Bayesian thinking. As an example, let us examine 1 Kings 3:16-28:
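The quantitative step can be made concrete with a minimal sketch; the prior and likelihoods below are invented for illustration, not taken from the text:

```python
# A sketch of Bayes' Rule converting easy-to-estimate numbers (a prior
# and two likelihoods) into a hard-to-estimate number (the posterior).
# All figures are invented for illustration.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H | E) from P(H), P(E | H), and P(E | not-H)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Easy to estimate: a 1% prior, a test that fires 90% of the time when
# H is true and 10% of the time when it is false.
print(posterior(0.01, 0.9, 0.1))  # ≈ 0.083, the hard-to-estimate number
```

Note how the hard number falls out mechanically once the easy numbers are written down; noticing that the decomposition exists is the qualitative skill the paragraph above describes.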

Now two prostitutes came to the king and stood before him. One of them said, “Pardon me, my lord. This woman and I live in the same house, and I had a baby while she was there with me. The third day after my child was born, this woman also had a baby. We were alone; there was no one in the house but the two of us.

“During the night this woman’s son died because she lay on him. So she got up in the middle of the night and took my son from my side while I your servant was asleep. She put him by her breast and put her dead son by my breast. The next morning, I got up to nurse my son—and he was dead! But when I looked at him closely in the morning light, I saw that it wasn’t the son I had borne.”

The other woman said, “No! The living one is my son; the dead one is yours.”

But the first one insisted, “No! The dead one is yours; the living one is mine.” And so they argued before the king.

The king said, “This one says, ‘My son is alive and your son is dead,’ while that one says, ‘No! Your son is dead and mine is alive.’”

Notice that Solomon explicitly identified competing hypotheses, raising them to the level of conscious attention. When each hypothesis has a personal advocate, this is easy, but it is no less important when considering other uncertainties. Often, a problem looks clearer when you branch an uncertain variable on its possible values, even if it is as simple as saying “This is either true or not true.”

Then the king said, “Bring me a sword.” So they brought a sword for the king. He then gave an order: “Cut the living child in two and give half to one and half to the other.”

The woman whose son was alive was deeply moved out of love for her son and said to the king, “Please, my lord, give her the living baby! Don’t kill him!”

But the other said, “Neither I nor you shall have him. Cut him in two!”

Then the king gave his ruling: “Give the living baby to the first woman. Do not kill him; she is his mother.”

Solomon considers the empirical consequences of the competing hypotheses, searching for a test which will favor one hypothesis over another. When considering one hypothesis alone, it is easy to find tests which are likely if that hypothesis is true. The true mother is likely to say the child is hers; the true mother is likely to be passionate about the issue. But that’s not enough; we need to also estimate how likely those results are if the hypothesis is false. The false mother is equally likely to say the child is hers, and could generate equal passion. We need a test whose results significantly depend on which hypothesis is actually true.

Witnesses or DNA tests would be more likely to support the true mother than the false mother, but they aren’t available. Solomon realizes that the claimants’ motivations are different, and thus putting the child in danger may cause the true mother and false mother to act differently. The test works, generates a large likelihood ratio, and now his posterior firmly favors the first claimant as the true mother.
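In odds form, the update Solomon performs can be sketched as follows; the specific likelihoods are invented for illustration, nothing in the text supplies them:

```python
# Odds-form Bayes applied to Solomon's test, with invented likelihoods.
# H = "the first claimant is the true mother."
# E = "the first claimant begs to spare the child; the second accepts
#      the division."

def update_odds(prior_odds, likelihood_ratio):
    """Posterior odds = prior odds * P(E | H) / P(E | not-H)."""
    return prior_odds * likelihood_ratio

# Before the sword, the two claims are symmetric: prior odds of 1.
# Suppose the true mother begs 95% of the time while the false mother
# does so only 5% of the time: a likelihood ratio of about 19.
post = update_odds(1.0, 0.95 / 0.05)
print(post / (1 + post))  # ~0.95: the posterior firmly favors H
```

Contrast this with the tests Solomon rejects: “says the child is hers” has a likelihood ratio near 1, so the posterior odds barely move no matter how confidently each woman testifies.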

When all Israel heard the verdict the king had given, they held the king in awe, because they saw that he had wisdom from God to administer justice.

• It suddenly occurs to me that the first woman is the right choice for raising the child, regardless of who the birth mother is.

I wonder if Solomon had plans in mind if both women had said the same thing.

• I wonder if Solomon had plans in mind if both women had said the same thing.

That’s what the next pair of claimants did, after learning about the case. That time Solomon’s decision was not wise enough to be included in the sacred texts: he sold the baby into slavery and then promptly executed both claimants. Not surprisingly, no further cases like this were brought before the king.

• I wonder if Solomon had plans in mind if both women had said the same thing.

Parchment, shears, rock.

• *stone

• *carbuncle

• This is an excellent point I should’ve noticed myself (though it’s been long and long since I encountered the parable). Who says you own a baby just by being its genetic mother?

Albeit sufficiently young babies are plausibly not sentient.

• What definition of “sentient” are you using, such that young babies don’t meet it?

• One that also excludes animals, but includes healthy adult humans.

• Bizarre. In lieu of a reply by Eliezer himself clarifying things, I am left to understand he thinks that some portion of humans otherwise possessing the structural and anatomical necessities for sensation don’t experience anything even when all their sense organs are working fine, and that animals in general are basically just meat-automata with no inner life at all. Even when they’re communicating about those inner states and have the same structural correlates of various sensations we’d expect to see, and react in ways that sure look like expression of sensation or emotion (even if you sometimes need to be familiar with their particular body language).

That feels a lot more like a strawman than anything, because it’s just so obviously bollocks. If I step on my cat’s tail by mistake, she doesn’t yowl and run from me because “Nociceptor activation threshold met; initiate yowl-and-run subroutine.” She does it because it’s painful and it startled her. I know there are people who honestly believe something like that about nonhuman life across the board, but I hadn’t gotten the impression Eliezer was one.

Someone clear this up for me?

• Sentient vs. Sapient is one of the most common word confusions in the English language. If someone says “sentient,” but the context appears to suggest “sapient,” they probably mean sapient.

• The bit that’s bothering me about that is that “sapient” is a term of art—it’s science fiction shorthand employed with a purpose (it denotes personhood for the reader, in a field where blatantly-nonhuman but unambiguously-personlike entities are common). It divides the field of hypothetical entities into two neat, clean categories: people no matter what their substrate, appearance, anatomy or drives, and everything from animals of every sort to plants and grains of sand.

It just seems like a weird way of dividing up the world, and more of a cultural artefact than anything; a marker on the map which corresponds to nothing in the territory.

• People often use ‘sentient’ to mean ‘sapient’, and it may be that Eliezer intends the latter. It’s at least pretty plausible that animals and very young infants are not sapient, namely not capable of judgement, and that this capacity is what would endow one with a certain autonomy.

• I respectfully disagree: sapience is an acquired subjective quality, therefore it is trivial to disregard. Now sentience is orders of magnitude more complex. I was going to say “inherent” to the species, but is it? Now this is supposed to be “the easy problem”; go figure that.

• “Soul”, gotcha. Binary personhood marker. Reified concept not sufficiently unpacked. Okie.

• That’s a rather uncharitable misinterpretation of what hen wrote, caused by anger and frustration, I’m guessing.

• 1) “Nociceptor activation threshold met; initiate yowl-and-run subroutine.” 2) She does it because it’s painful and it startled her.

What’s the difference between 1 and 2?

• 1 presumes that minimalist descriptions of superficially-visible output are all you need to reconstruct the actual drivers behind the behavior. 2 presumes that the evolutionarily-shared neural architecture and its basic components of perception, cognition and so forth are not separated by a barrier of magical reality fluid.

• Ah. If you’re saying that 1) implies lesser internal machinery than 2), and that the internal machinery (cognition and so forth) is what’s important, then I agree.

The problem, I think, is just that they both sound to me like perfectly reasonable (if vague) descriptions of complex, sentient human pain. It seemed like you were saying nociceptors and subroutines were incapable of producing pain and startlement.

• The problem, I think, is just that they both sound to me like perfectly reasonable (if vague) descriptions of complex, sentient human pain.

1 sounds to me like an attempt to capture output in the form of a flowchart. It’s like trying to describe the flocking behavior of birds by reference to the Boids cellular automaton—and insisting not that there are similar principles at work in how the birds go about solving the problem of flocking, but that birds literally run an instance of Boids in their heads and that’s all there is to their flocking behavior.

• I agree that “Eliezer believes animals and nonpathological infants are just meat-automata who don’t actually possess the mental states they’re communicating about” is a strawman. I’m not really sure what remains to be cleared up. Can you clarify the question?

• Basically what I asked Eliezer: What sense of the word “sentient” is he using, such that babies plausibly don’t qualify? My de facto read of the term and a little digging around Google show two basic senses:

-Possessing sensory experiences (I’m pretty sure insects and even worms do that)
-SF/F writer’s term for “assume this fictional entity is a person” (akin to “sapient”; it’s a binary personhood marker, or a secularized soul—it tells the reader to react accordingly to this character’s experiences and behavior)

The latter, applied to the real world, sounds rather more like “soul” than anything coherent and obvious. The former, denied in babies, sounds bizarre and obviously untrue. So... I’m missing something, and I’d like to know what it is.

• Maybe the best way to approach this question is backwards. I assume you believe that people (at least) have some moral worth such that they ought not be owned, whimsically destroyed, etc. I also assume you believe that stones (at least) have no moral worth and can be owned, whimsically destroyed, etc. without any immediate moral consequences. So 1) tell me where you think the line is (even if it’s a very fuzzy, circumstantial one) and 2) tell me in virtue of what something has or lacks such moral worth.

...or 3) toss out my questions and tell me how you think it goes on your own terms.

• I assume you believe that people (at least) have some moral worth such that they ought not be owned, whimsically destroyed, etc.

Essentially. I don’t consider it a fact-about-the-world per se, but that captures my alief pretty well.

I also assume you believe that stones (at least) have no moral worth and can be owned, whimsically destroyed, etc. without any immediate moral consequences.

Eh. Actually I have some squick to cavalier destruction or disruption of inanimate objects, but they don’t register as the same thing. So we’ll go with that.

…or 3) toss out my questions and tell me how you think it goes on your own terms.

To what extent does an entity respond dynamically to both present and historical conditions in terms of impacts on its health, wellbeing, emotional and perceptual experiences, social interactions and so on? To what extent is it capable of experiencing pain and suffering? To what extent does modifying my behavior in response to these things constitute a negative burden on myself or others? To what extent do present circumstances bear on all those things?

Those aren’t so much terms in an equation as independent axes of variance. There are probably some I haven’t listed. They define the shape of the space; the actual answer to your question is lurking somewhere in there.

• Thanks, that’s helpful. Given what you’ve said, I doubt you and EY would disagree on much. EY says in his metaethics sequence that moral facts and categories like ‘moral worth’ or ‘autonomy’ are derived properties. In other words, they don’t refer to anything fundamental about the world, but supervene on some complex set of fundamental facts. Given that that’s his view, I think he was just using ‘sentience’ as a shorthand for something like what you’ve written: note that many of the considerations you describe are importantly related to a capacity for complex experiences.

• note that many of the considerations you describe are importantly related to a capacity for complex experiences.

Except I’ve interacted with bugs in ways that satisfied that criterion (and that did parse out as morally-good), so clearly the devil’s in the details. If Eliezer suspects young children may reliably not qualify, and I suspect that insects may at least occasionally qualify, we’re clearly drawing very different lines and have very different underlying assumptions about reality.

• I assume you believe that people (at least) have some moral worth such that they ought not be owned, whimsically destroyed, etc. I also assume you believe that stones (at least) have no moral worth and can be owned, whimsically destroyed, etc. without any immediate moral consequences. So 1) tell me where you think the line is (even if it’s a very fuzzy, circumstantial one)

What makes you think there’s a line? I care more about killing (or torturing) a dog than a stone, but less so than a human. Pulling the wings off flies provokes a similar, if weaker, reaction. A continuum might complicate the math slightly, but …

• “Self-aware” is one soul-free interpretation of sentient/sapient, often experimentally measured by the mirror test. By that metric, humans are not sentient until well into the second year, and most species we would consider non-sentient fail it. Of course, treating non-self-aware human babies as non-sentient animals is quite problematic. Peter Singer is one of the few people brave enough to tread into this topic.

• The mirror test is interesting for sure, especially in a cross-species context. However, I’m far from convinced about the straightforward reading of “the expected response indicates the subject has an internal map of oneself.” Since you read the Wikipedia article down that far, you could also scroll down to the “Criticisms” section and see a variety of objections to that.

Moreover, when asked to choose between the interpretation that the test isn’t sufficient for its stated purpose, and the interpretation that six-year-olds in Fiji aren’t self-aware, I rather suspect the former is more likely.

Besides all that, even if we assume self-awareness is the thing you seem to be making of it, I’m not clear how that would draw the moral-worth line so neatly between humans (or some humans) and literally everything else. From a consequentialist perspective, if I assume that dogs or rats can experience pain and suffering, it seems weird to exclude them from my utility function on the basis that they don’t jump through that particular (ambiguous, methodologically-questionable) experimental hoop.

• Oh, I agree that the mirror test is quite imperfect. The practical issue is how to draw a Schelling fence somewhere sensible. Clearly mosquitoes can be treated as non-sentient, clearly most humans cannot be. Treating human fetuses and some mammals as non-sentient is rather controversial. Just “experiencing pain” is probably too wide a net for moral worth, as nociceptors are present in most animals, including the aforementioned mosquito. Suffering is probably a more restrictive term, but I am not aware of a measurable definition of it. It is also probably sometimes too narrow, as most of us would find it immoral to harm people who do not experience suffering due to a mental or a physical issue, like pain insensitivity or asymbolia.

• Clearly mosquitoes can be treated as non-sentient,

Disagree that it’s clear. I’ve had interactions with insects that I could only parse as “interaction between two sentient beings, although there’s a wide gulf of expectation and sensation and emotion and so forth which pushes it right up to the edges of that category.” I’ve not had many interactions with mosquitoes beyond “You try to suck my blood because you’re hungry and I’m a warm, CO2-breathing blood source in your vicinity”, but I assume that there’s something it feels like to be a mosquito, that it has a little mosquito mind that might not be very flexible or impressive when weighed against a human one, but it’s there, it’s what the mosquito uses to navigate its environment and organize its behavior intelligibly, and all of its searching for mates and blood and a nice place to lay eggs is felt as a drive… that in short it’s not just a tiny little bloodsucking p-zombie. That doesn’t mean I accord it much moral weight either—I won’t shed any tears over it if I should smash it while reflexively brushing it aside, even though I’m aware arthropods have nociception and, complex capacity for emotional suffering or not, they still feel pain and I prefer not to inflict that needlessly (or without a safeword).

But I couldn’t agree it isn’t sentient, that it’s just squishy clockwork.

Just “experiencing pain” is probably too wide a net for moral worth, as nociceptors are present in most animals, including the aforementioned mosquito.

It seems to me that the problem you’re really trying to solve is how to sort the world into neat piles marked “okay to inflict my desires on regardless of consequences” and “not okay to do that to.” Which is probably me just stating the obvious, but the reason I call attention to it is I literally don’t get that. The universe just is not so tidy; personhood or whatever word you wish to use is not just one thing, and the things that make it up seem to behave such that the question is less like “Is this a car or not?” and more like “Is this car worth 50,000 dollars, to me, at this time?”

Suffering is probably a more restrictive term, but I am not aware of a measurable definition of it.

That is ever the problem—you can’t even technically demonstrate without lots of inference that your best friend or your mother really suffers. This is why I don’t like drawing binary boundaries on that basis.

It is also probably sometimes too narrow, as most of us would find it immoral to harm people who do not experience suffering due to a mental or a physical issue, like pain insensitivity or asymbolia.

Though strangely enough, plenty of LWers seem to consider many disorders with similarly pervasive consequences for experience to result in “lives barely worth living...”

• My (but not necessarily yours) concern with all this is a version of the repugnant conclusion: if you assign some moral worth to mosquitoes or bacteria, and you allow for non-asymptotic accumulation based on the number of specimens, then there is some number of bacteria whose moral worth is at least one human’s. If you don’t allow for accumulation, then there is no difference between killing one mosquito and 3^^^3 of them. If you impose asymptotic accumulation (no amount of mosquitoes has moral worth equal to that of one human, or one cat), then the goalpost simply shifts to a different lifeform (how many cats are worth a human?). Imposing an artificial Schelling fence at least provides some solution, though far from universal. Thus I’m OK with ignoring suffering or moral worth of some lifeforms. I would not approve of needlessly torturing them, but mostly because of the anguish it causes humans like you.

You seem to suggest that there is more than one dimension to moral worth, but, just like with a utility function or with deontological ethics, eventually it comes down to making a decision, and all your dimensions converge into one.
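The three accumulation regimes distinguished in the comment above can be sketched numerically; the per-specimen weight, the cap, and the bounded functional form below are arbitrary choices for illustration only:

```python
import math

# Toy models of how the moral worth of n specimens might accumulate,
# one per regime named in the comment above. Human worth is normalized
# to 1; the weight w and cap are arbitrary illustrative values.

def linear(n, w=1e-9):
    # Non-asymptotic accumulation: enough specimens outweigh anything.
    return n * w

def bounded(n, w=1e-9, cap=0.5):
    # Asymptotic accumulation: total worth approaches `cap`,
    # so no number of specimens ever reaches one human.
    return cap * (1 - math.exp(-n * w / cap))

def no_accumulation(n, w=1e-9):
    # No accumulation: one mosquito and 3^^^3 of them weigh the same.
    return w if n > 0 else 0.0

# Linear: some n always exceeds a human's worth (here n > 1e9).
assert linear(2_000_000_000) > 1
# Bounded: even an absurd n stays below a human's worth.
assert bounded(10**30) < 1
```

The sketch only restates the trilemma: each regime dodges one repugnant conclusion at the cost of inviting another, which is why the comment falls back on a Schelling fence rather than a formula.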

• My (but not necessarily yours) concern with all this is a version of the repugnant conclusion: if you assign some moral worth to mosquitoes or bacteria, and you allow for non-asymptotic accumulation based on the number of specimens, then there is some number of bacteria whose moral worth is at least one human’s.

Sure, that registers—if there were a thriving microbial ecosystem on Mars, I’d consider it immoral to wipe it out utterly simply for the sake of one human being. Though I think my function-per-individual is more complicated than that; wiping it out because that one human is a hypochondriac is more-wrong in my perception than wiping it out because, let’s say, that one human is an astronaut stranded in some sort of weird microbial mat, and the only way to release them before they die is to let loose an earthly extremophile which will, as a consequence, propagate across Mars and destroy all remaining holdouts of the local biosphere. That latter is very much more a tossup, such that I don’t view other humans going ‘Duh, save the human!’ as exactly committing an atrocity or compounding the wrong. Sometimes reality just presents you with situations that are not ideal, or where there is no good choice. No-win situations happen, unsatisfying resolutions and all. That doesn’t mean do nothing; it just means trying to set up my ethical and moral framework to make them impossible feels silly.

Imposing an artificial Schelling fence at least provides some solution, though far from universal.

To be honest, that’s all this debate really seems to be to me—where do we set that fence? And I’m convinced that the decision point is more cultural and personal than anything, such that the resulting discussion does not usefully generalize.

You seem to suggest that there is more than one dimension to moral worth, but, just like with a utility function or with deontological ethics, eventually it comes down to making a decision, and all your dimensions converge into one.

And once I do, even if my decision was as rational as it can be under the circumstances and I’ve identified a set of priorities most folks would applaud in principle, there’s still the potential for regrets and no-win situations. While a moral system that genuinely solved that problem would please me greatly, I see no sign that you’ve stumbled upon it here.

• I’ve had interactions with insects that I could only parse as “interaction between two sentient beings”

Why stop there? Humans have also had interactions with lightning that they could only parse as interactions between two sentient beings!

• -Possessing sensory experiences (I’m pretty sure insects and even worms do that)

Are you claiming that insects and worms possess functioning sense-organs, or that they possess subjective experience of the resulting sense-data? I find the latter somewhat unlikely wrt insects and worms. Regarding babies, it doesn’t seem “obviously untrue” to me that babies lack subjective experience. Though nor does it seem obviously true.

• Are you claiming that insects and worms possess functioning sense-organs, or that they possess subjective experience of the resulting sense-data?

I’m trying to figure out why you think there’s a difference between the two, at least when dealing with anything possessing a nervous system.

• A nervous system is just a lump of matter, the same as any other. Another object with functioning sense-organs is my laptop, yet I wouldn’t say my laptop possesses subjective experience.

• A nervous system is just a lump of matter, the same as any other.

So you will have no objection to me replacing your brain with an intricately-carved wooden replica, then?

Another object with functioning sense-organs is my laptop, yet I wouldn’t say my laptop possesses subjective experience.

1. How would you know if it did?

2. If you don’t think a nervous system is relevant there, I’m curious to know what you think is behind you having subjective experiences, and if you believe in p-zombies. Your laptop doesn’t organize that sense input and integrate it into a complex system. But even simple organisms do that.

• Your response suggests you do understand the distinction between possessing sensory information and subjective experience of the same. As such, I suppose my job here is complete. But nevertheless:

The important thing is not the composition of an object, but its functionality. An intricately-carved wooden machine that correctly carried out the functionality of my brain would be a fine replacement, even if it lacks the elan vital neural matter supposedly has.

My laptop doesn’t have subjective experience. You do. An elephant most likely does. What about Watson? The robots in those robot soccer competitions? Or BigDog?

My opinion on zombies is LW standard.

• I wouldn’t say my laptop possesses subjective experience

How would you know if it did?

• Ah! I understand, now. Thanks for clarifying.

I mostly understand “sentient,” as most people use the term, in the second sense. Eliezer in particular seems to use “sentient” and “person” pretty much interchangeably here, for example, without really defining either, so I understand him to use the word similarly.

The latter, applied to the real world, sounds rather more like “soul” than anything coherent and obvious.

Were I inclined to turn this assertion into a question, it would probably be something like “what properties does a typical adult have that a typical 1-year-old lacks which makes it more OK to kill the latter than the former?”

Is that the question you’re asking?

• Were I inclined to turn this assertion into a question, it would probably be something like “what properties does a typical adult have that a typical 1-year-old lacks which makes it more OK to kill the latter than the former?” Is that the question you’re asking?

More or less, yeah.

• -SF/F writer’s term for “assume this fictional entity is a person” (akin to “sapient”; it’s a binary personhood marker, or a secularized soul—it tells the reader to react accordingly to this character’s experiences and behavior)

I realize you seem to have deleted your account, but: this.

• Albeit sufficiently young babies are plausibly not sentient.

My super-villain side just got slapped by my censors before it could formulate any way to exploit this. I’m still pondering whether this is a good thing.

• Hmm. I’m not sure I have the same censors.

My super-villain side went on to try to devise a way to emulate the Rai Stones economy using an abstract exchange of not-yet-sentient babies and various related opportunity costs, before realizing that even my super-villain side is not good enough at economics to conjure efficient economic systems out of thin air like that while making sure that they benefit him.

Certainly, however, my super-villain side did fall back on the secondary, less-desirable option of lending resources and medical assistance to pregnant mothers, such as to have legal ownership claim on the nonsentient babies in order to re-sell them for services or work or money to said mothers afterwards.

Does it sound like a good thing or a bad thing that I can think of this without flinching?

• Given the wording of the story, both women were in the practice of sleeping directly next to their babies. The other woman didn’t roll over her baby because she was wicked, she rolled over her baby because it was next to her while she slept. They left out the part where the “good mother” rolled over her own baby two weeks later and everyone just threw up their hands and declared “What can we do, these things just happen, ya’ know?”

• They left out the part where the “good mother” rolled over her own baby two weeks later and everyone just threw up their hands and declared “What can we do, these things just happen, ya’ know?”

Co-sleeping is controversial, not one-sided. It seems that co-sleeping increases the risk of smothering but decreases the risk of SIDS, leading to a net decrease in infant mortality. Always be wary of The Seen and The Unseen.

• On the other hand, the majority of related studies seem to be observational, rather than interventional, so it’s quite possible that both co-sleeping and observed “effects” are the result of some third factor, such as the attitude of the parent. For example, it’s likely that a parent who chooses to co-sleep is more well-disposed toward the infant, and is therefore far less likely to kill it deliberately (infanticide), thus making up some unknown decrease in the overall frequency of “SIDS”.

• For example, it’s likely that a parent who chooses to co-sleep is more well-disposed toward the infant, and is therefore far less likely to kill it deliberately (infanticide), thus making up some unknown decrease in the overall frequency of “SIDS”.

Indeed; this also probably explains some of the benefit of room-sharing.

• How much is the decrease? I imagine that the effect of being responsible for your child’s death by smothering is probably a lot more upsetting and mentally damaging than that of having a child die from SIDS. Maybe that’s lessened by knowing the above information; but most people don’t.

• How much is the decrease?

It’s hard to get solid numbers. Room-sharing (which is recommended) decreases SIDS rates by half, which will be the majority of the benefit of a transition from own-room sleeping to co-sleeping. It also seems like the overwhelming majority of smothering deaths involve other known risk factors, like smoking or drug use by the mother. It’s also frequently recommended against the infant sleeping with the father or siblings (by both sides). Epidemiological studies have the issue that co-sleeping is officially discouraged.

If you’re adding in psychological factors, though, there’s some research suggesting that co-sleeping is good for the infant / their later development.

As may be unsurprising to the cynic, much research on infant sleep is funded by crib manufacturers. My read of the issue is that co-sleeping was recommended against because of the known danger of smothering and the social benefit of parental independence from the infant, and that more information is slowly coming to light that the infant is better off co-sleeping with the mother, except when other risks are present.

• If you co-sleep in­tel­li­gently, it’s not even much of an is­sue. There’s lots of de­vices, both mod­ern and an­cient, you can use to keep the child within reach but at no risk of rol­ling over them.

• I ex­pected that. My own opinion is that if it is nec­es­sary for some rea­son, it’s a good idea, but per­son­ally I’d rather be pos­si­bly, in­di­rectly, and one in­stance of a poorly un­der­stood syn­drome re­spon­si­ble for my baby’s death than ac­tu­ally be­ing the one that crushed him.

It seems that sleep­ing sep­a­rately very dras­ti­cally de­creases your chances of per­son­ally kil­ling your baby in your sleep.

• Such are your desires, then, at the object level. But do you also desire that they be your desires? Are you satisfied with being the sort of person who cares more about avoiding guilt and personal responsibility than about the actual survival and well-being of his/her child? Or would you change your preferences, if you could?

• My desires concerning what my desires should be are also determined by my desires, so your question is not valid; it’s a recursive loop. You are first assuming that I care about anything at all, secondly assuming that I experience guilt at all, and thirdly that I would care about my children. As it turns out, you are correct on all three assumptions; just keep in mind that those are not always givens among humans.

What I was saying was that in the two situations (my child dies due to SIDS) and (my child dies due to me rolling over onto him), in the first situation not only could I trick myself into believing it wasn’t my fault, it’s also completely possible that it really wasn’t my fault, and that it had some other cause; in the second situation, there’s really no question, and a very concrete way to prevent it.

To answer your unasked question, I still do not alieve that keeping my child a safe distance away while sleeping but showing love and care at all other times increases her chance of SIDS. If I was to be shown conclusive research of cause and effect between them, I would reverse my current opinion, mos’ def.

• Your second-order desires are fixed by your desires as a whole, trivially. But they aren’t fixed by your first-order desires. So it makes sense for me to ask whether you harbor a second-order desire to change your first-order desires in this case, or whether you are reflectively satisfied with your first-order desires.

Consider the alcoholic who desires to stop craving alcohol (a second-order desire), but who continues to drink alcohol (because his first-order desires are stronger than his desire-desires). Presumably your first-order desires are currently defeating your second-order ones, else you’d have already switched first-order desires. But it doesn’t follow from this that your second-order desires are nonexistent!

Perhaps, for instance, your second-order desire is strong enough that if you could simply push a button to forever effortlessly change your first-order desires, you would do so; but your second-order desire isn’t so strong that you’ll change first-order desires by willpower alone, without having a magic button to press. This, I think, is an extremely common situation humans find themselves in. So I was curious whether you were satisfied or unsatisfied with your current first-order priorities.

I still do not alieve that keeping my child a safe distance away while sleeping but showing love and care at all other times increases her chance of SIDS. If I was to be shown conclusive research of cause and effect between them, I would reverse my current opinion, mos’ def.

So it’s not really the case that you’d prioritize psychological-guilt-avoidance over SIDS-avoidance? In that case the question is less interesting, since it’s just a matter of how well you can think yourself into the hypothetical in which you have to choose between, say, increasing your child’s odds of surviving by 1% and the cost of, say, increasing your guilt-if-the-child-does-die by 200%.

• In that case the question is less interesting, since it’s just a matter of how well you can think yourself into the hypothetical in which you have to choose between, say, increasing your child’s odds of surviving by 1% and the cost of, say, increasing your guilt-if-the-child-does-die by 200%.

I guess, but in real life I don’t sit down with a calculator to figure that out; I’d settle for some definitive research.

Your second-order desires are fixed by your desires as a whole, trivially. But they aren’t fixed by your first-order desires. So it makes sense for me to ask whether you harbor a second-order desire to change your first-order desires in this case, or whether you are reflectively satisfied with your first-order desires.

[all that quote], trivially. What I am saying is that even my “own” desires and the goals that I think are right are only what they are because of my biology and upbringing. If I seek to “debug” myself, it’s still only according to a value system that is adapted to perpetuate our DNA. So to answer truthfully, I am NOT satisfied with my first-order desires; in fact I am not satisfied with being trapped in a human body, from which the first-order desires are spawned.

• It seems that sleeping separately very drastically decreases your chances of personally killing your baby in your sleep.

In the story, maybe. I think nowadays you can get specially designed cribs that sort of merge onto the bed, so you’re co-sleeping but can’t roll onto your baby; see http://www.armsreach.com/

• I’m involved in a local Native American community, and one of the medicine elders I know often makes a sort of device for families with infant children, especially ones with colic or other sleep-disrupting conditions. It’s kind of a cradle-sling type thing you hang securely above your own bed; if kiddo’s crying but otherwise okay you can just reach up and rock them, and they’re otherwise within reach. I’ve seen replicas of the pre-contact version, and even made of birchbark and hung from the rafters of a lodge with sinew it’s evidently still quite sturdy and safe; like, you’d have to knock over the house for it to be an issue. These days, using modern materials, they’re even safer. So this goes back quite a long way.

• Then I still blame the mother in the story for not building one of those!

That is pretty neat; I wholeheartedly endorse using those, just in case. In the unlikely event that I produce more biological offspring, I will make use of that knowledge.

• She’s not seen as evil because she inadvertently killed her baby; she’s seen as evil because she stole the other woman’s baby and assented to killing it. Right?

• It was a property dispute, not a measurement of righteousness. The story served to illustrate Solomon’s wisdom; spiritual judgment of the women was not an issue. As for my opinion, I see both of them as stupid, and only evil to the degree that stupidity influences evil.

• Ah, I interpreted your comment as a response to the supposed judgment that the mother whose child died was wicked. That would seem to have been my b.

• Thwarted+joy beats desolation+schadenfreude as a utility win even if they were dividing a teddy bear.

• Who says you own a baby just by being its genetic mother?

Susan Okin’s attempted reductio ad absurdum of Robert Nozick says that. Though admittedly she did think that undergoing the pregnancy, not just being the genetic mother, was required.

• Albeit sufficiently young babies are plausibly not sentient.

This is why I reject binary “sentient/nonsentient” criteria for moral worth. If mentally subnormal adults or small children are worthless, then you have followed simplicity off a cliff.

In my expert opinion.

• You seem to equate “nonperson” with “worthless” here. Do you do that advisedly, or carelessly? And if the former, can you summarize your reasons for considering nonpersons worthless?

[ETA: the parent has been edited after this comment was written.]

• Excellent point. Edited.

• Fair enough.

Which raises the question: do you actually know anyone who considers small children worthless, or are you just bracketing here?

I mean, I know lots of people who consider small children (and various precursors to small children) to have less value than other things they value… indeed, I don’t know anyone who doesn’t, although there are certainly disagreements about what clears that bar and what doesn’t. But that needn’t involve walking off any cliffs… that’s just what it means to live in a world where we sometimes have to choose among things of value.

• Well, worthless is a mild exaggeration, but Eliezer has argued that eating babies is justified if they’re young enough. Infanticide (or “post-natal abortion”) is approved of by a small but real minority. I have yet to encounter anyone who thinks toddlers are equivalent to animals (who doesn’t use this to argue for animals’ rights), but I assume they exist as a minority of a minority. But if they can talk, most people are convinced. (This does not apply to sign language, for some reason.)

• Does that answer your question?

I’m not sure.

What I get from your answer is that you believe there exist people who support killing children if they’re young enough, though you haven’t talked to any of them about the parameters of that support, and you infer from that position that they value young children less than they ought to, which is what you meant by considering young children “worthless” in the first place.

That is, as I currently understand you, your original sentiment can be rephrased “If you value small children less than you ought to, you have followed simplicity off a cliff,” and you believe Eliezer values small children less than he ought to, or at the very least has made arguments from which one could infer that, and that other unnamed people do too.

Have I understood you correctly?

• Pretty much.

There are some moral theories that sound simple and reasonable in the abstract (“maximize happiness”, for example) but in reality do not encompass the full range of human value. There are two possible responses to this: you can either examine the evidence and conclude you missed something, or you can decide your theory is self-evidently true and everyone else must be biased, and bite the bullet.

Of course, everyone sometimes is biased, and some bullets should be bitten. But when you start advocating forcible wireheading (or eating babies) you should at least reexamine the evidence.

Eliezer may be right. But I predict he hasn’t examined binary personhood… ever? Recently, at any rate.

• OK.

With respect to Eliezer in particular, it would greatly surprise me if your disagreement with him was actually about complexity of value, as you seem to suggest here, or about unexamined notions of binary personhood. That said, my preference is to let you have your argument with him with him, rather than trying to have your argument with him with me.

With respect to your general point, I’m all in favor of re-examining evidence when it leads me to unexpected conclusions. But as you say, some bullets should be bitten… sometimes it turns out that habitual beliefs are unjustified, and re-examining evidence leads me to reject them with greater confidence.

For my own part, I probably value human infants less than you think I ought to… though it’s hard to be sure, since I’m not exactly sure where you draw the line.

Just to put a line in the sand for calibration: for at least 99.99999% of children aged 2 years or younger, and a randomly chosen adult, I would easily endorse killing any 10 of the former to save the latter (probably larger numbers as well, but with more difficulty), and I don’t think I’ve walked off any cliffs in the process.

• Oh, I daresay I value infants more than most people think I ought to. That’s the problem with consistency :(

Still, I think it’s fair to say that binary personhood has a problem with the fact that most people seem to care about things on a sliding scale, and it’s probably not just bias.

Anyway, it seems like this point has been quite thoroughly clarified...

• Brecht wrote a play based on the Solomon story where the birth mother only wants the child because she can’t inherit without him. The judge has a circle of chalk drawn and says the two women are to simultaneously try to pull the child from it; if they tear him in half, they will each get their part. The adoptive mother lets go, and he deems her the true mother.

• It suddenly occurs to me that the first woman is the right choice for raising the child, regardless of who the birth mother is.

Indeed; I strongly suspect Solomon had that in mind, but I wanted to keep the post as short as possible.

I wonder if Solomon had plans in mind if both women had said the same thing.

Quite possibly. I also wonder if it would depend on what they both said: if both volunteered to retract their claim, then, as wedrifid suggests, lots were commonly used to show the will of God. If both reacted spitefully, then...

• Indeed; what kind of person answers like the second mother? (Well, there’s three millennia’s worth of mindware gap between me and her, but still...)

• You’re familiar with the empirical work on ultimatum games, right? It is common for people to prefer to get nothing equitably than to accept an inequitable split where they are worse off.

• what kind of person answers like the second mother?

One who was invented for the purpose of the story.

• Well, yeah, but… For readers to think “wow, that Solomon guy was so wise!” rather than “that’s supposed to be a joke, right?”, the characters would have to have at least some amount of plausibility in their cultural context. (Then again, the Bible wasn’t the place where one’d expect to find jokes in the first place.)

• (Then again, the Bible wasn’t the place where one’d expect to find jokes in the first place.)

Perhaps not long-form narrative jokes, but the Bible is actually loaded with humorous wordplay (puns, double entendre, etc.). Unfortunately, pretty much all of it requires a pretty decent understanding of biblical Hebrew. I often wonder if biblical literalists would take such a hard line if they realized the writer was often writing for wordplay as much as for conveying a message.

• As I said elsewhere, these sorts of stories (Old Testament Chuck Norris stories!) aren’t about humor. It’s “yay Solomon!”

• Well, yeah, but… For readers to think “wow, that Solomon guy was so wise!” rather than “that’s supposed to be a joke, right?”, the characters would have to have at least some amount of plausibility in their cultural context.

Like the plausibility of these stories?

It’s a story about Solomon’s wisdom. Whether it actually happened is not really the point.

• The Solomon story has always bugged me as being the sort of thing a not-wise person would come up with as an example of wisdom. There are too many ways it could have gone wrong.

I have my own preferred take on the story, and what else that sort of solution might imply. In that version, it ends with

And because he was the king, beheld by his subjects with awe and terror, the women did not protest his judgment.

And nobody ever bothered the king with domestic disputes again.

• I think it is remarkable how obviously childish the style of the “bible” quotes is when compared to the deliberately arcane “wording” of the OP.

I agree with you; I also fail to see any level of sophistication in the Bible. If anything it is at the same level of “Go god Go”. (Must add a disclaimer here: English is not my native language, so if I say something stupid it is because I am Mexican.)

• I often notice how people use arguments that fail to distinguish the hypotheses under discussion. For example, someone gives an argument that favors their hypothesis, but it also happens to favor the opponent’s hypothesis to about the same degree. Interpreting arguments in terms of the likelihood ratio they provide seems like an easy-to-use heuristic that fixes such errors.
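In odds form, this heuristic is a single multiplication: posterior odds = prior odds × likelihood ratio. An argument that is about equally expected under both hypotheses has a likelihood ratio near 1 and so should move your beliefs essentially not at all. A minimal sketch, with numbers invented purely for illustration:

```python
def update_odds(prior_odds, p_e_given_h, p_e_given_not_h):
    """Bayes' Rule in odds form: posterior odds = prior odds * likelihood ratio."""
    likelihood_ratio = p_e_given_h / p_e_given_not_h
    return prior_odds * likelihood_ratio

# An argument equally likely under both hypotheses (LR = 1) does not
# distinguish them: the odds are unchanged.
print(update_odds(2.0, 0.8, 0.8))  # → 2.0

# An argument twice as likely under H as under ~H actually moves the odds.
print(update_odds(2.0, 0.8, 0.4))  # → 4.0
```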

• I often notice how people use arguments that fail to distinguish the hypotheses under discussion. For example, someone gives an argument that favors their hypothesis, but it also happens to favor the opponent’s hypothesis to about the same degree.

Do you have any examples to share? (Not that I don’t believe you. People routinely use arguments that support the opposite position to the one they intend. Arguments that support both equally are bound to occur in between...)

• Sorry, I have a bad memory for details of this sort and only remember the abstract observation, which is recurrent enough that I have a cached phrase to identify and point out such situations (“This doesn’t distinguish the alternatives!”). I could make up some examples, but I don’t think that’s useful for clarification in this case, and it won’t provide further evidence for the existence of the issue.

• Since the elements of the empty set satisfy arbitrary properties, all the examples you provided are technically evidence in favor of your observation. Also, against it.

• Heh <3

It’s hard to find this kind of humor anywhere else than LW and XKCD.

• Actually, SMBC comics tends to be better than either.

• If the woman who lost hadn’t been so comprehensively messed up in the head, you would’ve had an example in the OP. I wonder if there was a similar test more likely to succeed.

• I have a theory that everyone does this, and it’s a way for our brains to save space somehow. Just keep track of the rate at which things tend to occur instead of recording and cataloging every experience.

• The analogy seems a bit tortuous… Bayes wasn’t needed to understand the story, and seeing the story in the light of Bayes doesn’t seem to add any new understanding; at least, in my opinion.

• What other stories do you know that show this sort of qualitative Bayesian thinking?

I strongly suspect there are other stories where this sort of interpretation seems natural, but as the memory of this story and its interpretation floated into my memory unbidden, I am not sure where to look for others.

• The Boy Who Cried Wolf is a pretty good example of updating on new information, I guess.

But it seems sort of pointless to attempt to find old stories that show the superiority of a supposedly new way of thinking. If the way of thinking is so new, then why should we expect to find stories about it? And if we do, what does that say about the superiority of the method (that is, that it was known N years ago but didn’t take over the world)? Perhaps this is too cynical?

• The Boy Who Cried Wolf is a pretty good example of updating on new information, I guess.

Agreed, but the primary lesson of that story is “guard your reputation if you want to be believed.” The reverse story (“don’t waste your time on liars”) probably shouldn’t end with there actually being a wolf, as one should not expect listeners to understand the sometimes subtle separation between good decision-making and good consequences.

But it seems sort of pointless to attempt to find old stories that show the superiority of a supposedly new way of thinking.

New stories are useful too.

I also wouldn’t call rationality a new way of thinking, any more than I would call science a new way of thinking. Both are active fields of research and development. Both have transformative milestones, such that you might want to call science before X ‘protoscience’ instead of ‘science’, but only in the same way that modern science is ‘protoscience’ because Y hasn’t happened yet.

It’s also worth noting that the research and development often makes old ideas more precise. People ran empirical tests before they knew what empiricism was. Similarly, we should expect to see people acting cleverly before a systematic way to act cleverly was developed.

And if we do, what does that say about the superiority of the method (that is, that it was known N years ago but didn’t take over the world)?

A meme’s reproductive success and its desirability for its host can differ significantly.

• The reverse story (“don’t waste your time on liars”) probably shouldn’t end with there actually being a wolf, as one should not expect listeners to understand the sometimes subtle separation between good decision-making and good consequences.

The lesson of the story (for the townspeople) is that when your test (the boy) turns out to be unreliable, you should devise a new test (replace him with somebody who doesn’t lie).

• If the way of thinking is so new, then why should we expect to find stories about it?

To quote from the guy this story was about, “there is nothing new under the sun”. At least nothing directly related to our wetware. So we should expect that every now and then people stumbled upon a “good way of thinking”, and when they did, the results were good. They might just not manage to identify what exactly made the method good, and to replicate it.

Also, as MaoShan said, this is kind of Proto-Bayes 101 thinking. What we now have is this, but systematically improved over many iterations.

(that is, that it was known N years ago but didn’t take over the world)?

“Taking over the world” is a complex mix of effectiveness, popularity, luck and cultural factors. You can see this a lot in the domain of programming languages. With ways of thinking it is even more difficult, because, as opposed to programming languages, most people don’t learn them explicitly and don’t evaluate them based on results/“features”.

• No, as you can see by the amount of objections, you are not too cynical. It’s closer to a sort of Proto-Bayes; stories like this show that that kind of thinking can turn out wise solutions. Bayesian thinking as it is understood now is more refined.

• My brother had a swing dance unit in middle school and he said everyone he talked to was whining and saying it was going to be awful. I asked him if he thought everyone actually believed it was going to be awful or if they were just saying that because they thought it would be uncool to be reasonable and not whiny. We hypothesized that people in the uncool camp would be more likely to make fun of him if he said that he didn’t think swing dancing was going to be that bad. Also maybe they’d be less likely to be convinced, because normally people who think something’s going to be awful accept reassurance that it’s not.

I think we don’t have results yet. Also his “everyone” is probably only ~10 people.

• Solomon wasn’t actually using Bayes here.

The prior here (A has stolen B’s baby) is actually quite low. It just doesn’t happen very often. Of course, Solomon actually has to consider some extra evidence (B has accused A of stealing her baby). Solomon (by your account) doesn’t consider these things at all.

Solomon’s analysis only considered the likelihood given a single test.

• The prior here (A has stolen B’s baby) is actually quite low.

Irrelevant, because it is certain that one of them attempted to steal the other’s baby: the question is whether it was by a midnight baby-swap, or by bearing false witness. What’s your prior for the likelihood of attempting by each method, conditioned on an attempt having been made? (Note that it could even be a conjunction: when the baby-swap fails, rush to the court and claim that she attempted a baby-swap!)

• It could also be an error; maybe B was so blinded by grief that she refused to believe that her own baby had died. (But again, not really the point; the point is that the article has nothing whatsoever to do with Bayes.)

• It could also be an error; maybe B was so blinded by grief that she refused to believe that her own baby had died.

I’m wrapping that into ‘false witness,’ since intentions aren’t particularly important to the truth of the events.

the point is that the article has nothing whatsoever to do with Bayes

Would you care to expand on this point? The story obviously predates Bayes, and so doesn’t use any of the terminology or explicitly show the process, but it seems to me like a good example of when and how Bayesian thinking would be useful, and if I’m missing something it would probably be rather useful to know.

• the point is that the article has nothing whatsoever to do with Bayes

Would you care to expand on this point?

Bayes goes like this: P(H|E) = P(E|H)*P(H)/P(E). Here, Solomon considers P(E|H) (and P(E|~H)), but he doesn’t consider P(H) at all. In short, he could easily be a frequentist, use the same method, and come to the same conclusion.

• I interpreted it as:

P(First Woman)=.5; P(Second Woman)=1-P(First Woman)=.5.

(This is a simplifying assumption, since those aren’t actually exhaustive and mutually exclusive.)

He then decides to test Reaction, since he expects that P(Reaction|First Woman) and P(Reaction|~First Woman) are significantly different. The test works, and then he calculates P(First Woman|Reaction) easily.
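Filling in hypothetical numbers for that reading (the likelihoods are invented for illustration; nothing in the story pins them down): say P(Reaction|First Woman) = 0.9 and P(Reaction|~First Woman) = 0.1. Then the calculation is one line:

```python
def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """P(H|E) = P(E|H)*P(H)/P(E), with P(E) expanded by total probability."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

# Hypothetical numbers: a 50/50 prior between the two women, and a test
# whose reaction is nine times as likely from the true mother as from
# the impostor.
print(posterior(0.5, 0.9, 0.1))  # → 0.9
```

With these made-up numbers the posterior comes out to 0.9; the point is only that the test’s high likelihood ratio, not the algebra, does the work.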

In short, he could easily be a frequentist and use the same method and come to the same conclusion.

I don’t see the algebra of Bayes as particularly important. Most people shouldn’t trust themselves to do algebra correctly without a calculator when important things are on the line, and many practical applications require Bayes nets that are large enough that it is wise to seek computer assistance in navigating them.

To the extent that there is a difference between Bayesians and Frequentists, it’s a disagreement about interpretations, not math. It’s not like Frequentists disagree with P(H|E) = P(E|H)*P(H)/P(E), or have sworn not to use it!

Part of what I want to do with this post (and any other stories that people can find) is to highlight the qualitative side of Bayes. Someone who understands the algebra but doesn’t notice when their life presents them with opportunities to use it is not getting as much out of Bayes as they could.

What I would call the three main components of Bayes are explicitly considering hypotheses, explicitly searching for tests with high likelihood ratios (rather than just high likelihoods), and explicitly incorporating prior information. I’m content with examples that show off only one of those components.

• To the extent that there is a difference between Bayesians and Frequentists, it’s a disagreement about interpretations, not math. It’s not like Frequentists disagree with P(H|E) = P(E|H)*P(H)/P(E), or have sworn not to use it!

There are at least two meanings to the Bayesian/frequentist debate; one is a disagreement about methods (or at least a different set of tools), and the other is a disagreement about the deeper meaning of probability. This is an article about methods, not meaning. The major difference is that Bayesian methods make the prior explicit. The p-value is, perhaps, the quintessential frequentist statistic. Here, we can easily imagine Solomon publishing his paper in the Ancient Journal of Statistical Law and citing a p-value < 0.001, but without knowing the actual P(defendant), we don’t know how many times he made the correct decision (in terms of the facts; as noted in another thread, from a child-welfare perspective, the decision was probably correct regardless).

What I would call the three main components of Bayes are explicitly considering hypotheses, explicitly searching for tests with high likelihood ratios, rather than just high likelihoods, and explicitly incorporating prior information. I’m content with examples that show off only one of those components.

Frequentists also care about high likelihood ratios.
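The base-rate point about P(defendant) can be made concrete: hold the likelihoods fixed (the same “p < 0.001”-style test) and the posterior still swings with the prior. A minimal sketch with numbers invented purely for illustration:

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H|E) by Bayes' Rule, expanding P(E) over H and ~H."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Identical evidence each time: P(E|H) = 0.99, P(E|~H) = 0.001.
# What the evidence means still depends on the prior.
for prior in (0.5, 0.01):
    print(prior, round(posterior(prior, 0.99, 0.001), 3))
```

With a 50/50 prior the posterior is about 0.999; with a 1% prior, the same evidence yields only about 0.909.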

• The prior here (A has stolen B’s baby) is actually quite low. It just doesn’t happen very often.

I know I’m nitpicking, but is the prior really that low in Solomon’s case? In our modern times, things like that almost never happen, but Solomon was living in Old Testament times (metaphorically speaking, seeing as the Solomon we’re talking about here is just a character in a book). And the Old Testament makes few legal distinctions between children and other kinds of property. Stealing them would still be a big deal, but hardly improbable.

• OK, then maybe the prior is high. So what? The point is that Solomon didn’t consider it. I’m not saying his test was useless or his decision was wrong. I’m saying that the word “Bayes” is being used as an applause light rather than for its meaning!

• Yeah, that’s why I said I was merely nitpicking.

• What if they had both two-boxed? Solomega has to keep his credibility…

• Bayes is what the territory feels like from the inside.