Rationality of sometimes missing the point of the stated question, and of a certain type of defensive reasoning

Imagine that you are being asked a question: a moral question involving an imaginary world. From prior experience, you have learned that people behave in a certain way: people are, for the most part, applied thinkers, and whatever your answer is, it will become a cached thought that will be applied in the real world, should the situation arise. The whole rationale behind thinking about imaginary worlds may be to create cached thoughts.

Your answer probably won't stay segregated in the well-defined imaginary world for any longer than it takes the person who asked the question to change the topic; it is the real-world consequences you should be most concerned about.

Given this, would it not be rational to perhaps miss the point, and answer that sort of question in a real-world way?

To give a specific example, consider this question from The Least Convenient Possible World:

You are a doctor in a small rural hospital. You have ten patients, each of whom is dying for the lack of a separate organ; that is, one person needs a heart transplant, another needs a lung transplant, another needs a kidney transplant, and so on. A traveller walks into the hospital, mentioning how he has no family and no one knows that he's there. All of his organs seem healthy. You realize that by killing this traveller and distributing his organs among your patients, you could save ten lives. Would this be moral or not?

First of all, note that the question is not the abstract "If [you were absolutely certain that] the only way to save 10 innocent people was to kill 1 innocent person, would it be moral to kill?". There are a lot of details. We are even told that this one person is a traveller; I am not exactly sure why, but I suspect it references kin-selection-related instincts: the traveller has lower utility to the village than a resident.

In light of how people process answers to such detailed questions, and how the answers are incorporated into thought patterns that might end up used in the real world, is it not in fact most rational not to address that kind of question exactly as specified, but instead to point out that one of the patients could be taken apart for the good of the other nine? And to point out the poor quality of life and life expectancy of the surviving patients?

Indeed, as a solution one could gather all the patients and let them discuss how to solve the problem; perhaps one will decide to be terminated, perhaps they will decide to draw straws, perhaps only those with the worst prognoses will draw straws. If they are comatose, a panel of twelve peers could make the decision. There could easily be trillions of possible solutions to this not-so-abstract problem, and "trillions" is not a figure of speech here. Privileging one solution is similar to privileging a hypothesis.

In this example, the utility of any villager to the doctor can be higher than that of the traveller, who will never return; hence the doctor opts to take the traveller apart for spare parts instead of picking one of the patients, by some cost-benefit metric, and sacrificing that patient for the good of the others. The choice we are asked about turns out to be just one of the options, chosen selfishly; it is the deep selfishness of the doctor that makes him realize that killing the traveller may be justified, but not realize the same about one of the patients, for that selfishness biased his thought towards exploring one line of reasoning but not the other.

Of course one can say that I missed the point, and one can employ backward reasoning and tweak the example by stating that those people are aliens, and that the traveller is totally histocompatible with each patient while none of the patients are compatible with each other (that's how alien immune systems work: there are some rare mutant aliens whose tissues are not rejected at all by any other).

But to do so would be to miss entirely the point of why we should expend mental effort searching for alternative solutions. Yes, it is defensive thinking; but what does it defend us from? In this case, it defends us from making a decision based on incomplete reasoning or a faulty model. All real-world decisions are, too, made in imaginary worlds: in what we imagine the world to be.

Morality requires a sort of 'due process': the good-faith reasoning effort to find the best solution rather than the first solution that the selfish subroutines conveniently present for consideration; to probe the models for faults; to try to think outside the highly abbreviated version of the real world one might initially construct when considering the circumstances.

The imaginary-world situation here is just an example; and so the answer is an example of the reasoning that should be applied to such situations: reasoning that strives to explore the solution space and test the model for accuracy.

Something else, tangential to the main point of this article: if I had ten differently broken cars and one working one, I wouldn't even think of taking apart the working one for spare parts; I'd take apart one of the broken ones. The same would apply to, e.g., having eleven children, one healthy and ten in need of replacements of different organs. The option one would think of is to take the one least likely to survive and sacrifice it for the other nine; no one in their right mind would even consider taking apart the healthy one unless there were very compelling prior reasons. This seems to be something we would only contemplate for a stranger. There may be hidden kin-selection-based cognitive biases that affect our moral reasoning.

edit: I don't know if it is OK to be editing published articles, but I am a bit of an obsessive-compulsive perfectionist and I plan on improving this for publication on LessWrong (edit: I mean, not LessWrong Discussion), so I am going to take the liberty of improving some of the points, and perhaps also removing duplicate argumentation and cutting down the verbosity.