(Moral) Truth in Fiction?

A comment by Anonymous on Three Worlds Collide:

After reading this story I feel myself agreeing with Eliezer more on his views and that seems to be a sign of manipulation and not of rationality.

Philosophy expressed in form of fiction seems to have a very strong effect on people—even if the fiction isn’t very good (ref. Ayn Rand).

Robin has similar qualms:

Since people are inconsistent but reluctant to admit that fact, their moral beliefs can be influenced by which moral dilemmas they consider in what order, especially when written by a good writer. I expect Eliezer chose his dilemmas in order to move readers toward his preferred moral beliefs, but why should I expect those are better moral beliefs than those of all the other authors of fictional moral dilemmas?

If I’m going to read a literature that might influence my moral beliefs, I’d rather read professional philosophers and other academics making more explicit arguments.

I replied that I had taken considerable pains to set out the explicit arguments before daring to publish the story. And moreover, I had gone to considerable lengths to present the Superhappy argument in the best possible light. (The opposing viewpoint is the counterpart of the villain; you want it to look as reasonable as possible for purposes of dramatic conflict, the same principle whereby Frodo confronts the Dark Lord Sauron rather than a cockroach.)

Robin didn’t find this convincing:

I don’t think readers should much let down their guard against communication modes where sneaky persuasion is more feasible simply because the author has made some more explicit arguments elsewhere… Academic philosophy offers exemplary formats and styles for low-sneak ways to argue about values.

I think that this understates the power and utility of fiction. I once read a book called something like “How to Read” (no, not “How to Read a Book”) which said that nonfiction was about communicating knowledge, while fiction was about communicating experience.

If I want to communicate something about the experience of being a rationalist, I can best do it by writing a short story with a rationalist character. Not only would identical abstract statements about proper responses have less impact, they wouldn’t even communicate the same thought.

From The Failures of Eld Science:

“...Work expands to fill the time allotted, as the saying goes. But people can think important thoughts in far less than thirty years, if they expect speed of themselves.” Jeffreyssai suddenly slammed down a hand on the arm of Brennan’s chair. “How long do you have to dodge a thrown knife?”

“Very little time, sensei!”

“Less than a second! Two opponents are attacking you! How long do you have to guess who’s more dangerous?”

“Less than a second, sensei!”

“The two opponents have split up and are attacking two of your girlfriends! How long do you have to decide which one you truly love?”

“Less than a second, sensei!”

“A new argument shows your precious theory is flawed! How long does it take you to change your mind?”

“Less than a second, sensei!”

“WRONG! DON’T GIVE ME THE WRONG ANSWER JUST BECAUSE IT FITS A CONVENIENT PATTERN AND I SEEM TO EXPECT IT OF YOU! How long does it really take, Brennan?”

Sweat was forming on Brennan’s back, but he stopped and actually thought about it -

“ANSWER, BRENNAN!”

“No sensei! I’m not finished thinking sensei! An answer would be premature! Sensei!”

“Very good! Continue! But don’t take thirty years!”

This is an experience about how to avoid completing the pattern when the pattern happens to be blatantly wrong, and how to think quickly without thinking too quickly.

Forget the question of whether you can write the equivalent abstract argument that communicates the same thought in less space. Can you do it at all? Is there any series of abstract arguments that creates the same learning experience in the reader? Entering a series of believed propositions into your belief pool is not the same as feeling yourself in someone else’s shoes, and reacting to the experience, and forming an experiential skill-memory of how to do it next time.

And it seems to me that to communicate experience is a valid form of moral argument as well.

Uncle Tom’s Cabin was not just a historically powerful argument against slavery; it was a valid argument against slavery. If human beings were constructed without mirror neurons, if we didn’t hurt when we saw a nonenemy hurting, then we would exist in the reference frame of a different morality, and we would decide what to do by asking a different question, “What *should* we do?” Without that ability to sympathize, we might think that it was perfectly *all right* to keep slaves. (See Inseparably Right and No License To Be Human.)

Putting someone into the shoes of a slave and letting their mirror neurons feel the suffering of a husband separated from a wife, a mother separated from a child, a man whipped for refusing to whip a fellow slave—it’s not just persuasive, it’s valid. It fires the mirror neurons that physically implement that part of our moral frame.

I’m sure many have turned against slavery without reading Uncle Tom’s Cabin—maybe even due to purely abstract arguments, without ever seeing the carving “Am I Not a Man and a Brother?” But for some people, or for a not-much-different intelligent species, reading Uncle Tom’s Cabin might be the only argument that could turn you against slavery. Any amount of abstract argument that didn’t fire the experiential mirror neurons would not activate the part of your implicit should-function that disliked slavery. You would just seem to be making a good profit on something you owned.

Can fiction be abused? Of course. Suppose that blacks had no subjective experiences. Then Uncle Tom’s Cabin would have been a lie in a deeper sense than being fictional, and anyone moved by it would have been deceived.

Or to give a more subtle case not involving a direct “lie” of this sort: On the SL4 mailing list, Stuart Armstrong posted an argument against TORTURE in the infamous Torture vs. Dust Specks debate, consisting of a short story describing the fate of the person to be tortured. My reply was that the appropriate counterargument would be 3^^^3 stories about someone getting a dust speck in their eye. I actually did try to send a long message consisting only of

DUST SPECK
DUST SPECK
DUST SPECK
DUST SPECK
DUST SPECK
DUST SPECK

for a thousand lines or so, but the mailing software stopped it. (Ideally, I should have created a webpage using Javascript and bignums that, if run on a sufficiently large computer, would print out exactly 3^^^3 copies of a brief story about someone getting a dust speck in their eye. It probably would have been the world’s longest finite webpage. Alas, I lack time for many of my good ideas.)
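As an aside, the “3^^^3” here is Knuth’s up-arrow notation, and the recursion behind it is simple enough to sketch in a few lines. This is my own illustrative sketch, in Python rather than the Javascript-and-bignums the parenthetical imagines, purely because arbitrary-precision integers come free there; the function name is mine:

```python
def up(a, n, b):
    """Knuth's up-arrow a ^...^ b with n arrows.

    n == 1 is ordinary exponentiation; more arrows iterate the
    previous operation: a ^^ b is a tower of b copies of a, etc.
    """
    if n == 1:
        return a ** b
    if b == 1:
        return a
    return up(a, n - 1, up(a, n, b - 1))

# 3^^3 = 3^(3^3) = 3^27 is still small enough to compute directly:
print(up(3, 2, 3))  # 7625597484987

# 3^^^3 = 3^^(3^^3): a power tower of 3s about 7.6 trillion levels
# high. No physical computer could print that many dust-speck stories,
# which is exactly the point of the example.
```

The bottleneck is not the recursion but the size of the result: past the first couple of steps the numbers stop fitting in the observable universe, so any real “3^^^3 dust specks” webpage could only ever describe the number, not enumerate it.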

Then there’s the sort of standard polemic used in e.g. Atlas Shrugged (as well as many less famous pieces of science fiction), in which Your Beliefs are put into the minds of strong, empowered, noble heroes, and the Opposing Beliefs are put into the mouths of evil and contemptible villains, and then the consequences of Their Way are depicted as uniformly disastrous while Your Way offers butterflies and apple pie. That’s not even subtle, but it works on people predisposed to hear the message.

But to entirely turn your back on fiction is, I think, taking it too far. Abstract argument can be abused too. In fact, I would say that abstract argument is if anything easier to abuse, because it has more degrees of freedom. Which is easier: to say “Slavery is good for the slave,” or to write a believable story about slavery benefiting the slave? You can do both, but the second is at least more difficult; your brain is more likely to notice the non sequiturs when they’re played out as a written experience.

Stories may not get us completely into Near mode, but they get us closer to Near mode than abstract argument does. If it’s words on paper, you can end up believing that you ought to do just about anything. If you’re in the shoes of a character encountering the experience, your reactions may be harder to twist.

Contrast a verbal argument against the verbal belief that “non-Catholics go to Hell”; versus reading a story about a good and decent person, who happens to be a Protestant and dies trying to save a child’s life, and who is then condemned to hell and has molten lead poured down her throat; versus the South Park episode where a crowd of newly dead souls is at the entrance to hell, and the Devil says, “Sorry, it was the Mormons,” and everyone goes “Awwwww...”

Yes, abstraction done right can keep you going where concrete visualization breaks down—the torture vs. dust specks thing being an archetypal example; you can’t actually visualize that many dust specks, but if you try to choose SPECKS you’ll end up with circular preferences. But so far as I can organize my metaethics, the ground level of morality lies in our preferences over particular, concrete situations—and when these can be comprehended as concrete images at all, it’s best to visualize them as concretely as possible. Unless we know specifically where the concrete image is going wrong, and have to apply an abstract correction. The moral abstraction is built on top of the ground level.

I am also, of course, worried about the idea that stories aren’t “respectable” because they don’t look sufficiently solemn and dull; or the idea that something isn’t “respectable” if it can be understood by a mere popular audience. Yes, there are technical fields that are genuinely impossible to explain to your grandmother in an hour; but ceteris paribus, people who can write at a more popular level without distorting technical reality are performing a huge service to that field. I’ve heard that Carl Sagan was held in some disrepute by his peers for the crime of speaking to the general public. If true, this is merely stupid.

Explaining things is hard. Explainers need every tool they can get their hands on—as a matter of public interest.

And in moral philosophy—well, I suppose it could be the case that moral philosophers have discovered moral truths that are deductive consequences of most humans’ moral frames, but which are so difficult and technical that they simply can’t be explained to a popular audience within a one-hour lecture. But it would be a tad more suspicious than the corresponding case in, say, physics.

I realize that I speak as someone who does a lot of popularizing, but even so—fiction ought to be a respectable form of moral argument. And a respectable way of communicating experiences, in particular the experience of applying certain types of thinking skills.

I’ve always been of two minds about publishing longer fiction pieces about the future and its consequences. Not so much because of the potential for abuse, but because even when not abused, fiction can still bypass critical faculties and end up poured directly into the brains of at least some readers. Telling people about the logical fallacy of generalization from fictional evidence doesn’t make it go away; people may just go on generalizing from the story as though they had actually seen it happen. And you simply can’t have a story that’s a rational projection; it’s not just a matter of plot, it’s a matter of the story needing to be specific, rather than depicting a state of epistemic uncertainty.

But to make shorter philosophical points? Sure.

And… oh, what the hell. Just on the off-chance, are there any OB readers who could get a good movie made? Either outside Hollywood, or able to bypass the usual dumbing-down process that creates a money-losing flop? The probabilities are infinitesimal, I know, but I thought I’d check.