Transhumanism as Simplified Humanism

This essay was originally posted in 2007.


Frank Sulloway once said: “Ninety-nine per cent of what Darwinian theory says about human behavior is so obviously true that we don’t give Darwin credit for it. Ironically, psychoanalysis has it over Darwinism precisely because its predictions are so outlandish and its explanations are so counterintuitive that we think, Is that really true? How radical! Freud’s ideas are so intriguing that people are willing to pay for them, while one of the great disadvantages of Darwinism is that we feel we know it already, because, in a sense, we do.”

Suppose you find an unconscious six-year-old girl lying on the train tracks of an active railroad. What, morally speaking, ought you to do in this situation? Would it be better to leave her there to get run over, or to try to save her? How about if a 45-year-old man has a debilitating but nonfatal illness that will severely reduce his quality of life – is it better to cure him, or not cure him?

Oh, and by the way: This is not a trick question.

I answer that I would save them if I had the power to do so – both the six-year-old on the train tracks, and the sick 45-year-old. The obvious answer isn’t always the best choice, but sometimes it is.

I won’t be lauded as a brilliant ethicist for my judgments in these two ethical dilemmas. My answers are not surprising enough that people would pay me for them. If you go around proclaiming “What does two plus two equal? Four!” you will not gain a reputation as a deep thinker. But it is still the correct answer.

If a young child falls on the train tracks, it is good to save them, and if a 45-year-old suffers from a debilitating disease, it is good to cure them. If you have a logical turn of mind, you are bound to ask whether this is a special case of a general ethical principle which says “Life is good, death is bad; health is good, sickness is bad.” If so – and here we enter into controversial territory – we can follow this general principle to a surprising new conclusion: If a 95-year-old is threatened by death from old age, it would be good to drag them from those train tracks, if possible. And if a 120-year-old is starting to feel slightly sickly, it would be good to restore them to full vigor, if possible. With current technology it is not possible. But if the technology became available in some future year – given sufficiently advanced medical nanotechnology, or such other contrivances as future minds may devise – would you judge it a good thing, to save that life, and stay that debility?

The important thing to remember, which I think all too many people forget, is that it is not a trick question.

Transhumanism is simpler – requires fewer bits to specify – because it has no special cases. If you believe professional bioethicists (people who get paid to explain ethical judgments) then the rule “Life is good, death is bad; health is good, sickness is bad” holds only until some critical age, and then flips polarity. Why should it flip? Why not just keep on with life-is-good? It would seem that it is good to save a six-year-old girl, but bad to extend the life and health of a 150-year-old. Then at what exact age does the term in the utility function go from positive to negative? Why?
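The “fewer bits to specify” point can be made concrete with a toy sketch (illustrative only; the essay contains no code, and the critical-age constant is a hypothetical stand-in for the bioethicists’ unstated threshold):

```python
def value_of_saving(age: int) -> int:
    """Transhumanist rule: saving a life is good at any age. No special cases."""
    return +1  # the age argument is never even inspected

CRITICAL_AGE = 150  # hypothetical threshold; where would this number come from?

def value_of_saving_with_flip(age: int) -> int:
    """Rule with a special case: polarity flips past some critical age."""
    return +1 if age < CRITICAL_AGE else -1

# The first rule carries no threshold whose exact value demands justification;
# the second must specify one extra arbitrary constant and defend it.
assert value_of_saving(6) == value_of_saving(95) == +1
assert value_of_saving_with_flip(6) == +1
assert value_of_saving_with_flip(200) == -1
```

The extra constant is exactly the inelegance the essay complains about: the simpler rule gives the same answer for the six-year-old and the 95-year-old without ever checking an age.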

As far as a transhumanist is concerned, if you see someone in danger of dying, you should save them; if you can improve someone’s health, you should. There, you’re done. No special cases. You don’t have to ask anyone’s age.

You also don’t ask whether the remedy will involve only “primitive” technologies (like a stretcher to lift the six-year-old off the railroad tracks); or technologies invented less than a hundred years ago (like penicillin) which nonetheless seem ordinary because they were around when you were a kid; or technologies that seem scary and sexy and futuristic (like gene therapy) because they were invented after you turned 18; or technologies that seem absurd and implausible and sacrilegious (like nanotech) because they haven’t been invented yet. Your ethical dilemma report form doesn’t have a line where you write down the invention year of the technology. Can you save lives? Yes? Okay, go ahead. There, you’re done.

Suppose a boy of 9 years, who has tested at IQ 120 on the Wechsler-Bellevue, is threatened by a lead-heavy environment or a brain disease which will, if unchecked, gradually reduce his IQ to 110. I reply that it is a good thing to save him from this threat. If you have a logical turn of mind, you are bound to ask whether this is a special case of a general ethical principle saying that intelligence is precious. Now the boy’s sister, as it happens, currently has an IQ of 110. If the technology were available to gradually raise her IQ to 120, without negative side effects, would you judge it good to do so?

Well, of course. Why not? It’s not a trick question. Either it’s better to have an IQ of 110 than 120, in which case we should strive to decrease IQs of 120 to 110. Or it’s better to have an IQ of 120 than 110, in which case we should raise the sister’s IQ if possible. As far as I can see, the obvious answer is the correct one.

But – you ask – where does it end? It may seem well and good to talk about extending life and health out to 150 years – but what about 200 years, or 300 years, or 500 years, or more? What about when – in the course of properly integrating all these new life experiences and expanding one’s mind accordingly over time – the equivalent of IQ must go to 140, or 180, or beyond human ranges?

Where does it end? It doesn’t. Why should it? Life is good, health is good, beauty and happiness and fun and laughter and challenge and learning are good. This does not change for arbitrarily large amounts of life and beauty. If there were an upper bound, it would be a special case, and that would be inelegant.

Ultimate physical limits may or may not permit a lifespan of at least length X for some X – just as the medical technology of a particular century may or may not permit it. But physical limitations are questions of simple fact, to be settled strictly by experiment. Transhumanism, as a moral philosophy, deals only with the question of whether a healthy lifespan of length X is desirable if it is physically possible. Transhumanism answers yes for all X. Because, you see, it’s not a trick question.

So that is “transhumanism” – loving life without special exceptions and without upper bound.

Can transhumanism really be that simple? Doesn’t that make the philosophy trivial, if it has no extra ingredients, just common sense? Yes, in the same way that the scientific method is nothing but common sense.

Then why have a complicated special name like “transhumanism”? For the same reason that “scientific method” or “secular humanism” have complicated special names. If you take common sense and rigorously apply it, through multiple inferential steps, to areas outside everyday experience, successfully avoiding many possible distractions and tempting mistakes along the way, then it often ends up as a minority position and people give it a special name.

But a moral philosophy should not have special ingredients. The purpose of a moral philosophy is not to look delightfully strange and counterintuitive, or to provide employment to bioethicists. The purpose is to guide our choices toward life, health, beauty, happiness, fun, laughter, challenge, and learning. If the judgments are simple, that is no black mark against them – morality doesn’t always have to be complicated.

There is nothing in transhumanism but the same common sense that underlies standard humanism, rigorously applied to cases outside our modern-day experience. A million-year lifespan? If it’s possible, why not? The prospect may seem very foreign and strange, relative to our current everyday experience. It may create a sensation of future shock. And yet – is life a bad thing?

Could the moral question really be just that simple?

Yes.