Should ethicists be inside or outside a profession?

Originally written in 2007.

Marvin Minsky in an interview with Danielle Egan for New Scientist:

Minsky: The reason we have politicians is to prevent bad things from happening. It doesn’t make sense to ask a scientist to worry about the bad effects of their discoveries, because they’re no better at that than anyone else. Scientists are not particularly good at social policy.
Egan: But shouldn’t they have an ethical responsibility for their inventions?
Minsky: No, they shouldn’t have an ethical responsibility for their inventions. They should be able to do what they want. You shouldn’t have to ask them to have the same values as other people. Because then you won’t get them. They’ll make stupid decisions and not work on important things, because they see possible dangers. What you need is a separation of powers. It doesn’t make any sense to have the same person do both.

The Singularity Institute was recently asked to comment on this interview—which, by the time it made it through the editors at New Scientist, contained just the unvarnished quote “Scientists shouldn’t have an ethical responsibility for their inventions. They should be able to do what they want. You shouldn’t have to ask them to have the same values as other people.” Nice one, New Scientist. Thanks to Egan for providing the original interview text.

This makes an interesting contrast with what I said in my “Cognitive biases” chapter for Bostrom’s Global Catastrophic Risks:

Someone on the physics-disaster committee should know what the term “existential risk” means; should possess whatever skills the field of existential risk management has accumulated or borrowed. For maximum safety, that person should also be a physicist. The domain-specific expertise and the expertise pertaining to existential risks should combine in one person. I am skeptical that a scholar of heuristics and biases, unable to read physics equations, could check the work of physicists who knew nothing of heuristics and biases.

Should ethicists be inside or outside a profession?

It seems to me that trying to separate ethics and engineering is like trying to separate the crafting of paintings into two independent specialties: a profession that’s in charge of pushing a paintbrush over a canvas, and a profession that’s in charge of artistic beauty but knows nothing about paint or optics.

The view of ethics as a separate profession is part of the problem. It arises, I think, from the same deeply flawed worldview that sees technology as something foreign and distant, something opposed to life and beauty. Technology is an expression of human intelligence, which is to say, an expression of human nature. Hunter-gatherers who crafted their own bows and arrows didn’t have cultural nightmares about bows and arrows being a mechanical death force, a blank-faced System. When you craft something with your own hands, it seems like a part of you. It’s the Industrial Revolution that enabled people to buy artifacts which they could not make or did not even understand.

Ethics, like engineering and art and mathematics, is a natural expression of human minds.

Anyone who gives a part of themselves to a profession discovers a sense of beauty in it. Writers discover that sentences can be beautiful. Programmers discover that code can be beautiful. Architects discover that house layouts can be beautiful. We all start out with a native sense of beauty, which already responds to rivers and flowers. But as we begin to create—sentences or code or house layouts or flint knives—our sense of beauty develops with use.

Like a sense of beauty, one’s native ethical sense must be continually used in order to develop further. If you’re just working at a job to make money, so that your real goal is to make the rent on your apartment, then neither your aesthetics nor your morals are likely to get much of a workout.

The way to develop a highly specialized sense of professional ethics is to do something, ethically, a whole bunch, until you get good at both the thing itself and the ethics part.

When you look at the “bioethics” fiasco, you discover bioethicists writing mainly for an audience of other bioethicists. Bioethicists aren’t writing to doctors or bioengineers; they’re writing to tenure committees and journalists and foundation directors. Worse, bioethicists are not using their ethical sense in bio-work, the way a doctor whose patient might have incurable cancer must choose how and what to tell the patient.

A doctor treating a patient should not try to be academically original, to come up with a brilliant new theory of bioethics. As I’ve written before, ethics is not supposed to be counterintuitive, and yet academic ethicists are biased to be just exactly counterintuitive enough that people won’t say, “Hey, I could have thought of that.” The purpose of ethics is to shape a well-lived life, not to be impressively complicated. Professional ethicists, to get paid, must transform ethics into something difficult enough to require professional ethicists.

It’s, like, a good idea to save lives? “Duh,” the foundation directors and the review boards and the tenure committees would say.

But there’s nothing “duh” about saving lives if you’re a doctor.

A book I once read about writing—I forget which one, alas—observed that there is a level of depth beneath which repetition ceases to be boring. Standardized phrases are called “clichés” (said the author of the book on writing), but murder and love and revenge can be woven into a thousand plots without ever becoming old. “You should save people’s lives, mmkay?” won’t get you tenure—but as a theme of real life, it’s as old as thinking, and no more obsolete.

Boringly obvious ethics are just fine if you’re using them in your work rather than talking about them. The goal is to do it right, not to do it originally. Do your best whether or not it is “original”, and originality comes in its own time; not every change is an improvement, but every improvement is necessarily a change.

At the Singularity Summit 2007, several speakers argued that we should “reach out” to artists and poets to encourage their participation in the Singularity dialogue. And then a woman went to a microphone and said: “I am an artist. I want to participate. What should I do?”

And there was a long, delicious silence.

What I would have said to a question like that, if someone had asked it of me in the conference lobby, was: “You are not an ‘artist’, you are a human being; art is only one facet in which you express your humanity. Your reactions to the Singularity should arise from your entire self, and it’s okay if you have a standard human reaction like ‘I’m afraid’ or ‘Where do I send the check?’, rather than some special ‘artist’ reaction. If your artistry has something to say, it will express itself naturally in your response as a human being, without needing a conscious effort to say something artist-like. I would feel patronized, like a dog commanded to perform a trick, if someone presented me with a painting and said ‘Say something mathematical!’”

Anyone who calls on “artists” to participate in the Singularity clearly thinks of artistry as a special function that is performed only in Art departments, icing dumped onto the cake from outside. But you can always pick up some cheap applause by calling for more icing on the cake.

Ethicists should be inside a profession, rather than outside, because ethics itself should be inside rather than outside. It should be a natural expression of yourself, like math or art or engineering. If you don’t like trudging up and down stairs, you’ll build an escalator. If you don’t want people to get hurt, you’ll try to make sure the escalator doesn’t suddenly speed up and throw its riders into the ceiling. Both are just natural expressions of desire.

There are opportunities for market distortions here, where people get paid more for installing an escalator than for installing a safe escalator. If you don’t use your ethics, if you don’t wield them as part of your profession, they will grow no stronger. But if you want a safe escalator, by far the best way to get one—if you can manage it—is to find an engineer who naturally doesn’t want to hurt people. Then you’ve just got to keep the managers from demanding that the escalator ship immediately and without all those expensive safety gadgets.

The first ironclad steamships were actually much safer than the Titanic; the first ironclads were built by engineers without much management supervision, who could design in safety features to their heart’s content. The Titanic was built in an era of cutthroat price competition between ocean liners. The grand fanfare about it being unsinkable was a marketing slogan like “World’s Greatest Laundry Detergent”, not a failure of engineering prediction.

Yes, safety inspectors, yes, design reviews; but these just verify that the engineer put forth an effort of ethical design intelligence. Safety-inspecting doesn’t build an escalator. Ethics, to be effective, must be part of the intelligence that expresses those ethics—you can’t add it in like icing on a cake.

Which leads into the question of the ethics of AI. “Ethics, to be effective, must be part of the intelligence that expresses those ethics—you can’t add it in like icing on a cake.” My goodness, I wonder how I could have learned such Deep Wisdom?

Because I studied AI, and the art spoke to me. Then I translated it back into English.

The truth is that I can’t inveigh properly against bioethics, because I am not myself a doctor or a bioengineer. If there is a special ethic of medicine, beyond the obvious, I do not know it. I have not worked enough healing for that art to speak to me.

What I do know a thing or two about is AI. There I can testify definitely, and from direct knowledge, that anyone who sets out to study “AI ethics” without a technical grasp of cognitive science is absolutely doomed.

It’s the technical knowledge of AI that forces you to deal with the world in its own strange terms, rather than the surface-level concepts of everyday life. In everyday life, you can take for granted that “people” are easy to identify; if you look at the modern world, the humans are easy to pick out, to categorize. An unusual boundary case, like Terri Schiavo, can throw a whole nation into a panic: Is she “alive” or “dead”? AI explodes the language in which people are described, unbundles the properties that are always together in human beings. Losing the standard view, throwing away the human conceptual language, forces you to think for yourself about ethics, rather than parroting back things that sound Deeply Wise.

All of this comes of studying the math, nor may it be divorced from the math. That’s not as comfortably egalitarian as my earlier statement that ethics isn’t meant to be complicated. But if you mate ethics to a highly technical profession, you’re going to get ethics expressed in a conceptual language that is highly technical.

The technical knowledge provides the conceptual language in which to express ethical problems, ethical options, ethical decisions. If politicians don’t understand the distinction between terminal value and instrumental value, or the difference between a utility function and a probability distribution, then some fundamental problems in Friendly AI are going to be complete gibberish to them—never mind the solutions. I’m sorry to be the one to say this, and I don’t like it either, but Lady Reality does not have the goal of making things easy for political idealists.

If it helps, the technical ethical thoughts I’ve had so far require only comparatively basic math like Bayesian decision theory, not high-falutin’ complicated damn math like real mathematicians do all day. Hopefully this condition does not hold merely because I am stupid.
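
To make those terms concrete, here is a minimal sketch of Bayesian decision theory in Python. It is a toy example with invented outcomes and numbers, nothing drawn from the Friendly AI literature: the utility function encodes terminal value (what you care about in itself), the probability distribution encodes beliefs about what each action leads to, and an action’s instrumental value is its expected utility.

```python
# Toy illustration of Bayesian decision theory (all numbers invented).

# Utility function: terminal value, i.e. how much each OUTCOME matters in itself.
utility = {"cured": 100.0, "harmed": -1000.0, "unchanged": 0.0}

# Probability distributions: beliefs about how likely each outcome is,
# conditional on each ACTION. Claims of fact, not claims of value.
p_outcome = {
    "operate": {"cured": 0.85, "harmed": 0.05, "unchanged": 0.10},
    "wait":    {"cured": 0.10, "harmed": 0.00, "unchanged": 0.90},
}

def expected_utility(action: str) -> float:
    """Instrumental value of an action: how well it is expected to serve
    the terminal values in `utility`, given the beliefs in `p_outcome`."""
    return sum(p * utility[outcome] for outcome, p in p_outcome[action].items())

for action in p_outcome:
    print(action, expected_utility(action))             # operate 35.0, wait 10.0
print("chosen:", max(p_outcome, key=expected_utility))  # chosen: operate
```

Notice that the two objects answer different questions: the probability distribution says nothing about what is worth wanting, and the utility function says nothing about what will happen. Someone who cannot keep them apart cannot even state the problems, let alone evaluate solutions.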

Several of the responses to Minsky’s statement that politicians should be the ones to “prevent bad things from happening” were along the lines of “Politicians are not particularly good at this, but neither necessarily are most scientists.” I think it’s sad but true that modern industrial civilization, or even modern academia, imposes many shouting external demands within which the quieter internal voice of ethics is lost. It may even be that a majority of people are not particularly ethical to begin with; the thought seems to me uncomfortably elitist, but that doesn’t make it comfortably untrue.

It may even be true that most scientists, say in AI, haven’t really had a lot of opportunity to express their ethics and so the art hasn’t said anything in particular to them.

If you talk to some AI scientists about the Singularity / Intelligence Explosion, they may say something cached like, “Well, who’s to say that humanity really ought to survive?” This doesn’t sound to me like someone whose art is speaking to them. But then artificial intelligence is not the same as artificial general intelligence; and, well, to be brutally honest, I think a lot of people who claim to be working in AGI haven’t really gotten all that far in their pursuit of the art.

So, if I listen to the voice of experience rather than to the voice of comfort, I find that most people are not very good at ethical thinking. Even most doctors—who ought properly to be confronting ethical questions in every day of their work—don’t go on to write famous memoirs about their ethical insights. The terrifying truth may be that Sturgeon’s Law applies to ethics as it applies to so many other human endeavors: “Ninety percent of everything is crap.”

So asking an engineer an ethical question is not a sure-fire way to get an especially ethical answer. I wish it were true, but it isn’t.

But what experience tells me is that there is no way to obtain the ethics of a technical profession except by being ethical inside that profession. I’m skeptical enough of nondoctors who propose to tell doctors how to be ethical, but I know it’s not possible in AI. There are all sorts of AI-ethical questions that anyone should be able to answer, like “Is it good for a robot to kill people? No.” But if a dilemma requires more than this, the specialist ethical expertise will only come from someone who has practiced expressing their ethics from inside their profession.

This doesn’t mean that all AI people are on their own. It means that if you want to have specialists telling AI people how to be ethical, the “specialists” have to be AI people who express their ethics within their AI work, and then they can talk to other AI people about what the art said to them.

It may be that most AI people will not be above average at AI ethics, but without technical knowledge of AI you don’t even get an opportunity to develop ethical expertise, because you’re not thinking in the right language. That’s the way it is in my profession. Your mileage may vary.

In other words: To get good AI ethics you need someone technically good at AI, but not all people technically good at AI are automatically good at AI ethics. The technical knowledge is necessary but not sufficient for ethics.

What if you think there are specialized ethical concepts, typically taught in philosophy classes, which AI ethicists will need? Then you need to make sure that at least some AI people take those philosophy classes. If there is such a thing as special ethical knowledge, it has to combine in the same person with the technical knowledge.

Heuristics and biases are critically important knowledge relevant to ethics, in my humble opinion. But if you want that knowledge expressed in a profession, you’ll have to find a professional expressing their ethics and teach them about heuristics and biases—not pick a random cognitive psychologist off the street to add supervision, like so much icing slathered over a cake.

My nightmare here is people saying, “Aha! A randomly selected AI researcher is not guaranteed to be ethical!” So they turn the task over to professional “ethicists” who are guaranteed to fail: who will simultaneously try to sound counterintuitive enough to be worth paying for as specialists, while also making sure not to think up anything really technical that would scare off the foundation directors who approve their grants.

But even if professional “AI ethicists” fill the popular air with nonsense, all is not lost. AIfolk who express their ethics as a continuous, non-separate, non-special function of the same life-existence that expresses their AI work will yet learn a thing or two about the special ethics pertaining to AI. They will not be able to avoid it. Thinking that ethics is a separate profession which judges engineers from above is like thinking that math is a separate profession which judges engineers from above. If you’re doing ethics right, you can’t separate it from your profession.