When does technological enhancement feel natural and acceptable?

Technology can be used and perceived in different ways. Future technology may change our lives beyond imagination. How can friendly AI technology enrich human experience? Technology can feel like it controls us, or, if it goes well, it can feel like a natural enhancement of mind and body.

I’m interested in the ways future technology could or couldn’t do this. I will explore some avenues and state my opinion on each. Make up your own mind; I’d like to hear your opinion too.

Body Enhancement

The first thing we’d like is to get rid of impediments and diseases. Some might like to be immortal. But this is not enhancement so much as maintenance.

People apparently like to enhance their bodies. This starts with cosmetics and doesn’t end with doping. Strictly speaking, clothing could count here too. We know quite well what we want in this area. I’d bet that people would accept body enhancements easily, especially if they are reliable, safe, and/or reversible. Fictional evidence for this is the positive reception of suitably enhanced heroes. Wouldn’t you like to have super strength or look like a supermodel? Such enhancement for everybody, even though it is otherwise zero-sum, could also balance against the effect that ideals in the media diminish our self-image, compared to the ancestral environment, where we were only one among 150 average guys.

As long as people don’t change their native preferences, this should make everybody happier with themselves. If preferences are changed, all bets are off again.

Mind Enhancement

Drugs can have not only pleasurable but also performance-increasing effects. Nootropics for everybody could be acceptable, if free and safe. Actually, I think that increasing brain power (speed and capacity) would feel the most natural, if it could be done.

The trouble with this is the inevitable compounding return on cognitive investment: smarter minds can improve themselves faster. It seems like either exponential or chaotic changes (the latter from interacting minds) result. One tricky part here seems to be how to avoid boredom once stability has been reached.
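
A minimal toy model of the compounding, assuming only that capacity improves at a rate proportional to itself (my own illustration, not a claim from any source):

$$\frac{dC}{dt} = kC \quad\Rightarrow\quad C(t) = C_0\,e^{kt}$$

Any positive self-improvement rate $k$ gives exponential growth on its own; add coupling terms between several such minds and the dynamics turn nonlinear, which is where the chaotic regime comes from.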

Body Schema Extension

Body perception is flexible. It is known that the body schema (our body’s self-image) can expand to encompass tools. Tools thus become part of the body (schema) and are wielded and handled, and thereby felt, like one’s own body. I have the impression that this might extend to vehicles: driving a car, or probably also flying a plane, can feel like one’s own movement. One knows where the car ends. I’d guess that technology that has immediate feedback and can be mapped to a (distorted/extended) body schema will likely feel natural, after some time of adjustment.

Sensory Enhancement

Apparently our senses are quite flexible. Almost any input (visual, auditory, tactile, even smell) can be mapped to a 3D environment model by training, admittedly long training. This is apparently also possible for non-native senses, which is called Sensory Substitution or Sensory Augmentation. There are already some projects which build actual working devices. Once this mapping has settled into the subconscious, it feels natural. I wonder whether augmented reality systems can achieve this. Virtual reality systems are the dual of this: data mapped to the senses instead of senses mapped to data.
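
To make the mapping concrete, here is a minimal sketch in the spirit of vOICe-style visual-to-auditory substitution; the scanning scheme and all parameters are my own toy choices, not any project’s actual implementation. The image is scanned left to right, vertical position becomes pitch, and brightness becomes loudness:

```python
import math
import struct
import wave

def column_to_chord(column, rate=44100, duration=0.05):
    """One short chord per image column: row index -> pitch, brightness -> loudness."""
    # Frequencies span roughly four octaves starting at 200 Hz.
    freqs = [200.0 * 2 ** (4.0 * i / len(column)) for i in range(len(column))]
    n = int(rate * duration)
    return [
        sum(b * math.sin(2 * math.pi * f * t / rate) for f, b in zip(freqs, column))
        / len(column)
        for t in range(n)
    ]

def image_to_soundscape(image, path="scan.wav", rate=44100):
    """Scan the image left to right and write the result as a mono WAV file."""
    samples = []
    for x in range(len(image[0])):
        column = [row[x] for row in reversed(image)]  # bottom row -> lowest pitch
        samples.extend(column_to_chord(column, rate))
    peak = max(abs(s) for s in samples) or 1.0
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)  # 16-bit samples
        w.setframerate(rate)
        w.writeframes(b"".join(
            struct.pack("<h", int(32000 * s / peak)) for s in samples))

# Tiny 8x8 test image (brightness in [0, 1]): a rising diagonal line,
# which should be heard as a rising pitch sweep.
img = [[1.0 if x == (7 - y) else 0.0 for x in range(8)] for y in range(8)]
image_to_soundscape(img)
```

The point is how little machinery the mapping itself needs; the hard part, as noted above, is the long training until the brain inverts the mapping subconsciously.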

Devices and Gadgets

Devices that require conscious interaction and translation into some UI often feel clumsy. No, they are clumsy: they break flow and require conscious effort. I think the main attraction of having “an app for that” is the feeling of control at a distance that we gain. We can achieve some effect by invoking a magical ritual, an effect other people can’t achieve (or only via mundane manual action). Well, this is good and fine, but it would be even better if you could achieve the effect without the interaction.

There was a recent post somewhere about the best smartphone UI being just a blank screen where you could type (or dictate) what you want, and the ‘UI’ would figure out the context and intention. While googling unsuccessfully for that post, I found this in a very relevant link about natural UIs:

“The real problem with the interface is that it is an interface. Interfaces get in the way. I don’t want to focus my energies on an interface. I want to focus on the job…I don’t want to think of myself as using a computer, I want to think of myself as doing my job.”—Donald Norman in 1990
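
Here is a toy sketch of that blank-screen idea; every command and handler is made up for illustration. The entire ‘interface’ is one function from free text to an inferred action:

```python
import re

# Hypothetical handlers; a real assistant would call actual device APIs here.
HANDLERS = [
    (r"remind me to (.+)", lambda m: f"reminder set: {m.group(1)}"),
    (r"(call|phone) (\w+)", lambda m: f"calling {m.group(2)}..."),
    (r"weather", lambda m: "fetching the forecast..."),
]

def blank_screen(text):
    """The whole UI: free text in, inferred action out; no menus, no buttons."""
    for pattern, handler in HANDLERS:
        match = re.search(pattern, text.lower())
        if match:
            return handler(match)
    return f"no idea what to do with {text!r}"

print(blank_screen("Remind me to buy milk"))  # reminder set: buy milk
print(blank_screen("Call Alice"))             # calling alice...
```

Real context and intention inference is of course the hard part; the sketch only shows why, once it works, there is nothing left to call a UI.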

Services

Commerce, and especially the internet, provides lots of services which we use to reap the benefits of digital society: Amazon, Netflix, online booking… But we are at the mercy of the service providers, the power of the interface provided, and the cost required. Independent of how well-integrated this is, every interaction with a service means either a transaction (cost per use), a freemium choice (will I regret this later?), or ad suffering (paying with attention). This is bondage and discipline. I’d rather minimize this as a means of future technological enrichment.

Language Control

Communication is natural, via speech and via text, and not only with people. Most programmers value the power of the command line because it allows them to combine commands in new ways, in a manner that feels linguistically natural to an experienced user. Why not use language to control the technology of the future? Just utter your wishes. A taste of this could be the service offered by the Magic startup.
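
A small sketch of what makes the command line feel linguistic, transplanted into Python for illustration: tiny single-purpose ‘commands’ recombined into pipelines, the way words recombine into sentences.

```python
from collections import Counter
from functools import reduce

def pipe(*stages):
    """Compose stages left to right, like '|' in a shell."""
    return lambda data: reduce(lambda acc, stage: stage(acc), stages, data)

# Each stage does one small thing; the power is in recombination.
most_common_words = pipe(
    str.split,                             # tokenize
    Counter,                               # like 'sort | uniq -c'
    lambda counts: counts.most_common(2),  # like 'sort -rn | head -2'
)

print(most_common_words("to be or not to be"))  # [('to', 2), ('be', 2)]
```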

Social Interaction

We are social animals. Could we deal with digital assistants who understand us and support us? Probably, if they are beyond the Uncanny Valley. But would we trust them? Only if they behave consistently with equal or lower status. Otherwise we’d justifiably feel dominated. Can this be achieved if the artificial agent is much smarter than we are and unavoidably controls us thereby? Would we feel manipulated?

Slow Processes

Societal processes affecting us in ways we (feel we) have only limited control over often feel oppressive, even if they are by some objective standard intended for our (collective) best. Examples are health care bureaucracy, compulsory education, traffic rules, and above all parliamentary democracy. These processes are slow in the sense of affecting and changing things over longer timescales than conscious effort can easily work on. Often there is no immediate or clearly attributable feedback. Such a process often feels like a force of nature, and humans have adapted quite well to forces of nature, but just because we accept it doesn’t mean that we feel liberated by it. I think that any slow process that changes things in complex ways we cannot follow will cause negative feelings. And many conceptions I have seen of how FAI could help us involve masterminds setting things up. People might feel manipulated. Either this is balanced by other means, or we need a correspondingly slow consciousness or deep understanding to follow it.

My Transhuman Wish-List

I’d like to look better and be more robust, even if everybody else would look better too. I want backups and autonomous real and virtual clones of myself.

I’d like to think faster, have perfect memory, or even have access to information from the web in a way that feels like recall. I’d like to be able to push conscious thought processes into the subconscious; call it deliberate, efficient, reversible habit formation.

I’d like to be able to move into machines (vehicles, robots, even buildings) and feel the machines as my extended self. I’d like to perceive more kinds of sensor input as naturally as my current senses.

I don’t want to interface with devices but to command them linguistically, by thought, or completely subconsciously.

I want a consciousness that can deal with slow processes, possibly a way to think more slowly in parallel with normal consciousness.

Open Ends

There are more areas where this reasoning can be applied, and I’d like to state some general patterns behind these areas, but my time for this post has run out.

Just two examples:

  • Incremental changes are preferable to abrupt changes. People oppose changes whose consequences they cannot foresee. But compare this to slow external processes: slow internal processes may be the best option.

  • Enhancements that can be used subconsciously are better than those that need conscious attention (and context switches).

I’d like to give fictional evidence for each point, but here I will just point you to the Optimalverse, where some of these are played out, and to The Culture, which describes some of the effects.