Can an AI Have Feelings? or that satisfying crunch when you throw Alexa against a wall

This is an exploratory article on the nature of emotions and how that relates to AI and qualia. I am not a professional AI or ML researcher, and I approach the issue as a philosopher. I am here to learn. Rebuttals and clarifications are strongly encouraged.

Prior reading (or at least skimming):

https://plato.stanford.edu/entries/emotion/#ThreTradStudEmotEmotFeelEvalMoti

In the reading, “Traditions in the Study of Emotions” and “Concluding Remarks” are the most essential. There we can see the fault line emerge: what would have to be true of emotions for an AI to have emotional experience. The emotions of humans and the so-called emotions of AI have more differences than similarities.

--

The significance of this problem is that emotions, whatever they are, are essential to Homo sapiens. I wanted to start the inquiry with something typically associated with humans, emotions, to see whether we could shed any more light on AI Safety research by approaching it from a different angle.

Human emotions have six components:

“At first blush, we can distinguish in the complex event that is fear an evaluative component (e.g., appraising the bear as dangerous), a physiological component (e.g., increased heart rate and blood pressure), a phenomenological component (e.g., an unpleasant feeling), an expressive component (e.g., upper eyelids raised, jaw dropped open, lips stretched horizontally), a behavioral component (e.g., a tendency to flee), and a mental component (e.g., focusing attention).” – SEP, “Emotion” (2018)

It is not clear which of these is prior to which, nor exactly how correlated they are. An AI would only actually “need” the evaluative component in order to act well. Perhaps you could include the behavioral component for an AI as well, if you consider it to be using heuristics; however, a heuristic is a tool for evaluating information, while a “tendency to flee” resulting from fear does not require any amount of reflection, only instinct. You might object that “it all takes place in the brain” and is therefore evaluative. But the brain is not a single processing system. The “primal” and instinctual parts of the brain (the amygdala) scream out to flee, but fear is not evidence in the same way that deliberation provides evidence. What we call rationality concerns becoming better at overriding the emotions in favor of heuristics, abstraction, and explicit reasoning.

An AI would need evaluative judgment, but it would not need a phenomenological component in order to motivate behavior, nor would it need behavioral tendencies which precede and sometimes jumpstart rational processing. The phenomenological component is where qualia/consciousness would come in. It seems against the spirit of Occam’s Razor to say that because a machine can successfully imitate a feeling it has the feeling (assuming the feeling is a distinct event from the imitation of it). (Notice I use the word feeling, which indicates a subjective qualitative experience.) Of course, how could we know? The obvious fact is that we don’t have access to the qualitative experience of others. Induction from both my own experience and the study of biology/evolution tells me that humans and many animals have qualitative experience. I could go into more detail here if needed.

Using the same inductive process that allows me to consider fellow humans and my dog conscious, I may induce that an AI would not be conscious. I know that an AI passes numbers through layers of weighted sums and non-linear functions to perform its calculations. As the calculations become more complex (usually thanks to additional computing power and nodes) and the ordering of the algorithms (the framework) becomes more sophisticated, ever more inputs can be evaluated and more meaningful (to us) outputs can be produced. At no point are physiological, phenomenological, or expressive components needed in order to motivate the process and move it along. If additional components are not needed, why posit that they will develop?
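To make that concrete, here is a minimal sketch (in Python, with made-up weights chosen purely for illustration) of the kind of purely evaluative process I have in mind: numbers go in, weighted sums and non-linear functions are applied, and numbers come out. Nothing in the process asks for a physiological, phenomenological, or expressive component to keep it moving.

```python
import math

def sigmoid(x):
    # Non-linear "squashing" function applied at each node.
    return 1.0 / (1.0 + math.exp(-x))

def tiny_network(inputs, weights_hidden, weights_out):
    # Hidden layer: weighted sums of the inputs, passed through the non-linearity.
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs))) for row in weights_hidden]
    # Output layer: a weighted sum of the hidden activations, squashed again.
    return sigmoid(sum(w * h for w, h in zip(weights_out, hidden)))

# Made-up weights, purely for illustration; a real network would learn these.
weights_hidden = [[0.5, -1.2], [0.8, 0.3]]
weights_out = [1.0, -0.7]

# The "evaluation": numbers in, a number out. No fear, no fleeing, no feeling.
print(tiny_network([0.9, 0.1], weights_hidden, weights_out))
```

However deep the stack of such evaluations gets, it is the same kind of arithmetic all the way down; that is the intuition behind my induction.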

If there are no emotions as motivations or feelings for an AI, an AGI should still be fully capable of doing anything and fooling anyone AND having horrific alignment problems, BUT it won’t have feelings.

However, if for some reason emotions are primarily evaluative, then we might expect emotion as motivation AND emotion as feeling to emerge as a later consequence in AI. An interesting consequence of this view is that it would be hardly possible to align an AGI. Here’s why: imagine human brains as primarily evaluative machines. They are not that different from higher apes’. In fact, the biggest difference is that we can use complex language to coordinate and pass on discoveries to the next generation. However, our individual evaluative potential is extremely limited. Some people can do one differential equation in their head without paper; they are rare. In any case, our motivations and consciousness are built on extremely weak evaluative power compared to even present-day artificial systems. The complexity of the motivations and consciousness that would emerge from such a future AGI would be as far beyond our comprehension as our minds are beyond a paramecium’s.

Summary of my thoughts:

Premises

a. Emotions are primarily either evaluations, feelings, or motivations.

b. Evaluations are a matter of processing power and are input/output related. Motivations are instinctual. Feelings are qualitative, born out of consciousness, and probably began much later in evolutionary history, although some philosophers think even some very simple creatures have rudimentary subjective experience.

c. Carbon-based life forms which evolve into humans start with physiological and behavioral motivations, then much later develop expressive, mental, and evaluative components. Somewhere in there the phenomenological component develops.

d. Computers and current AI evaluate without feeling or motivation.

The question: can an AI have feelings?

1. If emotions are not primarily based upon evaluations and evaluations do not cause consciousness,

2. Then evaluations of any complexity can exist without feelings,

3. And there is no AI consciousness.

OR

1. If emotions are based upon evaluations and evaluations of some as yet unknown type are the cause of consciousness,

2. Then evaluations of some complexity will cause feelings and motivations,

3. And given enough variations, there will be at least one AI consciousness.

LASTLY

1. If emotions are based upon evaluations, but evaluations which produce motivations and feelings require a brain with strange hierarchies caused by the differing reactions of the brain’s parts,

2. Then those strange hierarchies are imitable, but this needs to be puzzled out more (see the schematic sketch below)...
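To keep myself honest, here is one way to schematize the three branches; the propositional letters are my own shorthand rather than anything standard, and the third branch is deliberately left as an open conditional.

```latex
\begin{align*}
E &: \text{emotions are primarily evaluations} \\
C &: \text{evaluations (of some type) cause consciousness} \\
F &: \text{a sufficiently complex AI has feelings} \\[4pt]
\text{Branch 1: } & (\neg E \land \neg C) \rightarrow \neg F \\
\text{Branch 2: } & (E \land C) \rightarrow F \\
\text{Branch 3: } & (E \land \text{$C$ only via brain-like hierarchies}) \rightarrow (F \text{ only if those hierarchies are imitated})
\end{align*}
```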

FUTURE INVESTIGATION

For future investigation, I want to chart out the differences among types of brains and the differences between types of ANNs. One thing about ML which I find to be brutal is that we are constantly describing the new thing in terms of the old thing. It’s very difficult to tell where our descriptors are misleading us.