The mystery of pain and pleasure

Some arrangements of particles feel better than others. Why?

We have no general theories about what produces pain and pleasure, only descriptive observations made within the context of the vertebrate brain. It seems like there’s a mystery here, a general principle to uncover.

Let’s try to chart the mystery. I think we should, in theory, be able to answer the following questions:

(1) What are the necessary and sufficient properties for a thought to be pleasurable?

(2) What are the characteristic mathematics of a painful thought?

(3) If we wanted to create an artificial neural network-based mind (i.e., using neurons, but not slavishly patterned after a mammalian brain) that could experience bliss, what would the important design parameters be?

(4) If we wanted to create an AGI whose nominal reward signal coincided with visceral happiness, how would we do that?

(5) If we wanted to ensure an uploaded mind could feel visceral pleasure of the same kind a non-uploaded mind can, how could we check that?

(6) If we wanted to fill the universe with computronium and maximize hedons, what algorithm would we run on it?

(7) If we met an alien life-form, how could we tell if it was suffering?

It seems to me these are all empirical questions that should have empirical answers. But we don’t seem to have many handholds that could give us a starting point.

Where would *you* start on answering these questions? Which ones are good questions, and which ones aren’t? And if you think certain questions aren’t good, could you offer some you think are?

As suggested by shminux, here’s some research I believe is indicative of the state of the literature (though this falls quite short of a full literature review):

Tononi’s IIT seems relevant, though it only addresses consciousness and explicitly avoids valence. Max Tegmark has a formal generalization of IIT which he claims should apply to non-neural substrates. And although Tegmark doesn’t address valence either, he posted a recent paper on arXiv noting that there *is* a mystery here, and that it seems topical for FAI research.
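To give a flavor of the kind of formal measure IIT-style theories work with: the sketch below computes the mutual information between two halves of a tiny binary system. To be clear, this is a toy stand-in of my own construction, not Tononi’s actual Φ (which involves searching over partitions and cause-effect structure); it only illustrates the basic intuition that an "integrated" system carries information beyond what its parts carry separately.

```python
import math

def mutual_information(joint):
    """Mutual information (in bits) between the two halves of a system.

    `joint` maps (a, b) -> probability, where a and b are the states of
    the two halves. Higher values mean the halves are more "integrated"
    in this toy sense; 0 means they are statistically independent.
    """
    # Marginal distributions of each half.
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0.0) + p
        pb[b] = pb.get(b, 0.0) + p
    # I(A;B) = sum p(a,b) * log2( p(a,b) / (p(a) p(b)) )
    mi = 0.0
    for (a, b), p in joint.items():
        if p > 0:
            mi += p * math.log2(p / (pa[a] * pb[b]))
    return mi

# Two perfectly correlated bits: one full bit of integration.
correlated = {(0, 0): 0.5, (1, 1): 0.5}
# Two independent fair bits: zero integration.
independent = {(a, b): 0.25 for a in (0, 1) for b in (0, 1)}

print(mutual_information(correlated))   # 1.0
print(mutual_information(independent))  # 0.0
```

Even in this stripped-down form, the measure says nothing about valence: it can distinguish integrated systems from non-integrated ones, but nothing in it distinguishes a blissful state from an agonized one, which is exactly the gap being pointed at here.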

Current models of emotion based on brain architecture and neurochemicals (e.g., EMOCON) are somewhat relevant, though ultimately correlative or merely descriptive, and seem to have little universalization potential.

There’s also a great deal of quality literature about specific correlates of pain and happiness, e.g., *Building a neuroscience of pleasure and well-being* and *An fMRI-Based Neurologic Signature of Physical Pain*. Luke covers Berridge’s research in his post, The Neuroscience of Pleasure. Short version: ‘liking’, ‘wanting’, and ‘learning’ are all handled by different systems in the brain. Opioids within very small regions of the brain seem to induce the ‘liking’ response; elsewhere in the brain, opioids produce only ‘wanting’. We don’t know how or why yet. This sort of research constrains a general principle, but doesn’t really hint toward one.

In short, there’s plenty of research around the topic, but it’s focused exclusively on humans/mammals/vertebrates: our evolved adaptations, our emotional systems, and our architectural quirks. Nothing on general or universal principles that would address any of (1)-(7). There is interesting information-theoretic / patternist work being done, but it’s highly concentrated around consciousness research.


Bottom line: there seems to be a critically important general principle as to what makes certain arrangements of particles innately preferable to others, and we don’t know what it is. Exciting!