Introduction to Noematology

All things come to an end, and now is the time of ending for my sequence of posts about existential phenomenology. In the final installment I say:

  • Phenomenal consciousness is necessary to consciousness because there are no p-zombies that are meaningfully conscious. However, phenomenal consciousness is not sufficient to explain all of consciousness, because consciousness is embodied and phenomenal consciousness ignores those details.

  • Phenomenal consciousness, and indeed ontology, is created by nested feedback when the things created from the information of feedback become cybernetic themselves.

  • These phenomena of nested experience are experienced as object. We identify these reified phenomena as thoughts or noemata.

  • Because all noemata have telos as a result of being experienced, all noemata we consider are axias (values); thus noematology coincides with axiology.

  • We can use this foundation to explore philosophical questions related to AI alignment. I give the examples of recommending moral nihilism as an assumption within AI alignment, aligning non-rational but still existentially-dangerous agents, and addressing the metaethical questions of value learning.

Read the whole thing here.

Next on my agenda: a second attempt at formalizing the statement of the alignment problem.
