[SEQ RERUN] Nonperson Predicates

Today's post, Nonperson Predicates, was originally published on 27 December 2008. A summary (taken from the LW wiki):

An AI, trying to develop highly accurate models of the people it interacts with, may develop models which are conscious themselves. For ethical reasons, it would be preferable if the AI wasn't creating and destroying people in the course of interpersonal interactions. Resolving this issue requires making some progress on the hard problem of conscious experience. We need some rule which definitely identifies all conscious minds as conscious. We can make do if it still identifies some nonconscious minds as conscious.
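To make the one-sided shape of that rule concrete, here is a minimal sketch (not from the post itself; the `Verdict` type and the lookup-table check are purely hypothetical stand-ins, since the hard part is precisely that we don't know what the real checks would be) of a predicate that is only ever allowed to err in the safe direction:

```python
from enum import Enum, auto

class Verdict(Enum):
    DEFINITELY_NOT_CONSCIOUS = auto()
    POSSIBLY_CONSCIOUS = auto()  # the safe default when no check fires

def is_plain_lookup_table(model) -> bool:
    """Hypothetical sufficient condition for non-consciousness:
    a bare input->output table with no internal computation."""
    return isinstance(model, dict)

def nonperson_predicate(model) -> Verdict:
    """One-sided test: may return DEFINITELY_NOT_CONSCIOUS only when that
    is certain. Flagging a nonconscious model as possibly conscious (a
    false alarm) is acceptable; the reverse error is not."""
    if is_plain_lookup_table(model):
        return Verdict.DEFINITELY_NOT_CONSCIOUS
    return Verdict.POSSIBLY_CONSCIOUS

# A bare response table is cleared; anything richer is conservatively flagged.
assert nonperson_predicate({"hello": "hi"}) is Verdict.DEFINITELY_NOT_CONSCIOUS
assert nonperson_predicate(object()) is Verdict.POSSIBLY_CONSCIOUS
```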


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Devil's Offers, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.