A Basic Problem of Ethics: Panpsychism?

Panpsychism seems like a plausible theory of consciousness, yet it raises severe challenges for establishing reasonable ethical criteria.

It seems to suggest that our ethics is very subjective: Peter Singer's "expanding circle" would eventually (ideally) stretch to encompass all matter. But how are we to communicate with, e.g., rocks? Our ability to communicate with one another, and our presumed ability to detect falsehood and empathize in a meaningful way, allow us to ignore this challenge with respect to other people.

One way to argue that this is not such a problem is to suggest that humans are simply very limited in our capacity as ethical beings: our perception of ethical truth is fundamentally restricted, so that we can draw conclusions with any meaningful degree of certainty only about other humans and animals (or maybe all life-forms, if you are optimistic).

But this is not very satisfying if we consider transhumanism. Are we to rely on AI to extrapolate our intuitions to the rest of matter? How do we know that our intuitions are correct (or do we even care? I do, personally...)? How can we tell if an AI is correctly extrapolating?