Something that I forgot to mention, which tends to strike a particularly wrong chord: the assignment of zero moral value to AI’s experiences.
Not something done here. If someone else is interested they can find the places this has been discussed previously (or you could do some background research yourself). For my part I’ll just explicitly deny that this represents any sort of consensus lesswrong position, lest the casual reader be misled.
What if you were to assume I am a philosophical zombie?
That would be troubling indeed. It would mean I have become a rather confused and incompetent philosopher.
Is this what you had in mind?
It’s a good start, thank you!