Eliezer suggested that, in order to avoid acting unethically, we should refrain from casually dismissing the possibility that other entities are sentient. I responded that I think that’s a very good idea and we should actually implement it. Implementing that idea means questioning assumptions that entities aren’t sentient. One tool for questioning assumptions is asking “What do you think you know, and why do you think you know it?” Or, in less binary terms, why do you assign things the probabilities that you do?
Now do you see the relevance of asking you why you believe what you do as strongly as you do, however strongly that is?
I’m not trying to “win the debate”, whatever that would entail.
Tell you what though, let me offer you a trade: If you answer my question, then I will do my best to answer a question of yours in return. Sound fair?
“Is a human mind the simplest possible mind that can be sentient?” Of course not. Plenty of creatures with simpler minds are plainly sentient. If a tiger suddenly leaps out at you, you don’t operate on the assumption that the tiger lacks awareness; you assume that the tiger is aware of you. Nor do you think “This tiger may behave as if it has subjective experiences, but that doesn’t mean that it actually possesses internal mental states meaningfully analogous to wwhhaaaa CRUNCH CRUNCH GULP.” To borrow from one of your own earlier arguments.
If you are instead sitting comfortably in front of a keyboard and monitor with no tiger in front of you, it’s easy to come up with lots of specious arguments that tigers aren’t really conscious, but so what? It’s also easy to come up with lots of specious arguments that other humans aren’t really conscious. Using such arguments as a basis for actual ethical decision-making strikes me as a bad idea, to put it mildly. What you’ve written here seems disturbingly similar to a solipsist considering the possibility that he could, conceivably, produce an imaginary entity sophisticated enough to qualify as having a mind of its own. Technically, it’s sort of making progress, but....
When I first read your early writing, the one thing that threw me was an assertion that “Animals are the moral equivalent of rocks.” At least, I hope I’m not falsely attributing that to you; I can’t track down the source, so I apologize if I’m making a mistake. But my recollection is that it stood out from your otherwise highly persuasive arguments as blatant, unsupported personal prejudice. No evidence was given in favor of this idea, and it was followed by a parenthetical that clearly indicated it was just wishful thinking; it really only made sense in light of a different assertion of yours, that spotting glaring holes in other people’s arguments isn’t indicative of any exceptional competence except when dealing with politically and morally neutral subject matter.
Your post and comments here seem to conflate, under the label of “personhood,” having moral worth with having a mind that closely approximates that of an adult human being. Equating these seems phenomenally morally dubious for any number of reasons; it’s hard to see how it doesn’t go directly against bedrock fairness, for example.