I find it very easy (and sufficient) to believe that “qualia is what it feels like for THIS algorithm on THIS wetware”, with a high level of agnosticism on what other implementations will be like and whether they have internal reflectable experiences or are “just” extremely complex processing engines.
But it obviously isn’t sufficient to do a bunch of things. In the absence of an actual explanation, you aren’t able to resolve questions about AI consciousness and animal suffering. Note that “qualia is what it feels like for THIS algorithm on THIS wetware” is a belief, not an explanation—there’s no how or why to it.
Right. I’m not able to even formulate the problem statement for “questions about AI consciousness and animal suffering” without using undefined or unmeasurable concepts. Nor is anyone else I’ve seen—they can write a LOT about similar-sounding or possibly-related topics, but never seem to tie it back to what (if anything) actually matters about it.
I’m slowly coming to the belief/model that human moral philosophy is hopelessly dualist under the covers, and that most of the “rationalist” discussion around it is an attempt to obfuscate this.