“How an algorithm feels from inside” discusses a particular quale, that of the intuitive feeling of holding a correct answer from inside the cognizing agent. It does not touch upon what types of physically realizable systems can have qualia.
Um, OK. What types of physically realizable systems can have qualia? Evidently I’m unclear on the concept.
That is the $64,000 question.
It’s not yet clear to me that we’re talking about anything that’s anything. I suppose I’m asking for something that does make that a bit clearer.
Ok, so we can say with confidence that humans and other organisms with developed nervous systems experience the world subjectively, perhaps not in exactly the same ways, but conscious experience seems likely for these systems unless you are a radical skeptic or solipsist. Based on our current physical and mathematical laws, we can reductively analyse these systems and see how each subsystem functions, and eventually, with sufficient technology, we will be able to map the neural correlates that are active in particular environments and that produce particular qualia. Neuroscientists are on that path already.

But are only physical nervous systems capable of producing subjective experience? If we emulate a brain with enough precision, with sufficient input from and output to an environment, computationalists assume it will behave and experience the same as a physical wetware brain would. Given this assumption, we conclude that the simulated brain, which is just machine code running on transistors, has qualia. So now qualia are attributed to a software system. How far can we diverge from this perfect software emulation and still have a system that experiences qualia? From the other end, if we build a cognitive agent piecemeal in software without reference to biology, what kinds of dynamics would cause qualia to arise, if they arise at all? The simulated brain is just data, as is Microsoft Windows, but Windows isn't conscious, or so we think. Looking at the electrons moving through the transistors tells us nothing about which running software has qualia and which does not.

On the other hand, it might be that physics deeper than the classical must be involved for a system to have qualia. In that case, classical computers would be unable to run software that experiences qualia, and machines that exploit quantum properties might be needed. This is still speculative, but then the whole question of qualia is still speculative.
So now, when designing an AI that will learn and grow and behave in accordance with human values, how important are qualia for it to function along those lines? Can an unconscious optimizing algorithm be robust enough to act morally and shape a positive future for humanity? Will an unconscious optimizing algorithm, lacking the subjectivity we take for granted, be able to scale up in intelligence to the level we see in biological organisms, let alone humans and beyond, or is subjective experience necessary for the level of intelligence we have? If it is possible, will an optimizing algorithm actually become conscious and experience qualia past a certain threshold, and how would that affect its continued growth?
On a side note, my hypothetical friendly AGI project, one that would more directly guarantee success without getting bogged down in speculation about the limits of computation, qualia, or how to safely encode meta-ethics in a recursively optimizing algorithm, would be to grow a brain in a vat, as it were: perhaps neural tissue cultures on biochips with massive interconnects, coupled to either a software or hardware embodiment, with an architecture designed so that its metacognitive processes are hardwired for compassion and empathy. A bodhisattva in a box. Yes, I'm aware of all the fear-mongering about anthropomorphized AIs, but I'm willing to argue that the possibility space of potential minds, at least the ones we have access to create from our place in history, is greatly constricted, and that this route may be the best, and possibly the only, way forward.