I understand that there’s a difference between abstract functions and physical functions.
For example, abstractly we could imagine a NAND gate as a truth table, without specifying real voltages or hardware. But in a real system we'd need to implement the NAND gate on a circuit board, with specific voltage thresholds, wires, and so on.
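To make that distinction concrete, here's a minimal sketch (Python, purely for illustration) of the NAND gate as an abstract function: the truth table and nothing else, with no voltages, thresholds, or wiring in sight.

```python
# A minimal sketch: NAND as a purely abstract function, i.e. a truth table.
# Nothing here specifies voltages, thresholds, or wiring; any physical
# system realising this input-output mapping counts as an implementation.
def nand(a: bool, b: bool) -> bool:
    return not (a and b)

# The complete truth table of the abstract function:
for a in (False, True):
    for b in (False, True):
        print(f"{a!s:>5} NAND {b!s:>5} = {nand(a, b)}")
```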
Functionalism is obviously a broad church, but it's not true that a functionalist must be committed to the idea that abstract functions alone are sufficient for consciousness. Indeed, I'd argue this isn't a common position among functionalists at all. Rather, they'd typically say something like: a physically realised functional process, described at a certain level of abstraction, is sufficient for consciousness.
To be clear, by “function” I don’t mean some purely mathematical mapping divorced from any physical realisation. I’m talking about the physically instantiated causal/functional roles. I’m not claiming that a simulation would do the job.
If you mean that abstract, computational functions are known to be sufficient to give rise to all aspects of consciousness, including qualia, that is what I am contesting.
This is trivially true: there is a hard problem of consciousness that is, well, hard. I don't think I've said that computational functions are known to be sufficient for generating qualia. I've said that if you already believe this, you should take the possibility of AI consciousness more seriously.
No, not necessarily. That, in its "not necessarily" form, is what I've been arguing all along. I also don't think that consciousness has a single meaning, that there is agreement about what it means, or that it is a simple binary.
Makes sense, thanks for engaging with the question.
If you do mean that a functional duplicate will necessarily have phenomenal consciousness, and you are arguing the point rather than just holding it as an opinion, you bear a heavy burden: you need to show some theory of how computation generates conscious experience, or you need to show why the concrete physical implementation couldn't possibly make a difference.
It's an opinion. I'm obviously not going to solve the Hard Problem of Consciousness in a comment section. In any case, I appreciate the exchange, and hopefully this clarifies the spirit of my position.