That’s funny, David again and the other David arguing about the hard versus the “soft” problem of consciousness. Have you two lost your original?
I think A and B are sticking different terminology on the same underlying thing. A laments that the “real” problem hasn’t been solved; B points out that it has been, to the extent that it can be. Yet in a way they tread common ground:
A believes there are aspects of the problem of con(tainment|sciousness) that didn’t get explained away by a “mechanistic” model.
B believes that a (probably reductionist) model suffices: “this configuration of matter/energy can be called ‘conscious’” is not fundamentally different from “this configuration of matter/energy can be called ‘a particle’”. If you’re content with such an explanation for the latter, why not the former? …
However, with many Bs I find that even a matter-of-fact, workable definition along the lines of “these states correspond to consciousness” is used more as a stop sign than as a starting point.
Just as A insists that further questions exist, so should B, and many of those questions would be quite similar, to the point of practically dissolving the initial difference.
Off the top of my head: If the experience of qualia is a potential side-effect of physical objects, is it configuration-dependent, or does everything have it in some raw, unprocessed form? Is it just that the qualia we experience are modulated and processed by virtue of the relevant matter (the brain) being in a state which can organize memories, reflect on its experiences, etc.?
Anthropic considerations apply: Even if everything had some “value” for “subjective experience”, we would know only about our own, and would probably ascribe that property only to similar ‘things’ (other humans or highly developed mammals). But is that just because those can reflect upon the property? Are waterfalls conscious, even if not sentient? “What an algorithm feels like on the inside”: any natural phenomenon is executing algorithms just as our neurons and glial cells do. Is it because we can ascribe correspondences between structures in our brain and external structures, i.e. models? We can find the same models within a waterfall, simply by finding another mapping function.
So is it the difference between us and a waterfall that enables the capacity for qualia, something to do with communication, memory, planning? It’s not clear why qualia should depend on a rule like “only things that can communicate can experience qualia”, for example. That sounds more like an anthropic concern: of course we can understand another human relating its experience of qualia better than we could a waterfall’s, even if the waterfall did experience them. Occam’s Razor may prefer “everything can experience” to “only very special configurations of matter can experience”, keeping in mind that the internal structure of a waterfall is just as complex as a human brain’s.
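To make the “just find another mapping function” move concrete, here is a toy sketch (my own illustration, nothing more): given any sequence of distinct physical states, you can always construct, after the fact, a mapping under which that sequence “implements” an arbitrary computation. The waterfall snapshots and the little counter below are made-up stand-ins, not anything from the literature.

```python
# Toy illustration of the "another mapping function" move: any sequence of
# distinct physical states can be mapped, post hoc, onto the state trajectory
# of an arbitrary computation.

def run_counter(steps):
    """A simple computation whose state trajectory is 0, 1, 2, ..., steps."""
    state = 0
    trajectory = [state]
    for _ in range(steps):
        state += 1
        trajectory.append(state)
    return trajectory

# Stand-ins for successive physical snapshots of a waterfall; any distinct
# labels will do, which is precisely the point.
waterfall_snapshots = ["splash_a", "splash_b", "splash_c", "splash_d"]

computation = run_counter(len(waterfall_snapshots) - 1)

# The "mapping function", constructed after the fact, pairing each physical
# snapshot with the corresponding computational state.
mapping = dict(zip(waterfall_snapshots, computation))

# Under this mapping, the waterfall's state sequence reproduces the
# computation exactly.
assert [mapping[s] for s in waterfall_snapshots] == computation
print(mapping)  # {'splash_a': 0, 'splash_b': 1, 'splash_c': 2, 'splash_d': 3}
```

Which is why “it executes the algorithm” can’t, by itself, be what picks out the brain and not the waterfall.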
It seems to me that A is better in tune with the many questions that remain, while B has more of an engineer’s mindset, à la “I can work with that, what more do I want?”. “Here be dragons” is what follows even the most dissolv-y explanation of qualia, and trying to stay out of those murky waters isn’t a reason to deny their existence.
I can no longer remember if there was actually an active David when I joined, or if I just picked the name on a lark. I frequently introduce myself in real life as “Dave—no, not that Dave, the other one.”
Sure, I agree that there may be systems that have subjective experience but do not manifest that subjective experience in any way we recognize or understand. Or, there may not.
In the absence of any suggestion of what might be evidence one way or the other, in the absence of any notion of what I would differentially expect to observe in one condition over the other, I don’t see any value to asking the question. If it makes you feel better if I don’t deny their existence, well, OK, I don’t deny their existence, but I really can’t see why anyone should care one way or the other.
In any case, I don’t agree that the Bs studying conscious experience fail to explore further questions. Quite the contrary: they’ve made some pretty impressive progress over the last five or six decades towards understanding just how the neurobiological substrate of conscious systems actually works. They simply don’t explore the particular questions you’re talking about here.
And it’s not clear to me that the As exploring those questions are accomplishing anything.
If the experience of qualia is a potential side-effect of physical objects, is it configuration-dependent or does everything have it in some raw, unprocessed form?
So, A asks “If containment is a potential side-effect of physical objects, is it configuration-dependent or does everything have it in some raw, unprocessed form?” How would you reply to A?
My response is something like “We know that certain configurations of physical objects give rise to containment. Sure, it’s not impossible that ‘unprocessed containment’ exists in other systems and we just haven’t ever noticed it, but why are you even asking that question?”
I always assumed that the name was originally to distinguish you from David Gerard.