A few thoughts in response:

I agree with you that if my experience of red can’t be constructed of matter, then my understanding of a sentence also can’t be. And I agree with you that we don’t have a reliable account of how to construct such things out of matter, and without such an account we can’t rule out the possibility that, as you suggest, such an account is simply not possible. I agree with you that this objection to physicalism has been around for a long time.
I agree with you that insofar as we understand vitalism to be an account of how particular arrangements of matter move around, it is a different sort of thing from the kind of “sentientism” you are talking about. That said, I think that’s a misrepresentation of historical vitalism; I think when the vitalists talked about élan vital being the difference between living and unliving matter, they were also attributing sentience (though not sapience) to élan vital, as well as simple animation.
I don’t equate the experience of red with the tendency to output the word “red” when queried, both in the sense that it’s easy for me to imagine being unable to generate that output while continuing to experience red, and in the sense that it’s easy for me to imagine a system that outputs the word “red” when queried without having an experience of red. Lexicalization is neither necessary nor sufficient for experience.
I don’t equate the experience of red with categorization… it is easy to imagine categorization without experience. It’s harder to imagine experience without categorization, though. Categorization might be necessary, but it certainly isn’t sufficient, for experience.
Like you, I can’t come up with a physical account of sentience. I have little faith in the power of my imagination, though. Put another way: it isn’t easy for me to see what one can and can’t make out of particles. But I agree with you that any such account would be surprising, and that there is a phenomenon there to explain. So I think I fall somewhere in between your two classes of people who are a waste of time to talk to: I get that there’s a problem, but it isn’t obvious to me that the properties that comprise what it feels like to be a bat must be ontologically basic and nonphysical. Which I think still means I’m wasting your time. (I did warn you in the grandparent comment that you won’t find my answer interesting.)
If it turns out that a particular sensation is perfectly correlated with the presence of a particular physical structure, and that disrupting that structure always triggers a disruption of the sensation, and that disrupting the sensation always triggers a disruption of the structure… well, at that point, I’m pretty reluctant to posit a nonphysical sensation. Sure, it might be there, but if I posit it I need to account for why the sensation is so tightly synchronized with the physical structure, and it’s not at all clear that that task is any simpler than identifying one with the other, counterintuitive as that may be.
At the other extreme, if the nonphysical structure makes a difference, demonstrating that difference would make me inclined to posit a nonphysical sensation. For example, if we can transmit sensation without transmitting any physical signal, I’d be strongly inclined to posit a nonphysical structure underlying the sensation. Looking for such a demonstrable difference might be a useful way to start getting somewhere.
Perhaps we are closer to mutual understanding than might have been imagined, then. A crucial point: I wouldn’t talk about the mind as something “nonphysical”. That’s why I said that the problem is with our current physical ontology. The problem is not that we have a model of the world in which events outside our heads are causally connected to events inside our heads via a chain of intermediate events. The problem is that when we try to interpret physics ontologically (and not just operationally), the available frameworks are too sparse and pallid (those are metaphors of course) to produce anything like actual moment-to-moment experience. The dance of particles can produce something isomorphic to sensation and thought, but not identical. Therefore, what we might think of as a dance of particles actually needs to be thought of in some other way.
So I’m actually very close in spirit to the reductionist who wants to think of their experience in terms of neurons firing and so forth, except I say it’s got to be the other way around. Taken literally, that would mean that we need to learn to think of what we now call neurons firing as being fundamentally—this—moment-to-moment experience, as is happening to you right now. Except that I don’t believe the physical nature of whole neurons plausibly allows such an ontological reinterpretation. If consciousness really is based on mesoscopic-level informational states in neurons, then I’d favor property dualism rather than the reverse monism I just advocated. But I’m going for the existence of a Cartesian theater somewhere in the brain whose physical implementation is based on exact quantum states rather than collective coarse-grained classical ones, quantum states which in our current understanding would look more algebraic than geometric. And the succession of abstract algebraic state transitions in that Cartesian theater is the deracinated mathematical description of what, in reality, is the flow of conscious experience.
If that is the true interior reality of one quantum island in the causal network of the world, it might be anticipated that every little causal nexus has its own inside too—its own subjectivity. The non-geometric, localized, algebraic side of physics would turn out to actually be a description of the local succession of conscious states, and the spatial, geometric aspect of physics would in fact describe the external causal interactions between these islands of consciousness. Except I suspect that the term consciousness is best reserved for a very rare and highly involuted type of state, and that most things count as islands of “being” but not as islands of “experiencing” (at least, not as islands of reflective experiencing).
I should also distinguish this philosophy from the sort which sees mind wherever there is distributed computation—so that the hierarchical structure of classical interaction in the world gets interpreted as a set of minds made of minds made of minds. I would say that the ontological glue of individual consciousness is not causal interaction—it’s something much tighter. The dependence of elements of a state of consciousness on the whole state of consciousness is more like the way that the face of a cube is part of the cube, though even that analogy is nowhere near strong enough, because the face of a cube is a square and a square can have independent existence, though when it’s independent it’s no longer a face. However we end up expressing it, the world is fundamentally made of these logical ontological unities, most of which are very simple and correspond to something like particles, and a few of which have become highly complex—with waking states of consciousness being extremely complex examples of these—and all of these entities interact causally and quasi-locally. These interactions bind them into systems and into systems of systems, but systems themselves are not conscious, because ontologically they are multiplicities, and consciousness is always a property of one of those fundamental physical unities whose binding principle is more than just causal association.
An ontology of physics like that is one where the problem of consciousness might be solved in a nondualistic way. But its viability does seem to require that something like quantum entanglement is found to be relevant to conscious cognition. As I said, if that isn’t borne out, I’ll probably fall back on some form of property dualism, in which there’s a many-to-one mapping between big physical states (like ion concentrations on opposite sides of axonal membranes) and distinct possible states of consciousness. But physical neuroscience has quite a way to go yet, so I’m very far from giving up on the monistic quantum theory of mind.
So, getting back to my original question about what your alternate ontology has to offer…
If I’m understanding you (which is far from clear), while you are mostly concerned with being ontologically correct rather than operationally useful, you do make a falsifiable neurobiological prediction, having something to do with quantum entanglement (though I didn’t entirely follow the details).
Cool. I approve of falsifiable predictions; they are a useful thing that a way of thinking about the world can offer.
Anything else?
I think you ought to be more interested in what this shows about the severity of the problem of consciousness. See my remarks to William Sawin, about color and about many-to-one mappings, and how they lead to a choice between this peculiar quantum monism (which is indeed difficult to understand at first encounter), and property dualism. While I like my own ideas (about quantum monads and so forth), the difficulties associated with the usual approaches to consciousness matter in their own right.
(nods) I understand that you do; I have from the beginning of this exchange been trying to move forward from that bald assertion into a clarification of why I ought to be… that is, what benefits there are to be gained from channeling my interest as you recommend.
Put another way: let us suppose you’re right that there are aspects of consciousness (e.g., subjective experience/qualia) that cannot be adequately explained by mainstream ontology.
Suppose further that tomorrow we encounter an entity (an isolated group of geniuses working productively on the problem, or an alien civilization with a different ontological tradition, or spirit beings from another dimension, or Omega, or whatever) that has worked out an ontology that does adequately explain it, using quantum monads or something else, to roughly the same level of refinement and practical implementation that we have worked out our own.
What kinds of things would you expect that entity to be capable of that we are incapable of due to the (posited) inability of our ontology to adequately account for subjective experience?
Or, to ask the question a different way: suppose we encounter an entity that claims to have worked out such an ontology, but won’t show it to us. What properties ought we look for in that entity that provide evidence that their claim is legitimate?
The reason I ask is that you seem to concede that behavior can be entirely accounted for without reference to the missing ontological elements. (I may have misunderstood that, in which case I would appreciate clarification.) So I should not expect them to have a superior understanding of behavior that would manifest in various detectable ways. Nor should I expect them to have a superior understanding of physics.
I’m not really sure what I should expect them to have a superior understanding of, though, or what capabilities I should expect such an understanding to entail. Surely there ought to be something, if this branch of knowledge is, as you claim, worth pursuing.
Thus far, I’ve gotten that they ought to be able to make predictions about neurobiological structures that relate to certain kinds of quantum structures. I’m wondering what else.
Because if it’s just about being right about ontology for the sake of being right about ontology when it entails no consequences, then I simply disagree with you that I ought to be more interested.
What kinds of things would you expect that entity to be capable of that we are incapable of due to the (posited) inability of our ontology to adequately account for subjective experience?
I don’t consider this inability to merely be posited. It’s a matter of understanding what you can and can’t do with the ontological ingredients provided. You have particles, you have non-positional properties of individual particles, you have the motions of particles, you have changes in the non-positional properties. You have causal relations. You have sets of these entities; you have causal chains built from them; you have higher-order quantitative and logical facts deriving from the elementary facts about configuration and causal relationships. That’s basically all you have to work with. An ontology of fields, dynamical geometry, and probabilities adds a few twists to this picture, but nothing that changes it fundamentally. So I’m saying there is nothing in this ontology, either fundamental or composite (in a broad sense of composite), which can be identified with—not just correlated with, but identified with—consciousness and its elements. And color offers the clearest and bluntest proof of this.
We can keep going over this fact from different angles, but eventually it comes down to seeing that one thing is indeed different from another. 1 is not 0; an experienced color is not any specific thing that can be found in the ontology of particles. It reduces to pairwise comparative judgments in which ontologically dissimilar basic entities are perceived to indeed be ontologically dissimilar.
The reason I ask is that you seem to concede that behavior can be entirely accounted for without reference to the missing ontological elements.
What are we trying to explain, ultimately? What even gives us something to be explained? It’s conscious experience again; the appearance of a world. Our physical theories describe the behavior of a world which is structurally similar to the world of appearance, but which does not have all its properties. We are happy to say that the world of appearance is just causally connected, in a regularity-preserving way, to an external world, and that these problem properties only exist in the “world of appearance”. That might permit us to regard the “external world” as explained by our physics. But then we have this thing, the “world of appearance”, where all the problems remain, and which we are nonetheless trying to assimilate to physics (via neuroscience). However, we know (if we care to think things through) that this assimilation is not possible with the current physical ontology.
So the claim that we can describe the behavior of things is not quite as powerful as it seems, because it turns out that the things we are describing can’t actually be the “things” of direct experience, the appearances themselves. We can get isomorphism here, but not identity. It’s an ontological problem: the things of physical theory need to be reconceived so that some of them can be identified with the things of consciousness, the appearances.
I understand that you aren’t “merely” positing the inability of a set of particles, positions and energy-states to be an experience.
I am.
I also understand that you consider this a foolish insistence on my part on rejecting the obvious facts of experience. As I’ve said several times now, repeatedly belaboring that point isn’t going to progress this discussion further.