I’m not sure this is relevant, but I think it would be clearer if we replaced “consciousness” with “self awareness.”
I’m very unsure whether
having “self awareness” (a model of oneself in a world model)
⟺ having “consciousness” (or “internal experience”)
⟺ having “moral value.”
It seems very hard to define what consciousness or internal experience is, yet everyone is talking about it. It’s even possible that there is actually no such thing as consciousness or internal experience, but human cognition evolved to think as if this undefinable attribute existed, because thinking as if it existed led to better conclusions. And evolution only cares whether the brain’s thinking machinery makes adaptive outputs, not whether the concepts it uses to arrive at those outputs make any sense at all.
Whether we flag an object as being “conscious” or having “internal experience” may be evolution’s way of deciding whether or not we should predict the object’s behaviour using the “what would I do if I was it” computation. If the computation helps predict the object, we evolved to see it as conscious. If the computation doesn’t help, we evolved to not see it as conscious, and instead predict its behaviour by modelling its parts and past behaviour.
Just like “good” and “bad” only exist in the map and not the territory, so might “conscious” and “not conscious.” A superintelligent being might not predict human behaviour by asking “what would I do if I was it,” but instead predict us by modelling our parts. In that sense, we are not conscious from its point of view. But that shouldn’t prove we have no moral value.
I feel that animals have moral value, but whether they are conscious may be sorta subjective.
I like this treatment of consciousness and morality so much better than the naive idea, typical in EA (and elsewhere), that anything that “has consciousness” automatically “has moral value” (even worse, and dangerous, is to combine that with symmetric population ethics). We should treat these things carefully (and imo democratically) to avoid making giant mistakes once AI allows us to put ethics into practice.
I’m a bit confused about what moral mistakes you feel we might make as a result of conflating moral value with the thing I describe in the OP [ whether you call that thing “consciousness” or not ].
Do certain nonhuman entities—animals, hypothetical and possibly near-future AIs—seem to you like they would obviously have a subjective experience like your own?
Ah, I wasn’t really referring to the OP, more to people in general who might blindly equate vague notions of whatever consciousness might mean with moral value. I think that’s an oversimplification, and possibly dangerous. Combined with symmetric population ethics, a result could be that we’d need to push for spamming the universe with maximally happy AIs, and even replacing humanity with maximally happy AIs, since they’d contain more happiness per kg or m³. I think that would be madness.
Animals: yes, some. Future AIs: possibly.
If I had to speculate, I’d guess that self-awareness is just included in any good world model, and sentience is a control feedback loop, in both humans and AIs. These two things together, perhaps in something like a global workspace, might make up what some people call consciousness. These things are obviously useful for steering machines in a designed direction. But I fear they will turn out to be trivial engineering results: one could argue an automatic vacuum cleaner has feelings, since it has a feedback loop steering it clear of a wall. That doesn’t mean it should have rights.
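For concreteness, here is a minimal sketch (my own toy example, not anything from this thread) of the kind of trivial feedback loop meant above: a robot vacuum that senses its distance to a wall and steers away from it, with all names and numbers made up.

```python
# Negative-feedback wall avoidance: the closer the wall, the harder we turn.
def steer(distance_to_wall_cm, threshold_cm=15.0):
    if distance_to_wall_cm >= threshold_cm:
        return 0.0                          # far from the wall: go straight
    error = threshold_cm - distance_to_wall_cm
    return min(1.0, error / threshold_cm)   # turn rate in [0, 1]

# The robot drifts toward the wall each step; turning pushes it back out.
distance = 30.0
for step in range(6):
    turn = steer(distance)
    distance += -5.0 + 20.0 * turn
    print(f"step {step}: distance {distance:5.1f} cm, turn {turn:.2f}")
```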
I think the morality question is a difficult one that will remain subjective, and that we should vote on it rather than try to solve it analytically. I think the latter is doomed.
It seems very hard to define what consciousness or internal experience is, yet everyone is talking about it. It’s even possible that there is actually no such thing as consciousness or internal experience,
!! A Boldface?
See my earlier comment on Rafael Harth’s Why it’s so hard to talk about Consciousness.
Relevant excerpts:
There’s been discussion in these comments—more and less serious—about whether it’s plausible Boldface and Quotation are talking past each other on account of neurological differences. I think this is quite plausible even before we get into the question of whether Boldfaces could truly lack the qualia Quotations are talking about.
In the chapter of Consciousness Explained titled “Multiple Drafts Versus the Cartesian Theater”, Dennett sets out to solve the puzzle of how the brain reconstructs conscious experiences of the same distant object to be simultaneous across sensory modes despite the fact that light travels faster [and is processed slower in the brain] than sound. He considers reconstruction of simultaneity across conscious modes part of what a quale is.
Yet, I have qualia if anyone does, and if someone bounces a basketball 30m away from me, I generally don’t hear it bounce until a fraction of a second after I see it hit the ground. I’ve always been that way. As a grade-school kid, I would ask people about it; they’d say it wasn’t that way for them, and I’d assume it was because they weren’t paying close enough attention. Now I know people differ vastly in how they experience things and it’s almost certainly an unusual neurological quirk.
Second example: it’s commonly attributed to William James that he remarked on the slowness of qualia, relative to more atomic perception: “I see a bear, I run, I am afraid”. This basic motto—the idea that emotion succeeds action—went a long way in James’s philosophy of emotion, if I understand correctly.
I don’t experience emotion [or other things that usually get called “qualia”] delayed relative to reflex action like that at all. Other people also don’t usually report that experience, and I suspect James was pretty strange in that respect.
So even if everyone had qualia, yes, when we try to explain the contents of subjective awareness, we are quite frequently talking past each other because of the Typical Mind Fallacy.
[ . . . ]
I think what happens is that people—especially people like me who can sometimes experience deep, but difficult-to-quickly-unpack, holistic percepts with a high subjective level of certitude—use “quale” [or “jñāna” or “gnosis”] to refer to hard-to-describe things we would not know except by directly experiencing them. Some mystics take advantage, and claim you should give money to their church for reasons they can’t describe but are very certain of. So semantic drift associates the word with mysticism. [ Even while some people, like myself and Chalmers, continue using such words honestly, to refer to certain kinds of large high-certitude experiences whose internals frustrate our access. ]
Dennett and other Boldfaces seem [neurologically? I imagine] inclined to approach all situations, especially those involving reflective experiences, with a heavy emotional tone of doubt, which opens up questions and begs resolution.
[ E.g., Dennett writes:
“Descartes claimed to doubt everything that could be doubted, but he never doubted that his conscious experiences had qualia, the properties by which he knew or apprehended them.”
and this general approach holds in the rest of his writing, as far as I’m aware. ]
So given that people use the word “quale” [ which, again, like the word “vista” or “projection”, refers to a category of thing that, if we’re speaking precisely, isn’t real ] so often to refer to their high-certitude opaque experiences, it makes sense that Dennett and Boldfaces would say that they don’t have such experiences. Their experiences don’t come with that gnostic certitude by which holistic thinkers [like me] sometimes fumblingly excuse our quick illegible conclusions.
Does any of that sound like what you are talking about? Or am I not getting at the right thing?
Just like “good” and “bad” only exist in the map and not the territory, so might “conscious” and “not conscious.”
To some extent, if you don’t understand yourself explicitly, if you couldn’t write yourself as a computer program and can only access your own subjective self, the question of ‘what class of entities do you think is conscious’ comes down to ‘what class of entities do you think is Like Yourself in the Important Way’. I think the question of ‘what has moral value’ has some components that are similar.
However, I do think there is a specific ‘self-model’ component where, if we could all see them clearly, and we were all operating in Moral Realist Mode, we would all say, “Oh, yeah, that part, the conscious-awareness part: most things that naturally crop up on Earth that have it have vastly more moral value than most things that don’t”.
I believe less ambiguously, on a purely epistemic level, that this self-model component does particular things and is necessary to function in a social world as complex as the human one—at least, for beings not so vastly more generally intelligent than us that every cognitive task rounds off to “trivial for them for free”.
Edit:
A superintelligent being might not predict human behaviour by asking “what would I do if I was it,” but instead predict us by modelling our parts. In that sense, we are not conscious from its point of view. But that shouldn’t prove we have no moral value.
There’s a difference between modeling something reflectively and understanding attributes about it. All the time, we model computer algorithms that we ourselves don’t use. A superintelligence that didn’t share our consciousness or the structure of our sense of empathy could still know we were conscious, as well as other facts about us.
I want to say that I am very uncertain about consciousness and internal experience and how it works. I do think that your position (it depends on some form of an internal model which computes a lot of details about oneself) feels more plausible than other ideas like “Integrated Information Theory.”
What drove me so insane that I uttered the words “maybe consciousness and internal experience don’t exist” is that from first principles, if consciousness exists, it probably exists on a spectrum. There is no single neuron firing which takes you from 0 consciousness to 1 consciousness.
Yet trying to imagine being something with half as much consciousness or twice as much consciousness as myself, seems impossible.
I can try to imagine the qualia being more “intense/deep” or more “faded,” but that seems so illogical, because the sensation of intensity and fadedness is probably just a quantity used to track how much I should care about an experience. If I took drugs which made everything feel so much more profound (or much more faded), I don’t think I’d actually be many times more conscious (or less conscious). I’d just care more (or less) about each experience.
The only way I can imagine consciousness is to imagine that some things have a consciousness, some things do not have a consciousness, and that things with a consciousness experience the world, in that I could theoretically be them, and feel strange things or see strange things. And that things which do not have a consciousness don’t experience the world, and I couldn’t possibly be them.
If I were forced to think that consciousness wasn’t discrete, but existed on a spectrum (with some things being 50% or 10% as conscious as me), it would be just as counterintuitive as if consciousness didn’t exist at all, and it was just my own mind deciding which objects I should try to predict by imagining being them.
trying to imagine being something with half as much consciousness
Isn’t this what we experience every day when we go to sleep or wake up? We know it must be a gradual transition, not a sudden on/off switch, because sleep is not experienced as a mere time-skip—when you wake up, you are aware that you were recently asleep, and not confused how it’s suddenly the next day. (Or at least, I don’t get the time-skip experience unless I’m very tired.)
(When I had my wisdom teeth extracted under laughing gas, it really did feel like all-or-nothing, because once I reawoke I asked if they were going to get started with the surgery soon, and I had to be told “Actually it’s finished already”. This is not how I normally experience waking up every morning.)
Hmm. I think there are two framings of consciousness.
Framing 1 is how aware I am of my situation, how clear my memories are, and so forth.
Framing 2 is that for beings with a low level of consciousness, there is no inner experience, no qualia, only behavioural mechanisms.
In framing 1, I can imagine being less conscious or more conscious. But it’s hard to imagine gradually being less and less conscious, until at some point I cease to have inner experience or qualia and I’m just a robot with behavioural mechanisms (running away from things I fear, seeking things which give me reward).
It’s the second framing which I think might not exist, since imagining it on a continuum feels as hard as imagining that it doesn’t exist.
To me, it doesn’t even need to be imagined. Everyone has experienced partial consciousness, e.g.:
Dreaming, where you have phenomenal awareness, but not of an external world.
Deliberate visualisation, which is less phenomenally vivid than perception in most people.
Drowsiness, and the states between sleep and waking.
Autopilot and flow states, where the sense of a self deciding actions is absent.
More rarely, there are forms of heightened consciousness: peak experiences, meditative jñānas, psychedelically enhanced perceptions, etc.
What drove me so insane … is that from first principles, if consciousness exists, it probably exists on a spectrum. There is no single neuron firing which takes you from 0 consciousness to 1 consciousness.
Yet trying to imagine being something with half as much consciousness or twice as much consciousness as myself, seems impossible.
You could instead conclude that the first principles are wrong, and that consciousness depends on something more than “neurons”, something that is inherently all or nothing.
Consider a knot. The knottedness of a knot is a complex unity. If you cut it anywhere, it’s no longer a knot.
Physics and mathematics contain a number of entities, from topological structures to nonfactorizable algebraic objects, which do not face the “sorites” problem of being on a spectrum.
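To make the “not on a spectrum” point concrete, here is one standard example (my illustration, not the commenter’s): the Gauss linking number of two disjoint closed curves is computed by a smooth integral, yet it can only ever take integer values, so there is no such thing as being “half linked.”

```latex
% Illustration: the Gauss linking number of two disjoint closed curves
% \gamma_1, \gamma_2 is defined by a smooth double integral, yet its value
% is always an integer -- a quantity with no sorites-style middle ground.
\[
\mathrm{Lk}(\gamma_1,\gamma_2)
  = \frac{1}{4\pi}\oint_{\gamma_1}\!\oint_{\gamma_2}
    \frac{(\mathbf{r}_1-\mathbf{r}_2)\cdot(\mathrm{d}\mathbf{r}_1\times\mathrm{d}\mathbf{r}_2)}
         {\lvert \mathbf{r}_1-\mathbf{r}_2\rvert^{3}},
  \qquad \mathrm{Lk}(\gamma_1,\gamma_2)\in\mathbb{Z}.
\]
```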
The idea is not to deny that neurons are causally related to consciousness, but to suggest that the relevant physics is not just about atoms sticking together; that physical entities with this other kind of ontology may be part of it too. That could mean knots in a field, it could mean entangled wavefunctions, it could mean something we haven’t thought of.
But I feel a knot has to be made up of a mathematically perfect line. If you tied a knot using real-world materials like string, I could gradually remove atoms until you were forced to say “okay, it seems to be halfway between a knot and not a knot.”
Another confusing property of consciousness is this: imagine you were a computer simulation. The simulation is completely deterministic: if it were run twice, the result would be identical.
So now imagine the simulators run you once, and record a 3D video of exactly how all your atoms moved. In this first run, you would be conscious.
But now imagine they run you a second time. Would you still be conscious? I think most people who believe consciousness is real would say yes, you would.
But what if they simply replay the video from the first time, without any cause and effect? Then most people would probably say you wouldn’t be conscious; you’re just a recording.
But what if they did a combination, where at each time step, some of your atoms are updated according to the simulation physics, while other atoms are updated using the past recording? Then they could adjust how conscious you were. How much cause and effect was occurring inside you. They could make some of your experiences more conscious. Some of your brain areas more conscious.
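A minimal sketch of that combination, assuming a toy deterministic “physics” and made-up names (nothing here is from the comment): the system is run once and recorded, then rerun with a chosen subset of its variables copied from the recording instead of recomputed. Because the dynamics are deterministic, the trajectory is bit-for-bit identical either way.

```python
# Toy sketch: mix "live" deterministic updates with replayed values from a
# recording of an identical earlier run. The rule and names are illustrative.

def physics_step(state):
    # stand-in for "simulation physics": any deterministic update rule
    return [(3 * x + 1) % 97 for x in state]

def run(initial, steps):
    history, state = [list(initial)], list(initial)
    for _ in range(steps):
        state = physics_step(state)
        history.append(list(state))
    return history

def rerun_with_replay(initial, steps, recording, replayed_indices):
    """Copy some variables from the recording, recompute the rest."""
    state = list(initial)
    for t in range(1, steps + 1):
        computed = physics_step(state)
        state = [recording[t][i] if i in replayed_indices else computed[i]
                 for i in range(len(state))]
    return state

initial = [5, 12, 40, 7]
recording = run(initial, steps=10)
# Replaying half the variables changes nothing observable about the run.
assert rerun_with_replay(initial, 10, recording, {0, 2}) == recording[-1]
```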
What would happen to your experience as certain brain areas were gradually replaced by recordings, and thus less conscious? What would happen to your qualia?
Well, you wouldn’t notice anything. Your experience would feel just the same. Until at some point, somehow, you cease to have any experience, and you’re just a recording.
You would never feel the “consciousness juice” draining out of you. You would never think that your qualia are fading.
Instead, you would have a strong false conviction that you still have a lot of qualia, when in reality there is very little left.
But what if all qualia were such a false conviction in the first place?
What if the quantity of experience/consciousness were subjective, and only existed in the map, not the territory?
Maybe there’s no such thing as qualia, but there are still such things as happiness and sadness and wonder. Humans feel these things. Animals feel something analogous to them. There is no objective way to quantify the number of souls or the amount of experience. But we have the instinct to care about other things, and so we do. We don’t care about insects more than fellow humans because they are greater in number and “contain most of the experience,” but we still care about them a tiny bit.
But I feel a knot has to be made up of a mathematically perfect line
There are various ways you can get a knot at a fundamental level. It can be a knot in field lines, it can be a knot in a “topological defect”, it can be a knot in a fundamental string.
what if they did a combination, where at each time step, some of your atoms are updated according to the simulation physics, while other atoms are updated using the past recording?
I don’t know if you’ve heard of it, but there is an algorithm for the cellular automaton “Game of Life”, called Hashlife, which functions like that. Hashlife remembers the state transitions arising from particular overall patterns, and when a Game of Life history is being computed, if a known pattern occurs, it just substitutes the memorized history rather than re-computing it from first principles.
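A toy sketch of the memoization idea, written by me and much simpler than real Hashlife (which hashes large quadtree blocks and can skip many generations at once): here only the update of each 3×3 neighbourhood is cached, but the principle of substituting a stored result for a fresh computation is the same.

```python
# Game of Life step where each 3x3 neighbourhood's update is looked up in a
# cache after the first time it is seen, instead of being recomputed.
from functools import lru_cache

@lru_cache(maxsize=None)
def next_centre(neigh):
    """neigh: 9-tuple of 0/1, read row by row; returns the centre cell's next state."""
    centre = neigh[4]
    live = sum(neigh) - centre
    return 1 if live == 3 or (centre == 1 and live == 2) else 0

def step(grid):
    """One synchronous update of a tuple-of-tuples grid with a dead boundary."""
    h, w = len(grid), len(grid[0])
    def cell(r, c):
        return grid[r][c] if 0 <= r < h and 0 <= c < w else 0
    return tuple(
        tuple(
            next_centre(tuple(cell(r + dr, c + dc)
                              for dr in (-1, 0, 1) for dc in (-1, 0, 1)))
            for c in range(w)
        )
        for r in range(h)
    )

# A blinker oscillates with period 2; its repeated neighbourhoods hit the cache.
blinker = ((0, 0, 0), (1, 1, 1), (0, 0, 0))
assert step(step(blinker)) == blinker
```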
So I take you to be asking, what are the implications for simulated beings, if Hashlife-style techniques are used to save costs in the simulation?
When thinking about simulated beings, the first thing to remember, if we are sticking to anything like a natural-scientific ontology, is that the ultimate truth about everything resides in the physics of the base reality. Everything that your computer does, consists of electrons shuttling among transistors. If there are simulated beings in your computer, that’s what they are made of.
If you’re simulating the Game of Life… at the level of software, at the level of some virtual machine, some cells may be updated one fundamental timestep at a time, by applying the basic dynamical rule; other cells may be updated en masse and in a timeskip, by retrieving a Hashlife memory. But either way, the fundamental causality is still electrons in transistors being pulled to and fro by electromagnetic forces.
Describing what happens inside a computer as 0s and 1s changing, or as calculation, or as simulation, is already an interpretive act that goes beyond the physics of what is happening there. It is the same thing as the fact that objectively, the letters on a printed page are just a bunch of shapes. But human readers have learned to interpret those shapes as propositions, descriptions, even as windows into other possible worlds.
If we look for objective facts about a computer that connect us to these intersubjective interpretations, there is the idea of virtual state machines. We divide up the distinct possible microphysical states, e.g. distributions of electrons within a transistor, and we say some distributions correspond to a “0” state, some correspond to a “1” state, and others fall outside the range of meaningful states. We can define several tiers of abstraction in this way, and thereby attribute all kinds of intricate semantics to what’s going on in the computer. But from a strictly physical perspective, the only part of those meanings that’s actually there is the causal relations. State x does cause state y, but state x is not intrinsically about the buildup of moisture in a virtual atmosphere, and state y is not intrinsically about rainfall. What is physically there, is a reconfigurable computational system designed to imitate the causality of whatever it’s simulating.
All of this is Scientific Philosophy of Mind 101. And because of modern neuroscience, people think they know that the human brain is just another form of the same thing, a physical system that contains a stack of virtual state machines; and they try to reason from that, to conclusions about the nature of consciousness. For example, that qualia must correspond to particular virtual states, and so that a simulation of a person can also be conscious, so long as the simulation is achieved by inducing the right virtual states in the simulator.
But—if I may simply jump to my own alternative philosophy—I propose that everything to do with consciousness, such as the qualia, depends directly on objective, exact, “microphysical” properties—which can include holistic properties like the topology of a fundamental knot, or the internal structure of an entangled quantum state. Mentally, psychologically, cognitively, the virtual states in a brain or a computer only tell us about things happening outside of its consciousness, like unconscious information processing.
This suggests a different kind of criterion for how much, and what kind of, consciousness there is in a simulation. For example, if we suppose that some form of entanglement is the physical touchstone of consciousness… then you may be simulating a person, but if your computer in base reality isn’t using entanglement to do so, then there’s no consciousness there at all.
Under this paradigm, it may still be possible e.g. to materialize a state of consciousness complete with the false impression that it had already been existing for a while. (Although it’s interesting that quantum informational states are subject to a wide variety of constraints on their production, e.g. the no-cloning theorem, or the need for “magic states” to run faster than a classical computer.) So there may still be epistemically disturbing possibilities that we’d have to come to terms with. But a theory of this nature at least assigns a robust reality to the existence of consciousness, qualia, and so forth.
I would not object to a theory of consciousness based solely on virtual states that was equally robust. It’s just that virtual states, when you look at them from a microphysical perspective, always seem to have some fuzziness at the edges. Consider the computational interpretation of states of a transistor that I mentioned earlier. It’s a “0” if the electrons are all on one side, it’s a “1” if they’re all on the other side, and it’s a meaningless state if it’s neither of those. But the problem is that the boundary between computational states isn’t physically absolute. If you have stray electrons floating around, there isn’t some threshold where it sharply and objectively stops being a “0” state, it’s just that the more loose electrons you have, the greater the risk that the transistor will fail to perform the causal role required for it to accurately represent a “0” in the dance of the logic gates.
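A tiny sketch of that point (the thresholds here are mine and deliberately arbitrary): reading a “bit” off a continuous physical quantity means drawing boundaries that the physics itself does not supply.

```python
# Classify a continuous charge distribution as a logical state. Where exactly
# "0" ends and "undefined" begins is a convention of the interpreter, not a
# sharp fact about the electrons.
def read_bit(charge_fraction_on_high_side):
    if charge_fraction_on_high_side <= 0.1:
        return "0"
    if charge_fraction_on_high_side >= 0.9:
        return "1"
    return "undefined"

print(read_bit(0.03), read_bit(0.97), read_bit(0.5))  # 0 1 undefined
```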
This physical non-objectivity of computational states is my version of the problems that were giving you headaches in the earlier comment. Fortunately, I know there’s more to physics than Newtonian billiard balls rebounding from each other, and that leads to some possibilities for a genuinely holistic ontology of mind.
None of that is obvious. It’s obvious that you would make the same reports, and that is all. If consciousness depends on real causality, or unsimulated physics, then it could fade out, *and you could notice that*, in the way you can notice drowsiness.
It’s not obvious, but I think it’s probably true.
Whenever I notice something, there are probably some neurons in me doing this noticing activity, but in this thought experiment every neuron outputs the exact same signals.
I wouldn’t just make the same reports to other people; each of my brain areas (or individual neurons) would make the same report to every other brain area.