I want to say that I am very uncertain about consciousness and internal experience and how it works. I do think that your position (it depends on some form of an internal model which computes a lot of details about oneself) feels more plausible than other ideas like “Integrated Information Theory.”
What drove me so insane that I would utter the words “maybe consciousness and internal experience don’t exist” is that, from first principles, if consciousness exists, it probably exists on a spectrum. There is no single neuron firing that takes you from 0 consciousness to 1 consciousness.
Yet trying to imagine being something with half as much consciousness or twice as much consciousness as myself seems impossible.
I can try to imagine the qualia being more “intense/deep” or more “faded,” but that seems so illogical, because the sensation of intensity or fadedness is probably just a quantity used to track how much I should care about an experience. If I took drugs which made everything feel much more profound (or much more faded), I don’t think I’d actually be many times more conscious (or less conscious). I’d just care more (or less) about each experience.
The only way I can imagine consciousness is to imagine that some things have a consciousness and some things do not; that things with a consciousness experience the world, in the sense that I could theoretically be them and feel strange things or see strange things; and that things which do not have a consciousness don’t experience the world, and I couldn’t possibly be them.
If I were forced to think that consciousness wasn’t discrete but existed on a spectrum (with some things being 50% or 10% as conscious as me), it would be just as counterintuitive as if consciousness didn’t exist at all, and it was just my own mind deciding which objects I should try to predict by imagining being them.
trying to imagine being something with half as much consciousness
Isn’t this what we experience every day when we go to sleep or wake up? We know it must be a gradual transition, not a sudden on/off switch, because sleep is not experienced as a mere time-skip: when you wake up, you are aware that you were recently asleep, and not confused about how it’s suddenly the next day. (Or at least, I don’t get the time-skip experience unless I’m very tired.)
(When I had my wisdom teeth extracted under laughing gas, it really did feel like all-or-nothing, because when I awoke I asked if they were going to get started with the surgery soon, and had to be told “Actually, it’s finished already.” This is not how I normally experience waking up every morning.)
Hmm. I think there are two framings of consciousness.
Framing 1 is how aware I am of my situation, how clear my memories are, and so forth.
Framing 2 is that for beings with a low level of consciousness, there is no inner experience, no qualia, only behavioural mechanisms.
In framing 1, I can imagine being less conscious or more conscious. But it’s hard to imagine gradually becoming less and less conscious until, at some point, I cease to have inner experience or qualia and I’m just a robot with behavioural mechanisms (running away from things I fear, seeking things which give me reward).
It’s the second framing which I think might not exist, since imagining it on a continuum feels as hard as imagining it doesn’t exist.
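To me, it doesn’t even need to be imagined. Everyone has experienced partial consciousness, e.g.:
Dreaming, where you have phenomenal awareness, but not of an external world.
Deliberate visualisation, which is less phenomenally vivid than perception in most people.
Drowsiness, the states between sleep and waking.
Autopilot and flow states, where the sense of a self deciding actions is absent.
More rarely, there are forms of heightened consciousness: peak experiences, meditative jhānas, psychedelically enhanced perceptions, etc.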
What drove me so insane … is that, from first principles, if consciousness exists, it probably exists on a spectrum. There is no single neuron firing that takes you from 0 consciousness to 1 consciousness.
Yet trying to imagine being something with half as much consciousness or twice as much consciousness as myself seems impossible.
You could instead conclude that the first principles are wrong, and that consciousness depends on something more than “neurons”, something that is inherently all or nothing.
Consider a knot. The knottedness of a knot is a complex unity. If you cut it anywhere, it’s no longer a knot.
Physics and mathematics contain a number of entities, from topological structures to nonfactorizable algebraic objects, which do not face the “sorites” problem of being on a spectrum.
The idea is not to deny that neurons are causally related to consciousness, but to suggest that the relevant physics is not just about atoms sticking together; that physical entities with this other kind of ontology may be part of it too. That could mean knots in a field, it could mean entangled wavefunctions, it could mean something we haven’t thought of.
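To make the discreteness concrete (my own illustration, not anything the physics commits to): the trefoil knot is distinguished from an unknotted loop by invariants that take exact values rather than positions on a continuum. For one chirality of the trefoil, the Jones polynomial is

$$V(t) = -t^{-4} + t^{-3} + t^{-1},$$

while the unknot has $V(t) = 1$. A closed loop realizes one of these values or the other; there is no physically meaningful state that is 50% of the way in between.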
But I feel a knot has to be made of a mathematically perfect line. If you tied a knot using real-world materials like string, I could gradually remove atoms until you were forced to say “okay, it seems to be halfway between a knot and not a knot.”
Another confusing property of consciousness: imagine you were a computer simulation, and that the simulation is completely deterministic, so that if it were run twice, the result would be identical.
So now imagine the simulators run you once, and record a 3D video of exactly how all your atoms moved. In this first run, you would be conscious.
But now imagine they run you a second time. Would you still be conscious? I think most people who believe consciousness is real would say yes, you would.
But what if they simply replay the video from the first run, without any cause and effect? Then most people would probably say you wouldn’t be conscious; you’re just a recording.
But what if they did a combination, where at each time step, some of your atoms are updated according to the simulation physics, while other atoms are updated using the past recording? Then they could adjust how conscious you were: how much cause and effect was occurring inside you. They could make some of your experiences more conscious, some of your brain areas more conscious.
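To make this concrete, here is a toy sketch in Python (the state vector and the update rule are invented stand-ins for “atoms” and “simulation physics”). The point it makes mechanical is that every mixed run reproduces the first run exactly; only the proportion of genuine cause and effect inside it changes:

```python
import numpy as np

def step(state):
    """Toy deterministic 'physics': each cell's next value depends on its
    neighbours (a stand-in for the real simulation dynamics)."""
    return (np.roll(state, 1) + np.roll(state, -1)) % 256

def run_mixed(initial, recording, live_fraction, steps):
    """Advance the state while only `live_fraction` of the cells are computed
    from the dynamics; the rest are copied from the recording of the first
    run. Determinism guarantees the trajectory is identical either way --
    only the amount of internal cause and effect differs."""
    rng = np.random.default_rng(0)
    live = rng.random(initial.shape) < live_fraction  # cells that stay "causal"
    state = initial.copy()
    for t in range(steps):
        computed = step(state)        # genuine cause and effect
        replayed = recording[t + 1]   # passive lookup of the first run
        state = np.where(live, computed, replayed)
    return state

# First run: pure physics, recorded frame by frame.
state = np.arange(16) % 7
recording = [state]
for _ in range(10):
    state = step(state)
    recording.append(state)

# Reruns with the "consciousness dial" at 100%, 50%, and 0% all match run one.
for frac in (1.0, 0.5, 0.0):
    assert np.array_equal(run_mixed(recording[0], recording, frac, 10),
                          recording[10])
```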
What would happen to your experience as certain brain areas were gradually replaced by recordings, and thus less conscious? What would happen to your qualia?
Well, you wouldn’t notice anything. Your experience would feel just the same. Until at some point, somehow, you cease to have any experience, and you’re just a recording.
You would never feel the “consciousness juice” draining out of you. You would never think that your qualia are fading.
Instead, you would have a strong false conviction that you still have a lot of qualia, when in reality there is very little left.
But what if all qualia were such a false conviction in the first place?
What if the quantity of experience/consciousness were subjective, and existed only in the map, not the territory?
Maybe there’s no such thing as qualia, but there is still such a thing as happiness and sadness and wonder. Humans feel these things. Animals feel something analogous to them. There is no objective way to quantify the number of souls or the amount of experience. But we have the instinct to care about other things, and so we do. We don’t care about insects more than fellow humans just because they are greater in number and “contain most of the experience,” but we still care about them a tiny bit.
But I feel a knot has to be made of a mathematically perfect line
There are various ways you can get a knot at a fundamental level. It can be a knot in field lines, it can be a knot in a “topological defect”, it can be a knot in a fundamental string.
what if they did a combination, where at each time step, some of your atoms are updated according to the simulation physics, while other atoms are updated using the past recording?
I don’t know if you’ve heard of it, but there is an algorithm for the cellular automaton “Game of Life”, called Hashlife, which functions like that. Hashlife remembers the state transitions arising from particular overall patterns, and when a Game of Life history is being computed, if a known pattern occurs, it just substitutes the memorized history rather than re-computing it from first principles.
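If it helps, here is a deliberately simplified sketch of that move in Python: a flat cache over whole grids. (Real Hashlife memoizes recursively over a quadtree of subpatterns and gains far more from it, so treat this as the principle only, not the algorithm.)

```python
from functools import lru_cache

def life_step(grid):
    """One plain Game of Life update of a rectangular tuple-of-tuples grid
    (toroidal wrap-around, to keep the example self-contained)."""
    h, w = len(grid), len(grid[0])
    def nxt(r, c):
        n = sum(grid[(r + dr) % h][(c + dc) % w]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0))
        return 1 if n == 3 or (grid[r][c] and n == 2) else 0
    return tuple(tuple(nxt(r, c) for c in range(w)) for r in range(h))

@lru_cache(maxsize=None)
def step_many(grid, steps):
    """Memoized multi-step evolution: the first time a pattern is seen, its
    future is computed rule by rule; every later occurrence is a lookup."""
    for _ in range(steps):
        grid = life_step(grid)
    return grid

blinker = ((0, 0, 0, 0, 0),
           (0, 0, 1, 0, 0),
           (0, 0, 1, 0, 0),
           (0, 0, 1, 0, 0),
           (0, 0, 0, 0, 0))

a = step_many(blinker, 2)   # computed by applying the basic rule
b = step_many(blinker, 2)   # retrieved from the cache, no rule applications
assert a == b == blinker    # a blinker has period 2
```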
So I take you to be asking, what are the implications for simulated beings, if Hashlife-style techniques are used to save costs in the simulation?
When thinking about simulated beings, the first thing to remember, if we are sticking to anything like a natural-scientific ontology, is that the ultimate truth about everything resides in the physics of the base reality. Everything that your computer does, consists of electrons shuttling among transistors. If there are simulated beings in your computer, that’s what they are made of.
If you’re simulating the Game of Life… at the level of software, at the level of some virtual machine, some cells may be updated one fundamental timestep at a time, by applying the basic dynamical rule; other cells may be updated en masse and in a timeskip, by retrieving a Hashlife memory. But either way, the fundamental causality is still electrons in transistors being pulled to and fro by electromagnetic forces.
Describing what happens inside a computer as 0s and 1s changing, or as calculation, or as simulation, is already an interpretive act that goes beyond the physics of what is happening there. It is the same as the fact that, objectively, the letters on a printed page are just a bunch of shapes, yet human readers have learned to interpret those shapes as propositions, descriptions, even as windows into other possible worlds.
If we look for objective facts about a computer that connect us to these intersubjective interpretations, there is the idea of virtual state machines. We divide up the distinct possible microphysical states, e.g. distributions of electrons within a transistor, and we say some distributions correspond to a “0” state, some correspond to a “1” state, and others fall outside the range of meaningful states. We can define several tiers of abstraction in this way, and thereby attribute all kinds of intricate semantics to what’s going on in the computer. But from a strictly physical perspective, the only part of those meanings that’s actually there is the causal relations. State x does cause state y, but state x is not intrinsically about the buildup of moisture in a virtual atmosphere, and state y is not intrinsically about rainfall. What is physically there is a reconfigurable computational system designed to imitate the causality of whatever it’s simulating.
All of this is Scientific Philosophy of Mind 101. And because of modern neuroscience, people think they know that the human brain is just another form of the same thing, a physical system that contains a stack of virtual state machines; and they try to reason from that to conclusions about the nature of consciousness. For example, that qualia must correspond to particular virtual states, and that therefore a simulation of a person can also be conscious, so long as the simulation is achieved by inducing the right virtual states in the simulator.
But—if I may simply jump to my own alternative philosophy—I propose that everything to do with consciousness, such as the qualia, depends directly on objective, exact, “microphysical” properties—which can include holistic properties like the topology of a fundamental knot, or the internal structure of an entangled quantum state. Mentally, psychologically, cognitively, the virtual states in a brain or a computer only tell us about things happening outside of its consciousness, like unconscious information processing.
This suggests a different kind of criterion for how much, and what kind of, consciousness there is in a simulation. For example, if we suppose that some form of entanglement is the physical touchstone of consciousness… then you may be simulating a person, but if your computer in base reality isn’t using entanglement to do so, then there’s no consciousness there at all.
Under this paradigm, it may still be possible e.g. to materialize a state of consciousness complete with the false impression that it had already been existing for a while. (Although it’s interesting that quantum informational states are subject to a wide variety of constraints on their production, e.g. the no-cloning theorem, or the need for “magic states” to run faster than a classical computer.) So there may still be epistemically disturbing possibilities that we’d have to come to terms with. But a theory of this nature at least assigns a robust reality to the existence of consciousness, qualia, and so forth.
I would not object to a theory of consciousness based solely on virtual states that was equally robust. It’s just that virtual states, when you look at them from a microphysical perspective, always seem to have some fuzziness at the edges. Consider the computational interpretation of the states of a transistor that I mentioned earlier. It’s a “0” if the electrons are all on one side, it’s a “1” if they’re all on the other side, and it’s a meaningless state if it’s neither of those. But the problem is that the boundary between computational states isn’t physically absolute. If you have stray electrons floating around, there isn’t some threshold where it sharply and objectively stops being a “0” state; it’s just that the more loose electrons you have, the greater the risk that the transistor will fail to perform the causal role required for it to accurately represent a “0” in the dance of the logic gates.
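A toy sketch of that fuzziness (the 0.2/0.8 cutoffs are invented for illustration; nothing in the physics singles them out):

```python
def read_bit(charge, lo=0.2, hi=0.8):
    """Interpret a transistor's normalized charge level as a logical state.
    The cutoffs are pure convention: the physics varies continuously, and
    no threshold is objectively the edge of 'being a 0'."""
    if charge <= lo:
        return "0"
    if charge >= hi:
        return "1"
    return None  # physically unremarkable, computationally meaningless

print(read_bit(0.05))  # '0': reliably plays the causal role of a zero
print(read_bit(0.95))  # '1'
print(read_bit(0.50))  # None: no objective fact about which bit this "is"
```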
This physical non-objectivity of computational states is my version of the problems that were giving you headaches in the earlier comment. Fortunately, I know there’s more to physics than Newtonian billiard balls rebounding off each other, and that leads to some possibilities for a genuinely holistic ontology of mind.
None of that is obvious. It’s obvious that you would make the same reports, and that is all. If consciousness depends on real causality, or unsimulated physics, then it could fade out, *and you could notice that*, in the way you can notice drowsiness.
It’s not obvious, but I think it’s probably true.
Whenever I notice something, there are probably some neurons in me doing this noticing activity, but in this thought experiment every neuron outputs the exact same signals.
I wouldn’t just make the same reports to other people; each of my brain areas (or individual neurons) would make the same report to every other brain area.