Thank you for this discussion.
I was wrong about grammar and, worse, about the views of Chalmers. Since I couldn’t be bothered to read him myself, I shouldn’t have parroted someone else’s interpretations.
I now have a better understanding of your position, which is, in fact, falsifiable.
We do agree on the importance of the question of consciousness. And even if we expect the solution to have a different shape, we both expect it to be embedded in physics (old or new).
I hope I’ve somewhat clarified my own views. But if not, I don’t expect to do better in future comments, so I will bow out.
Again, thank you for the discussion.
Yeah, this was a good discussion, though unfortunately I didn’t understand your position beyond a simple level like “it’s all quarks”.
To me, the question of “where does a virtual grenade explode” just highlights the problem. I see a grenade explosion or a “death” as another bit pattern changing in the computer, which, from the computer’s perspective, is of no more significance than a screen pixel 103 pixels from the left and 39 pixels down from the top changing from brown to red. In principle a computer can be programmed to convincingly act as if it cares about “beauty” and “love” and “being in pain”, but it seems to me that nothing can really matter to the computer because it can’t really feel anything. I once wrote software that actually had a concept I called “pain”, so there were “pain” variables, and of course I am confident this caused no meaningful pain in the computer.
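To make the point concrete, here is a hypothetical sketch (my own illustration, not the original software) of what such a “pain” variable amounts to: a number the program updates and branches on, which at the hardware level is indistinguishable from any other number.

```python
# Hypothetical sketch: a "pain" variable is just a float the program
# updates and branches on; the machine feels nothing either way.
class Agent:
    def __init__(self):
        self.pain = 0.0  # to the computer, merely a 64-bit bit pattern

    def take_damage(self, amount):
        self.pain += amount  # the bit pattern changes; nothing is felt

    def behave(self):
        # The program can act "as if" it suffers...
        return "flee" if self.pain > 5.0 else "explore"

a = Agent()
a.take_damage(7.0)
print(a.behave())  # prints "flee"
```

The same branch would fire if the variable were named `temperature` or `x`; the label “pain” carries meaning only for the human reading the source.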
I intuit that at least one part* of human brains is different, and if I am wrong, it seems I must be wrong either in the direction of “nothing really matters: suffering is just an illusion” or, less likely, “pleasure and suffering do not require a living host, so they may be everywhere and pervade non-living matter”, though I have no idea how the latter could be true.
* after learning about the computational nature of brains, I noticed that the computations my brain does are invisible to me. If I glance at an advertisement with a gray tube-nosed animal, the word “elephant” comes to mind; I cannot sense why I glanced at the ad, nor do I have any visibility into the processes of interpreting the image and looking up the corresponding word. What I feel, at the level of executive function, is only the output of my brain’s computations: a holistic sense of elephant-ness (and I feel as though I “understand” this output—even though I don’t understand what “understanding” is). I have no insight into what computations happened, nor how. My interpretation of this fact is that most of the brain is non-conscious computational machinery (just as a human hand or a computer is non-conscious) which is connected to a small kernel of “consciousness” that somehow feels high-level outputs from these machines, and has some kind of influence over how the machinery is subsequently used. Having seen the movie “Being John Malkovich”, and having recently heard of the “thousand brains theory”, I also suppose that consciousness may in fact consist of numerous particles which likely act identically under identical circumstances (like all other particles we know about), so that many particles might be functionally indistinguishable from one “huge” particle.
It’s not true that particles behave identically under identical circumstances—that would be determinism.
If it were true, it wouldn’t only apply to consciousness, or mean that “consciousness is One” in some sense that doesn’t apply to everything else.
There’s a lot of information in N particles. If you want to conserve it all, your huge particle has to exist in a 3N-dimensional space. But a freely moving particle in 3N-dimensional space would behave locally in that space, not in our three dimensions, so you also need constraints to recover ordinary locality. Which is basically the argument for space really being 3-dimensional.
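The dimension-counting here can be made concrete. A minimal sketch (my own illustration, not part of the original comment) of how the full state of N particles in 3 dimensions corresponds to a single point in a 3N-dimensional configuration space, with no information lost:

```python
import numpy as np

# The state of N particles in 3D is N rows of 3 coordinates.
N = 4
positions = np.random.rand(N, 3)

# Flattening gives one "huge particle": a single point in 3N dimensions.
point = positions.reshape(3 * N)
assert point.shape == (12,)  # 3 * 4 coordinates

# All the information survives: the N-particle state is fully recoverable.
recovered = point.reshape(N, 3)
assert np.array_equal(recovered, positions)
```

The flattening itself is trivial; the physical content of the argument is that a generic trajectory of that single 3N-dimensional point does not look like N particles interacting locally in 3D unless extra constraints are imposed.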