The truth was always before our eyes:
Signer
Neuron count intuitively seems to be a better proxy for the variety/complexity/richness of positive experience. Then you can have an argument about how you wouldn’t want to just increase the intensity of pleasure: that’s just a relative number. What matters is that pleasure is interesting. And so you would assign lesser weights to less rich experiences. You can also generalize this argument to negative experiences—maybe you don’t want to consider pain to be ten times worse just because someone multiplied some number by 10.
But I would think that a broader spectrum of potential sources of pleasure and pain would still not require a greater spectrum of intensity.
Isn’t pain in both wings worse than in one?
Russellian panpsychism doesn’t postulate a new force—physics already accepts the causal role of existence: only existing neurons can fire.
And it explains the epistemic link—it’s cogito ergo sum—you’re always right when you think that the universe exists.
And a rock’s perception belongs to the rock.
Would anyone describe it as theirs? That access is reflective. It’s pretty difficult to retrieve data in a format you didn’t store it in.
But what if there is no access or self-description or retrieval? You just appear fully formed, stare at a wall for a couple of years and then disappear. Are you saying that describing your experiences makes them retroactively conscious?
Even if I’m not thinking about myself consciously [i.e., my self is not reflecting on itself], I have some very basic perception of the wall as being perceived by me, a perceiver—some perception of the wall as existing in reference to me.
Is it you inspecting your experience, or you making an inference from the “consciousness is self-awareness” theory? Because it doesn’t feel reflective to me? I think I just have a perception of a wall without anything being about me. It seems to be implementable by just a forward pass streamed into short-term memory or something, as in the sketch below. If you just separated such a process and put it on repeat, just endlessly staring at a wall, I don’t see a reason why anyone would describe it as reflective.
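To make that concrete, here is a minimal sketch of the kind of process I mean: a fixed forward pass streamed into a bounded short-term memory, with nothing anywhere that refers to the system itself. The names (forward_pass, the percept format) are placeholder assumptions, not a claim about how brains do it:

```python
from collections import deque

# Placeholder "perception": any pure function of the current sensory frame.
def forward_pass(frame):
    return {"brightness": sum(frame) / len(frame)}

# Bounded short-term memory: old percepts simply fall out of the buffer.
short_term_memory = deque(maxlen=10)

def stare(frames):
    for frame in frames:
        percept = forward_pass(frame)      # perceive the wall
        short_term_memory.append(percept)  # stream into short-term memory
        # That's the whole loop: nothing reads the memory back, and no
        # state anywhere models the process that is doing the perceiving.

# Endlessly staring at a uniformly white wall:
stare([255, 255, 255] for _ in range(100000))
```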
I mean, it is reflective in the sense that inner neurons observe outer neurons, so in a sense it is a brain observing a brain. But even rocks have connected inner layers.
The thing I don’t understand about the claimed connection between self-model and phenomenal consciousness is that I don’t see much evidence for the necessity of a self-model for the implementation of conscious perception—when I just stare at a white wall without internal dialog or other thoughts, what part of my experience is not implementable without a self-model?
“Death is fine if AI doesn’t have self-preservation goal” or “suffering is bad” are also just human ethical assumptions.
You are talking about the experience of certainty. I’m asking why you trust it.
I know it’s beyond doubt because I am currently experiencing something at this exact moment.
That’s a description of a system where your experience directly hijacks your feeling of certainty. You wouldn’t say that “I know it’s beyond doubt there is a blue sky, because blue light hits my eyes at this exact moment” is a valid justification for absolute certainty. Even if you feel certain about some part of reality, you can contemplate being wrong, right? Why not say “I’m feeling certain, but I understand the possibility of being wrong”, the same way you would about there being a blue sky? The possibility is physically possible (I described it). It’s not even phenomenologically unimaginable: it would feel like misremembering.
Why insist on describing your experience as “knowledge”? It’s not like you have perfect evidence for the fact “experience is knowledge”; you just have a feeling of certainty.
And if the seeing-neurons of someone’s brain are in the state of seeing red, but they are thinking and saying that they see blue, would you say they are right?
How do you know it’s beyond doubt? Why is your experience of a blue sky not guaranteed to be right about the sky, while your experience of certainty about experience is always magically right?
What specifically is beyond doubt, if the seeing-neurons of your brain are in the state of seeing red, but you are thinking and saying that you see blue?
If a doctor asks a patient whether he is in pain, and the patient says yes, the doctor may question whether the patient is honest. But he doesn’t entertain the hypothesis that the patient is honest but mistaken.
Nothing in this situation uses certain self-knowledge of the moment of experience. The patient can’t communicate it—communication takes time, so it can be spoofed. More importantly, if the patient’s knowledge of pain can be wrong in the same sense it can be wrong later (the patient says and thinks that they are not in pain, but they actually are, and so have perfectly certain knowledge of being in pain, for example), the doctor should treat it the same way as the patient misremembering the pain. Because the doctor cares about the state of the patient’s brain, not their perfectly certain knowledge. Because calling “being in a state” “knowledge” is epiphenomenal.
Another way to illustrate this is that you can’t describe your pain with perfect precision; you can’t perfectly tell apart levels of pain. So if you can’t be sure which pain you are feeling, why insist that you are sure you are feeling pain rather than pressure? What exactly are you sure about?
And, obviously, the actual reason doctors don’t worry about it in practice is that it’s unlikely, not that it’s impossible.
though since it is about an external fact, it is itself not sufficient for knowledge.
What does “external” mean? Can I tell the doctor everything about the chemical composition of the air if I decide the air is a part of me? Can I be wrong about the temperature of my brain? About my believing that a supermarket is around the corner?
I think it’s the opposite: there is no reason to hypothesize that you need a second, additional mental state in order to know that you are in the first mental state.
One reason is that this is how all other knowledge works—one thing gains knowledge about another by interacting with it. Another reason is that perfectly certain self-knowledge works differently. And we already have a contradiction-free way to describe it—“being in a state”. Really, the only reason for calling it perfectly certain knowledge is unjustified intuition.
Another reason is that it’s not really just a hypothesis when you in fact have parts other than some specific quale. And these other parts implement knowledge in a way that allows it to be wrong the same way memories can be wrong. So you’ll have potentially wrong knowledge about qualia anyway—defining an additional, epiphenomenal, perfectly certain self-knowledge wouldn’t remove it.
You’ve seen 15648917, but later you think it was 15643917. You’re wrong, because actually the state of your neurons was that of (what you would usually describe as) seeing 15648917. If in the moment of seeing 15648917 (in the moment when your seeing-neurons are in the state of seeing 15648917) you are thinking that you see 15643917 (meaning your thinking-neurons are in the state of thinking that you see 15643917), then you are wrong in the same way you may be wrong later. It works the same way knowledge about everything works.
You can define “being in the state of seeing 15648917” as “knowing you are seeing 15648917”, but there is no reason to do it: you will get unnecessary complications, you can’t use this knowledge, it doesn’t work like knowledge—because it’s not knowing about a state, it’s being in a state.
it’s the only thing I can know for certain
You can’t be certain about any specific quale: you can misremember what you were seeing, so there is an external truth-condition (something like “these neurons did such and such things”), so it is possible in principle to decouple your thoughts of certainty from what actually happened with your experience. So illusionism is at least right that your knowledge of your qualia is imperfect and uncertain.
Even if it’s incomplete in that way, it doesn’t have metaphysical implications.
Therefore Mary’s incomplete knowledge about consciousness doesn’t have metaphysical implications, because it is incomplete in fundamentally the same way.
Mary doesn’t know what colour qualia look like, and therefore has an incomplete understanding of consciousness.
Mary doesn’t know how to ride, and therefore has an incomplete understanding of riding. What’s the difference?
Both need instantiation for what?
For gaining potential utility from specific knowledge representations, for knowledge that feels intuitively complete. I guess the “you can’t learn to ride in your room” requirement is not exactly and only instantiation? Anyway, the intended general category is “useful knowledge representations”.
Unhelpful
I mean all of them: if physicalism explains riding a bike (physical equations give knowledge in some form and also predict you gaining a new knowledge representation when you actually learn to ride), then it explains its appearance in an analogous way (physical equations give knowledge about the bike’s appearance in some form and also predict you gaining a new knowledge representation when you actually see it).
Bikes aren’t appearances, so there is no analogy.
The analogy is that they both need instantiation. That’s the thing about appearances that is used in the argument.
Know-how, such as riding skills, is not an appearance, or physical knowledge.
So physicalism is false, because physical knowledge is incomplete without know-how.
Nonetheless, there is a difference.
Sure, they are different physical processes. But what’s the relevant epistemological difference? If you agree that Mary is useless, we can discuss whether there are ontological differences.
Riding bikes? How they work? How they appear?
Yes.
in the case of experience, and only in the case of experience
The problem is that in no other case does it seem necessary to instantiate a brain state in order to understand something.
Again, this is false—it is just as necessary in the case of riding. And the differences between knowing about qualia and knowing about fusion are explained by preferences: humans just don’t care about or need instantiating fusion, but care about instantiating red. In both cases you are physically affected, and so you (can define knowledge in such a way that you) gain a new representation of knowledge by instantiation.
What it looks like is the representation! A different representation just isn’t a quale. #FF0000 just isn’t a red quale!
But reading a book on riding a bike isn’t knowing how to ride a bike... you get the knowledge from mounting a bike and trying!
The knowledge of representation is the whole thing! Qualia are appearances!
If you want to define things that way, ok. So Mary’s room implies that bikes are as unphysical as qualia.
It bypasses what you are calling representation … you have admitted that.
Mary also doesn’t have all representations for all physical knowledge. She doesn’t have to have a concept of fire, or equations in all possible notations, or riding skills.
That doesn’t mean there isn’t a difference between different kinds of knowing.
Mary’s room doesn’t provide motivation for there being a fundamental difference between knowing how to ride a bike and knowing what it is like to see red. And physicalism explains bikes, right?
The physics equations representing a brain don’t contain qualia then, since they don’t exist as a brain.
Yes, of course, like they don’t contain atoms or fire or whatever. The reality they describe contains them. Well, except that equations are physical objects, so you could write equations with brains or something, but that’s not relevant.
And the additional representation of red in Mary’s brain after she sees it also doesn’t contain her being in a state of seeing red.
If Mary looks at these equations, in her monochrome room, does she go into the brain state that instantiates seeing something red?
No.
Does she somehow find out what red looks like without that?
Yes.
What does that mean? Are you saying Mary already knew what red looks like, and instantiating the brain state adds no new knowledge?
She already knew what red looks like; the knowledge was just in a different representation. Just like with knowing how to ride a bike. “No new”, like everything here, depends on definitions. But she definitely undergoes a physical change, which may be viewed as her gaining a new representation of knowledge, which may be valuable for her.
Mary can “predict pixels” in some sense that bypasses her knowing what colour qualia look like. Just as a blind person can repeat, without understanding, that tomatoes look red, Mary can state that such and such a brain state would have an RGB value of #FF0000 at such and such a pixel. #FF0000 is a symbol for something unknown to her, just as much as r-e-d. So it’s not a prediction of a quale in the relevant sense.
Mary can “predict muscles” in some sense that bypasses her knowing how to ride a bike. Just as a boring person can repeat, without understanding, that bikes are fast, Mary can state that such and such a muscle state would have an XYZ value of (1,2,3) at such and such a cell. (1,2,3) is a symbol for something unknown to her, just as much as b-i-k-e. So it’s not a prediction of riding in the relevant sense.
Without additional assumptions, predicting pixels doesn’t bypass anything more than predicting atoms bypasses knowing fire. You can separately intuitively assume that there is a difference, but what’s unique about Mary’s situation is resolvable by just noting that there is a difference between being and knowing.
Why is red “something unknown to her”? If answering that requires additional arguments, like zombies, then you don’t need Mary. And the answer is not “because she gains knowledge by instantiating”, because it’s the same way with bikes.
Why?
I mean that from qualia requiring you to exist in a specific state, you can make a jump to “consciousness is existence”.
If you don’t think there is an HP because of Mary’s Room, why do you think there is an HP?
Because of the Zombies Argument. “What part of physical equations says our world is not a zombie-world?” is a valid question. The answer to “What part of physical equations says what red looks like?” is just “the part that describes the brain”.
It’s supposed to indicate that there is a hard problem, i.e. that even a super scientist cannot come up with a reductive+predictive theory of qualia.
It doesn’t indicate it independently of other assumptions. Mary’s situation only implies that you should track the difference between knowing and being (which I guess is a hint that panpsychism solves the HP); it doesn’t say there is something wrong with “Mary has a reductive+predictive theory of qualia, which is confirmed by her ability to predict every pixel of her future experience”.
What red looks like, as stated in all forms of the story.
Mary can say “it looks like <pixel array>”.
First, you can still infer meta-representation from your behavior. Second, why does it matter that you represent aversiveness; what’s the difference? A representation of aversiveness and a representation of damage are both just some states of neurons that model some other neurons (representation of damage still implies the possibility of modeling neurons, not only external state, because your neurons are connected to other neurons).
I understand that, but I’m still asking why subliminal stimuli are not morally relevant for you. They may still create a disposition to act in an aversive way, so there is still a mechanism in some part of the brain/neural network that causes this behaviour and has access to the stimulus—what’s the morally significant difference between a stimulus being in some neurons and being in others, such that you call only one location “awareness”?
Even if you have poor heuristics, it may still be worth it to google, open the docs, and walk the obvious links. The point is not to have an algorithm that certainly finds everything relevant, but to try many things that may work.
You speed up time. Or, more generally, you prepare an environment that increases reproduction frequency, like slow hardware/network or higher load. You spam clicks and interrupt every animation, because all bugs are about asynchronous things. You save state after reproduction, or better, before it, and start from that. If all else fails, you add logs/breakpoints to be ready next week. But usually you just look at the code to figure out paths that may manifest as your bug and then try to reproduce the promising paths.
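To make the “increase reproduction frequency” idea concrete, here is a minimal sketch, assuming a toy lost-update race in Python’s asyncio; Counter, hunt, yield_chance, and repro_state.json are all hypothetical names, and the yield probability stands in for slow hardware or higher load:

```python
import asyncio
import json
import random

# Toy system under test: concurrent tasks do a read-modify-write on a
# shared counter. An update is lost only if a task yields between the
# read and the write, so the bug is intermittent.
class Counter:
    def __init__(self):
        self.value = 0

    async def increment(self, yield_chance):
        current = self.value
        # Simulated slowness inside the race window. Raising yield_chance
        # plays the role of slow hardware or higher load: it widens the
        # window and makes the lost update reproduce far more often.
        if random.random() < yield_chance:
            await asyncio.sleep(0)
        self.value = current + 1

async def run_once(yield_chance):
    counter = Counter()
    await asyncio.gather(*(counter.increment(yield_chance) for _ in range(10)))
    return counter.value  # anything below 10 means an update was lost

def hunt(yield_chance, attempts=1000):
    for attempt in range(attempts):
        value = asyncio.run(run_once(yield_chance))
        if value != 10:
            # Save the reproducing configuration, so the next session
            # starts from it instead of rediscovering it.
            with open("repro_state.json", "w") as f:
                json.dump({"attempt": attempt, "yield_chance": yield_chance,
                           "value": value}, f)
            print(f"reproduced on attempt {attempt}: value={value}")
            return
    print(f"no reproduction in {attempts} attempts at yield_chance={yield_chance}")

if __name__ == "__main__":
    hunt(yield_chance=0.01)  # the "field" environment: rare, slow to catch
    hunt(yield_chance=0.9)   # hostile environment: reproduces almost at once
```

The same shape works outside toy examples: the knob is whatever widens the race window (artificial latency, load, interrupted animations), and the saved state is whatever lets you restart from just before the failure.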