I don’t understand how illusionists can make the claims they do (and a quick ramble about successionists).
The main point for this being that I am experiencing qualia right now, and ultimately it’s the only thing I can know for certain. I know that me saying “I experience qualia and this is the only true fact I can prove for certain about the universe” isn’t verifiable from the outside, but surely other people experience the exact same thing? Are illusionists, and people who claim qualia doesn’t exist in general, P-Zombies?
As for successionists, and honestly utilitarians in general, but only when they apply it to situations which result in their own deaths, I cannot understand this viewpoint. I don’t particularly care if the AI that wipes us out is conscious or not, or experiences tons of fun or not, or frankly even if anyone who continues to exist after I die has fun or not or dies or not, because I will be dead, and at that point, from my perspective, the universe may as well not exist anymore. I get this is an incredibly ‘selfish’ take, and would not form a good foundation to build a society on if everyone held this view (at least until there were viable levers to pull on to make immortality possible anyway), but I find it really strange that I don’t see this view being expressed by anyone else?
Why not go one step further and say “I don’t care about myself tomorrow?” For all I know, only me today really exists for sure. Tomorrow me is someone else, I only have a false anticipation of this person’s experiences, because evolution programmed me to care about my genes, and falsely believing I will experience what tomorrow me experiences will make me better serve my genes.
But I won’t actually experience tomorrow me’s experiences, I’ll be gone.
In fact, it’s not just tomorrow me. It’s the me of next second.
In my experience, I end up being the me of the next day/second/moment, or at least I experience that being so, so it makes sense to continue to assume I will be the next moment’s me, since that is what I observe of the past (or at least that’s what my memory says), and I gain nothing by not ‘going along with it’.
I think a lot of discussion around what you should consider your successor is way, way too complex and completely misses what is actually going on. Your ‘future self’ is whatever thing you end up seeing out of the eyes of, regardless of what values or substrate or whatever it happens to have. If you experience from its POV, that’s you.
I agree it is extremely complex, but I don’t agree it completely misses what is actually going on.
You were earlier arguing that,
“The main issue I have is that, especially in the case of succession but in general too, I see that situations are often evaluated from some outside viewpoint which continues to be able to experience the situation rather than from the individual itself, which while necessary to stop the theorizing after the third sentence, isn’t what would ‘really happen’ down here in the real world.”
This implies you are certain that you will not experience anything after the individual you dies, yet you strongly believe that you will experience what future you will experience.
So it’s relevant to ask the question of, where do you draw the line between future you, vs. someone else? What if future you gets Alzheimer’s, and forgets 80% of your memories, making him no different than someone else? But then scientists invent a technology that cures aging (and Alzheimer’s), so future you lives a very long and wonderful life? Do you anticipate this person’s experiences?
Where do you draw the line? What if he forgets only 10% of your memories, or forgets 99.9% of your memories? What if someone completely separate from you, reads about your life, and delusionally believes that you are his past self? And gains 80% of your memories despite originally being someone else?
My opinion is that even if qualia in the present moment is more than an illusion, your decision for which qualia to anticipate in the future, is subjective. Any apparently objective rule for which qualia you should anticipate experiencing in the future, is an illusion. Any objective graph of consciousness (which dictates when each person’s consciousness should flow to his/her “future self,” and when each person’s consciousness “dies,”) starts to become nonsensical as soon as you consider these “edge cases” between future you and someone else.
What if future you gets Alzheimer’s, and forgets 80% of your memories, making him no different than someone else?
The answer to this is super straightforward: do I continue experiencing qualia from the point of view of this future me? If yes, then absolutely nothing else matters, that’s me. If at some point during the Alzheimer’s I stop experiencing (permanently) then that isn’t me. If at some point after that I begin experiencing again, then whether or not ‘I’ died or was just unconscious is semantics. Memory doesn’t matter, the only thing that matters is the current experience I am having, as that is the only thing I can prove to exist.
If at some point during the Alzheimer’s I stop experiencing (permanently) then that isn’t me. If at some point after that I begin experiencing again, then whether or not ‘I’ died or was just unconscious is semantics.
When you say that dying vs. being unconscious is just semantics, that means you will experience future you’s qualia, even if he temporarily stops experiencing qualia, and loses 80% of your memories, right?
But what if future you loses 100% of your memories? Imagine it’s not just Alzheimer’s, but that the atoms of your brain are literally scrambled, and then rearranged to be identical to Obama’s brain. Would you continue to experience qualia, (with false memories of being Obama)?
If the answer is yes, then what if you literally die. But then the atoms of your dead body are absorbed by plants, which then get eaten by someone else, who gives birth to a baby. Now suppose this was done in a way most of your atoms eventually end up in this child as he grows up. Will you continue to experience his qualia?
The key question is, how badly do your atoms need to be scrambled, before the person they form no longer counts as “you,” and you won’t experience the qualia that he experiences? Do you agree that there is no objective answer? (And therefore, it’s not that unreasonable to anticipate experiences after death)
When you say that dying vs. being unconscious is just semantics, that means you will experience future you’s qualia, even if he temporarily stops experiencing qualia, and loses 80% of your memories, right?
To me, death is permanent loss, unconsciousness is temporary loss.
But what if future you loses 100% of your memories? Imagine it’s not just Alzheimer’s, but that the atoms of your brain are literally scrambled, and then rearranged to be identical to Obama’s brain. Would you continue to experience qualia, (with false memories of being Obama)?
No idea; that’s a physics question, not a philosophy one. I think if it was a gradual process then probably, yeah, that’s basically what already happens.
get eaten by someone else, who gives birth to a baby. Now suppose this was done in a way most of your atoms eventually end up in this child as he grows up. Will you continue to experience his qualia?
Probably not, but if yes, since there are no memories of my ‘past’ life, it’s impossible for me to know whether I had a previous set of memories.
The key question is, how badly do your atoms need to be scrambled, before the person they form no longer counts as “you,” and you won’t experience the qualia that he experiences? Do you agree that there is no objective answer?
Again, this is a physics question, not philosophy, but I believe there will someday be an objective answer to what’s going on with consciousness. I’m partial to naturalistic dualism, or to consciousness being some sort of emergent property of algorithms in general, like IIT (though IIT only says how it can be measured, not what it actually is?).
I suspect that IIT (and other theories of consciousness) will only say how conscious you are at a given moment. It won’t say whether or not that future person “really is you,” since that is subjective. It’s your choice which person in the future counts as “you.”
But anyways, I think we agree on one thing: it’s very uncertain whether or not you experience the qualia of someone else after you die. My opinion is that it’s completely subjective, your opinion is that it depends on physics and you don’t know, but we both agree that it’s complicated.
Given that it’s complicated, it seems like a very good hedge to care about what happens to humanity after you die. After all, the future may be full of great wonder so deep and long, that the present will seem relatively fleeting.

So? If I’m not there to experience it, and it can’t affect me in any way, it may as well not exist at all.
:) I thought your last comment admitted that you were quite uncertain whether “the experience of qualia will resume,” after you die and your atoms are eventually rearranged into other conscious beings.
I’m saying that if there’s a chance you will continue to experience the future, it’s worth caring about it.

If I come back, then I wasn’t dead to begin with, and I’ll start caring then. Until then, the odds are low enough that it doesn’t matter.
Your ‘future self’ is whatever thing you end up seeing out of the eyes of, regardless of what values or substrate or whatever it happens to have. If you experience from its POV, that’s you.
Isn’t this circular? What counts as “you” is precisely what’s at issue here. (If I’m missing the point, maybe you can make your position more concrete, e.g. by explaining how it resolves some controversial cases.)
The main point for this being that I am experiencing qualia right now, and ultimately it’s the only thing I can know for certain. I know that me saying “I experience qualia and this is the only true fact I can prove for certain about the universe” isn’t verifiable from the outside, but surely other people experience the exact same thing? Are illusionists, and people who claim qualia doesn’t exist in general, P-Zombies?
Well, something is definitely going on. But I think a very reasonable position, which often overlaps with illusionism, is that lots of people are constructing the wrong mental model of what’s going on. (Or making bad demands of what a good mental model should be.)
It’s a bit similar to using a microscope to do science that leads you to make a mental model of the world in which “microscope” is not an ontologically basic or privileged thing.

This discussion has been had many times before on LessWrong. I suggest taking Why it’s so hard to talk about Consciousness as a starting point.
You can’t be certain about any specific quale: you can misremember what you were seeing, so there is an external truth-condition (something like “these neurons did such and such things”), and so it is possible in principle to decouple your thoughts of certainty from what actually happened with your experience. So illusionism is at least right that your knowledge of your qualia is imperfect and uncertain.
My memory can be completely false, I agree, but ultimately the ‘experience of experiencing something’ I’m experiencing at this exact moment IS real beyond any doubt I could possibly have, even if the thing I’m experiencing isn’t real (such as a hallucination, or reality itself if there’s some sort of solipsism thing going on).
How do you know it’s beyond doubt? Why is your experience of the blue sky not guaranteed to be right about the sky, while your experience of certainty about your experience is always magically right?
What specifically is beyond doubt, if the seeing-neurons of your brain are in the state of seeing red, but you are thinking and saying that you see blue?
I know it’s beyond doubt because I am currently experiencing something at this exact moment. Surely you experience things as well and know exactly what I’m talking about. There is no set of words I could use to explain this any better.
You are talking about the experience of certainty. I’m asking why you trust it.
I know it’s beyond doubt because I am currently experiencing something at this exact moment.
That’s a description of a system where your experience directly hijacks your feeling of certainty. You wouldn’t say that “I know it’s beyond doubt there is a blue sky, because blue light hits my eyes at this exact moment” is a valid justification for absolute certainty. Even if you feel certain about some part of reality, you can contemplate being wrong, right? Why not say “I’m feeling certain, but I understand the possibility of being wrong,” the same way you can about there being a blue sky? The possibility is physically possible (I described it). It’s not even phenomenologically unimaginable—it would feel like misremembering.
Why insist on describing your experience as “knowledge”? It’s not like you have perfect evidence for the fact “experience is knowledge”; you just have a feeling of certainty.
And if seeing-neurons of someone’s brain are in the state of seeing red, but they are thinking and saying that they see blue, would you say they are right?

Memories of qualia are uncertain, but current qualia are not.
You’ve seen 15648917, but later you think it was 15643917. You’re wrong, because actually the state of your neurons was that of (what you usually describe as) seeing 15648917. If in the moment of seeing 15648917 (in the moment when your seeing-neurons are in the state of seeing 15648917) you are thinking that you see 15643917 (meaning your thinking-neurons are in the state of thinking that you see 15643917), then you are wrong in the same way you may be wrong later. It works the same way knowledge about everything else works.
You can define “being in the state of seeing 15648917” as “knowing you are seeing 15648917”, but there is no reason to do it: you will get unnecessary complications, you can’t use this knowledge, and it doesn’t work like knowledge—because it’s not knowing about a state, it’s being in a state.
I disagree. Knowing that I’m in pain doesn’t require an additional and separate mental state about this pain that could be wrong. My being in pain is already sufficient for my knowledge of pain, so I can’t be mistaken about being in pain, or about currently having some other qualia.
If a doctor asks a patient whether he is in pain, and the patient says yes, the doctor may question whether the patient is honest. But he doesn’t entertain the hypothesis that the patient is honest but mistaken. We don’t try to convince people who complain about phantom pains that they are actually not in pain after all. More importantly, the patient himself doesn’t try to convince himself that he isn’t in pain, because that would be pointless, even though he strongly wishes it to be true.
You can define “being in the state of seeing 15648917” as “knowing you are seeing 15648917”, but there is no reason to do it: you will get unnecessary complications
I think it’s the opposite: there is no reason to hypothesize that you need a second, additional mental state in order to know that you are in the first mental state.
because it’s not knowing about a state, it’s being in a state.
All knowing involves being in a state anyway, even in other cases where you have knowledge about external facts. Knowing that a supermarket is around the corner requires you to believe that a supermarket is around the corner. This belief is a kind of mental state; though since it is about an external fact, it is itself not sufficient for knowledge. Having such a belief about something (like a supermarket) is not sufficient for its truth, but having an experience of something is.
Knowing that I’m in pain doesn’t require an additional and separate mental state about this pain that could be wrong.
Well, remember that illusion where you see a rubber hand, then some guy strikes it with a hammer and you recoil, perturbed and confused about whether you are in pain or not. Or sometimes you can notice that you are in pain, and in fact were in pain for some time, you just hadn’t paid attention to it, but it’s obvious as you look at your memory now.

I think pain has a kinda “belief state” about it.
I never did the rubber hand test, but recoiling from something doesn’t mean you believe you are in pain. (It doesn’t even necessarily mean you believe you have been injured, as you may recoil just from seeing something gross.) And the confusion from the rubber hand illusion presumably comes from something like a) believing that your hand has been physically injured and b) not feeling any pain. Which is inconsistent information, but it doesn’t show that you can be wrong about being in pain.
About (not) noticing pain: I think attention is a degree of consciousness, and that in this case you did pay some attention, just not complete attention. So you experienced the pain to a degree, and you knew the pain to the same degree. So this isn’t an example of being in pain without knowing it. Nor of falsely believing you’re not in pain.
There is a related case in which you may not be immediately able to verbalize something you know. But that doesn’t mean you didn’t know it, only that not all knowledge is propositional, and that knowing a propositional form isn’t necessary for knowing the what or how.
Feels like yet again the distinction of “starting from where we factor out everything else”. I’m more uncertain about that than most proponents of both camps.
Maybe at some point it makes sense to say that I’m confused about what I feel, what my qualia are? But you have the uhh intention to go deeper and say “oh, you are confused, so you feel confusion instead of whatever you are confused about, case closed”. Kinda god-of-the-gaps style.
It also might be that most cases of feeling things are very apparent, so the weird examples are rare and weird. Most feelings are like 2 + 2 = 4: you would have trouble imagining being confused or wrong about it, whereas you can readily imagine being confused about 74389 + 37423 = 111812.
Then the illusionist says “I have no idea what you are talking about; yes, pain and redness, I feel them, but what’s so interesting about that, metaphysically? They are just states of my brain, duh, nothing special”. And I think it’s kinda weird that I’m here and feeling things? Like, it just feels weird that it’s a thing? idk.
If a doctor asks a patient whether he is in pain, and the patient says yes, the doctor may question whether the patient is honest. But he doesn’t entertain the hypothesis that the patient is honest but mistaken.
Nothing in this situation uses certain self-knowledge of the moment of experience. The patient can’t communicate it—communication takes time, so it can be spoofed. More importantly, if the patient’s knowledge of pain is wrong in the same sense it can be wrong later (the patient says and thinks that they are not in pain, but they actually are, and so have perfectly certain knowledge of being in pain, for example), the doctor should treat it the same way as the patient misremembering the pain. Because the doctor cares about the state of the patient’s brain, not their perfectly certain knowledge. Because calling “being in a state” “knowledge” is epiphenomenal.
Another way to illustrate this is that you can’t describe your pain with perfect precision; you can’t perfectly tell apart levels of pain. So if you can’t be sure which pain you are feeling, why insist you are sure you are feeling pain instead of pressure? What exactly are you sure about?
And, obviously, the actual reason doctors don’t worry about it in practice is that it’s unlikely, not that it’s impossible.
though since it is about an external fact, it is itself not sufficient for knowledge.
What does “external” mean? Can I tell the doctor everything about the chemical composition of the air if I decide the air is a part of me? Can I be wrong about the temperature of my brain? About my believing that a supermarket is around the corner?
I think it’s the opposite: there is no reason to hypothesize that you need a second, additional mental state in order to know that you are in the first mental state.
One reason is that this is how all other knowledge works—one thing gains knowledge about another by interacting with it. Another reason is that perfectly certain self-knowledge works differently, and we already have a contradiction-free way to describe it—“being in a state”. Really, the only reason for calling it perfectly certain knowledge is unjustified intuition.
Another reason is that it’s not really just a hypothesis, since you in fact have parts other than some specific qualia, and these other parts implement knowledge in a way that allows it to be wrong the same way memories can be wrong. So you’ll have potentially wrong knowledge about qualia anyway—defining an additional, epiphenomenal, perfectly certain self-knowledge wouldn’t remove it.
As for successionists, and honestly utilitarians in general, but only when they apply it to situations which result in their own deaths, I cannot understand this viewpoint.
Quite a few people would die to save their children. I actually have never met someone who told me they can’t relate to any thought experiment where they would die for someone else. I would be curious if you can relate to any of the thought experiments in the fake selfishness post. Presumably there are quite a few people who would not sacrifice their entire future for anything, but you can also just sacrifice it partially by taking risks like driving a car.
The main issue I have is that, especially in the case of succession but in general too, I see that situations are often evaluated from some outside viewpoint which continues to be able to experience the situation rather than from the individual itself, which while necessary to stop the theorizing after the third sentence, isn’t what would ‘really happen’ down here in the real world.
In the case of dying to save my children (I do not currently have any, or plan to, but for the sake of the hypothetical) I would not, though I am struggling to properly articulate my reasoning besides saying “if I’m dead I can’t see my children anyway,” which doesn’t feel like a solid enough argument or really align with my thoughts completely.
An example given in the selfishness post is either dying immediately to save the rest of humanity, or living another year and then all of humanity dies. In that case I would pick to die, since ultimately the outcome is the same either way (I die), but on the chance the universe continues to exist after I die (I think this is basically certain) the rest of humanity would be fine. And on a more micro-level, living knowing that I and everyone else have one year left to live, and that it’s my fault, sounds utterly agonizing.
And on a more micro-level, living knowing that I and everyone else have one year left to live, and that it’s my fault, sounds utterly agonizing.
Earlier you say:
or frankly even if anyone who continues to exist after I die has fun or not or dies or not, because I will be dead, and at that point, from my perspective, the universe may as well not exist anymore.
How are these compatible? You don’t care if all other humans die after you die unless you are responsible?
That’s pretty much it! If everyone in the world was set to die four minutes after I died, and this was just an immutable fact of the universe, then that would be super unfortunate, but oh well, I can’t do anything about it, so I shouldn’t really care that much. In the situation in which I more directly cause/choose it, not only have I cut my and everyone else’s lives short to just a year, I also am directly responsible, and could have chosen to just not do that!

Are illusionists, and people who claim qualia doesn’t exist in general, P-Zombies?

It’s more like they have a different ontology wholesale compared to you, and the thing you are talking about doesn’t correspond to anything in theirs.

Also, p-zombies as defined should not be distinct from other humans in any observable way, so the words of illusionists have some different cause.

Check out this post: https://www.lesswrong.com/posts/NyiFLzSrkfkDW4S7o/why-it-s-so-hard-to-talk-about-consciousness
I don’t have the same dismissal of everyone after my death-boundary. (Also, I’m a theist, so it ain’t the same degree of “boundary” for me.)
But! I do strongly feel, when discussing nonconsensual AI succession of humans, a prerational, vigorous hatred and dismissal for whatever beings succeed us. Do they have profound interiority as worthy moral patients? Are they having fun and loving each other? Don’t care! “Fuck ‘em!” is the phrase that comes to mind. “They killed me and everyone I love—fuck ‘em!” This doesn’t accord with my frontal-cortex moral philosophy, but I have a collection of beliefs that I’ve realized precede any thinking and won’t be altered. Among them: “Art has intrinsic value apart from the experiences it induces in beings’ minds,” “Academic dishonesty is a serious sin and ought to be a high cultural taboo,” and—of anyone who has deeply wronged those I love—for instance, someone who has raped my friend, or an entity, no matter how worthy, that has destroyed someone I love—although I try to cultivate forgiveness in even the most extreme of scenarios—“Fuck ’em!!” This animus extends to would-be handmaidens of the successor.
I don’t expect to integrate any of these instincts.
And as for consensual AI succession of humans—I feel a similarly strong prerational revulsion.
This isn’t meant to be interpreted as a call to violence in the slightest… But why haven’t there been more ‘terroristic’ actions towards firms developing AI systems? Why haven’t any datacenters been firebombed or anti-regulation proponents been shot on the street? I mean, if this really is a “we literally all die if this goes poorly, possibly if it happens at all” situation, the cost of a few human lives, or jail time/the death penalty/torture for the perpetrator, seems like a bargain for getting more time/raising significant awareness/etc.
It is indeed surprising, because it indicates much more sanity than I would otherwise have expected.
Terrorism is not effective. The only ultimate result of 9/11, from the perspective of bin Laden’s goals, was “Al Qaeda got wiped off the face of the Earth and rival groups have replaced it”. The only result of firebombing a datacenter would be “every single personality in AI safety gets branded a terrorist, destroying literally any chance to influence relevant policy”.