I basically don’t care about philosophy of mind anymore, mostly because I don’t care about philosophy anymore.
Philosophy, as a project, is usually about two things. One, figure out metaphysics. Two, figure out a correct ontology for reality.
Both of these are flawed projects. Metaphysics is that which we can’t know from experience, so it’s all speculative and also unnecessary, because we can model the world adequately without presuming to know how it works beyond our ability to observe it. Fake metaphysics is contingently helpful because it gives you fake models that are easier to reason about, but that’s the main use case.
As for finding a correct ontology, we know the map is not the territory, and further there are many possible maps. All models are wrong, some are useful.
I did still care about philosophy a lot right up until “I” switched into PNSE, which happened after several thousand hours of meditation practice, and a lot of other things, too.
Basically what I can say is, the whole idea of philosophy of mind is confused, because it supposes mind to be something separate from reality itself. But the world is only known through mind, and so the world is mind. The appearance of an external world is a useful model for predicting future experiences, and it works well to think and behave as if there is an external reality, because that’s a metaphysical belief that pays substantial rent. But, epistemologically speaking, external reality is not prior to experience, and thus deeper questions about consciousness are mostly confused, because they mix up the order of causal dependency in one’s ontology.
Thanks. What is PNSE? “Persistent non-symbolic experience”?
Yes
I have another question: It seems to me that philosophy of mind is valuable for ethical reasons because it attempts to figure out which things have minds that can experience enjoyment and suffering, which has implications for how we should act. Do you disagree?
I don’t think a philosophy of mind is necessary for this, no, although I can see why it might seem like it is if you’ve already assumed that philosophy is necessary to understand the world.
It’s enough to be able to model other minds in the world to know how to show them compassion, and even without modeling, compassion can be enacted, even if it’s not recognized as compassionate behavior. This modeling need not rise to the level of philosophy to get the job done.
I do not think I communicated my point properly. Let me try again:
Showing compassion is not free. It has a cost. To show compassion for someone, you might need to take action to help them, or refrain from taking some action that might harm them.
How much effort do you spend on showing compassion for a human being?
How much effort do you spend on showing compassion for an earthworm?
How much effort do you spend on showing compassion for a plant?
How much effort do you spend on showing compassion for an NPC in a video game?
I don’t know about you, but I am willing to expend a decent amount of effort on compassion for a human. Less for an earthworm, but still some, because I suspect the earthworm can experience joy and suffering. Even less for a plant, because I suspect that it probably cannot experience anything. And I put close to zero effort into compassion for an NPC in a video game, because I am fairly convinced that the NPC cannot experience anything (if I show the NPC compassion, it is for my own sake, not theirs).
But I might be wrong. If a philosophical argument could convince me that any of these things experience more or less than I thought they did, I would adjust my priorities accordingly.
I think it’s a mistake in many cases to let philosophy override what you care about. That’s letting S2 (deliberate reasoning) do S1’s (intuition’s) job.
I’m not saying no one should ever be convinced to care about something, only that a logical argument, even if it’s part of the convincing, should not be all of it.