Let’s take the US government as a metaphor. Instead of saying it’s composed of the legislative, executive, and judicial modules, Kurzban would describe it as being made up of modules such as a White House press secretary.
Both are useful models of different levels of the US government. Is the claim here that there is no useful model of the brain as a few big powerful modules that aggregate sub-modules? Or is it merely that others posit only a few large modules, whereas Kurzban thinks we must model both small and large agents at once?
We don’t ask “what is it like to be an edge detector?”, because there was no evolutionary pressure to enable us to answer that question. It could be that most human experience is as mysterious to our conscious minds as bat experiences.
If “human experience” includes the experience of an edge detector, I have to ask for a definition of “human experience”. Is he saying an edge detector is conscious or sentient? What does it mean to talk of the experience of such a relatively small and simple part of the brain? Why should we care what its experience is like, however we define it?
Kurzban doesn’t directly address the question of whether it’s ever useful to model the mind as made of a few big parts. I presume he would admit they can sometimes be reasonable models to use. He’s mostly focused on showing that those big parts don’t act like very unified agents. That seems consistent with sometimes using simpler, less accurate models.
He certainly didn’t convince me to stop using the concepts of system 1 and system 2. I took his arguments as a reminder that those concepts were half-assed approximations.
He’s saying that it’s extremely hard to answer those questions about edge detectors. We have little agreement on whether we should be concerned about the experiences of bats or insects, and it’s similarly unobvious whether we should worry about the suffering of edge detectors.
Being concerned implies 1) that something has experiences, 2) that those experiences can be negative or disliked in a meaningful way, and 3) that we morally care about that.
I’d like to ask about the first condition: what is the set of things that might have experience, things whose experiences we might try to understand? Is there a principled or at least reasonable and consistent definition? Is there a reason to privilege edge detectors made from neurons over, say, a simple edge detector program made from code? Could other (complex, input-processing) tissues and organs have experience, or only those made from neurons?
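For concreteness, here is a minimal sketch of the kind of “simple edge detector program made from code” I have in mind — a plain Sobel-style convolution in Python (numpy assumed), offered purely as an illustration of how little machinery such a program needs, not as anything from Kurzban:

```python
# Illustrative Sobel-style edge detector: a few lines of arithmetic,
# functionally analogous to what edge-detecting neurons compute.
import numpy as np

def sobel_edges(image: np.ndarray) -> np.ndarray:
    """Return per-pixel edge strength for a 2-D grayscale image."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)  # horizontal-gradient kernel
    ky = kx.T                                  # vertical-gradient kernel
    h, w = image.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = image[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy)  # gradient magnitude: high where edges are

# A vertical light/dark boundary produces a strong response along it.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
print(sobel_edges(img))
```

If there is no principled reason to privilege the neural version over this one, the first condition looks hard to pin down; if there is, I’d like to know what it is.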
Could the brain be logically divided in N different ways, such that we’d worry about the experience of a certain sub-network under division A, and not worry about a different sub-network under division B, even though they’re composed mostly of the same neurons and we’re just modeling them differently?
We talk about edge detectors mostly because they’re simple and “stand-alone” enough that we’ve located and modeled them in the brain. There are many more complex, less isolated parts of the brain that we haven’t pinned down and modeled well yet; should that make us more or less concerned that they (or parts of them) have relevant experiences?
Finally, if very high-level parts of my brain (“I”) have a good experience, while a theory leads us to think that lots of edge detectors inside my brain are having bad experiences (“I can’t decide if that’s an edge or not, help!”), what might a moral theory look like that resolves these or trades them off against each other?