Hm, I used the local vernacular instead of explaining myself more clearly. You make a valid point.
How about this: Our brain was not created in one shot. New adaptations were layered over more primitive ones. The neocortex and various other recent adaptations, which arose around the time the genus Homo came into being, are most likely what give me the thing I call “consciousness.” This cluster of recently evolved conscious modules makes up the voice in my head which narrates my thoughts. I restrict my definition of “I” to this “conscious software.” This conscious “I” has absorbed various values which augment the limited natural empathy and altruism that were beneficial to my ancestors. Obviously, “I” only care about “me.”
But the voice which narrates my thoughts does not always determine the actions my body performs. More ancient urges like sex, survival, and self-interest usually prevail when I try to break too far out of my programming by following my verbal values to their fullest extent.
But these ancient functions don’t exactly get a say when I’m thinking my thoughts and determining my values. So, from the perspective of my conscious, far-mode modules, which hold values like “I should treat people equally,” “I should be honest,” and “My values should be self-consistent and complete,” the older modules are often trying to thwart me.
This relates to moral dilemmas because when the “I” in my brain is trying to honestly and accurately calculate the best course of action, selfishness and power-grabbing instincts can sneak in and wordlessly steer my decisions, so that the “best” course of action “coincidentally” ends up with me somehow getting a lot of money and power. This is what I meant when I used the shorthand.
Thanks for the explanation. Do you intend terms like ‘software’ and ‘hardware’ and ‘programming’ to be metaphorical?
But the voice which narrates my thoughts does not always determine the actions my body performs. More ancient urges like sex, survival, and self-interest usually prevail when I try to break too far out of my programming by following my verbal values to their fullest extent.
If some primitive impulse overrides your conscious deliberation, why do we call that an ‘action’ at all? We don’t think of reflexes as actions, for example, at least not in any sense to which responsibility is relevant.
Do you intend terms like ‘software’ and ‘hardware’ and ‘programming’ to be metaphorical?
Yeah. I borrowed my vocabulary for discussing this kind of thing from a community dominated by programmers, and I myself am a pretty math-y kind of person. :)
If some primitive impulse overrides your conscious deliberation, why do we call that an ‘action’ at all? We don’t think of reflexes as actions, for example, at least not in any sense to which responsibility is relevant.
In the end, I feel responsible for the actions of my body caused by selfish impulses, even if I don’t verbally approve of them. And society holds me responsible, too. Regardless of whether it’s fair, I have to work in a world where I’m expected to control my brain.
Besides, I am smarter than my brain, after all. There are limits to how much I can exert conscious control over ancient motivations—but as far as I’m concerned, it’s totally fair to criticize me for not doing my absolute best to reach that limit.
For example, the brain is a creature of habit, and because I haven’t started my independent life yet, I’m in the perfect position to adopt habits that will improve the world optimally. I can plan ahead of time to spend only up to a certain dollar amount on myself and my friends and family (based on happiness research, knowledge of my own needs, etc.) and throw any and all surplus income into an “optimal philanthropy” bucket which must be donated. My monkey brain will just think of that money as “unavailable” and donate out of habit, allowing me to maximize my impact while minimizing difficulty for myself. (Thinking of meat as “just unavailable” is how I and most other vegetarians organize our diets without stress.)
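Since I think in code anyway, here’s a minimal sketch of that precommitment rule; the cap and the income figures are hypothetical placeholders, not anything I’ve actually calculated:

```python
# Toy version of the precommitment: cap personal spending, and treat
# everything above the cap as already belonging to the donation bucket.
# PERSONAL_CAP is a hypothetical placeholder, not a researched figure.

PERSONAL_CAP = 30_000  # yearly spending on myself, friends, and family

def split_income(income: int) -> dict:
    """Split income into 'available to spend' and 'must be donated'."""
    available = min(income, PERSONAL_CAP)
    donate = max(income - PERSONAL_CAP, 0)
    return {"available": available, "donate": donate}

# The monkey brain only ever sees the 'available' number.
print(split_income(45_000))  # {'available': 30000, 'donate': 15000}
```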
I know I can do this; the science backs me up. If I do not, and succumb to selfish impulses anyway, that’s definitely my fault. I have the opportunity to plan ahead and manipulate my brain; if my values are to be self-consistent, I must take it.
Thanks, by the way, for indulging my question and elaborating on something tangential to your point.
Besides, I am smarter than my brain, after all.
This is similar to the ‘corrupted hardware’ claim insofar as both seem to me to be in tension with the software/hardware metaphor: if your brain is your hardware, and your rational deliberation and reflection is software, then it doesn’t make sense to say that the brain isn’t as smart as you (the software) are. It wouldn’t make sense to say of hardware that it doesn’t [sufficiently] perform the functions of software. Hardware and software do different things.
So it has to be that you have two different sets of software: a native software that your brain is running all the time, which is selfish and uncontrolled, alongside an acquired software which is rational and with which you self-identify. If the brain is corrupted, it’s not in its distinctive functions, but just in the fact that it runs this native software that you can’t entirely control and can’t get rid of.
But that still seems off to me. We can’t really call the brain ‘corrupted hardware’ because we have no idea what non-corrupted hardware would even look like. At the moment, general intelligence is only possible on one kind of hardware: ours. So as far as we know, the hacked-together mess that is the human brain is actually what general intelligence requires. Likewise, the non-rational software apparently doesn’t stand in relation to the rational software as an alien competitor. The non-rational stuff and the rational stuff seem to be joined everywhere, and it’s not at all clear that the rational stuff even works without the rest of it.
Well, when metaphors break, I say just toss ’em. The brain is not exactly like the hardware/software distinction; your new metaphor makes a little more sense for what we’re discussing now, but in the end, the brain is only completely like the brain.
We could think of it this way: the brain is like a computer with an awful user interface, one that forces us to constantly run a whole lot of programs which we don’t necessarily want and can’t actually read or control. It also leaves a little processing power for us to install other applications. The only things we actually like about our computer are the applications we chose to install, even though not having the computer at all would mean we had no way to run them.
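To caricature that metaphor in code (purely illustrative; the process names are made up, and nothing here is a real model of the brain):

```python
# Illustrative caricature of the "awful computer" metaphor above.

class AwfulComputer:
    # Pre-installed processes we can't read, control, or uninstall.
    native_processes = ["seek_status", "hoard_resources", "avoid_predators"]

    def __init__(self):
        self.installed_apps = []  # the part we actually chose

    def install(self, app: str):
        """Use the small amount of spare capacity to add an app we value."""
        self.installed_apps.append(app)

    def run(self):
        # The native processes always run, wanted or not; our apps run alongside.
        return self.native_processes + self.installed_apps

me = AwfulComputer()
me.install("treat_people_equally")
me.install("be_honest")
print(me.run())
```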
I was not being 100% serious when I said I was smarter than my brain; it was sort of intended to illustrate the weird tension I have: all that I am is contained in my brain, but not all of my brain is who I am.
So as far as we know, the hacked-together mess that is the human brain is actually what general intelligence requires.
This hacked-together brain results in some general intelligence, but it’s highly unlikely that it’s optimized for general intelligence, or that we can’t, even in theory, imagine a better substrate for it. In short, “corrupted hardware” means “my physical brain is not optimized for the things my conscious mind values.”
Point taken, and you’re probably right about the optimization thing. Thanks for taking the time to explain.
You’re welcome! :) Thank you for forcing me to think more precisely about this.