I think the problem I have with the math example, and it may be that this is extensible to morality, is this:
If I have a certain quantity of apples, or sheep, or whatever, my mind has a tool (a number) ready to identify some characteristic about that quantity (how many it is). But that’s all that number is: a tool. A reference.
Eliezer is right in saying that the teacher’s teaching “2+3=5” doesn’t make it true any more than the teacher’s teaching “2+3=6” makes it true. But that’s not because two plus three “actually” equals five. It’s because we, as learning animals, have learned definitions of these concepts, and we conceive of them as being fundamental. We think of math as a fundamental part of reality, when it is in fact a low-level, extremely useful, but all-in-the-mind tool used to manipulate our understanding of reality. We’re confusing the map with the territory.
Taking this over to morality:
“Killing is wrong” isn’t true because someone told us it’s true, any more than “Killing is right” would be true if someone were to tell us that. But that’s not because killing another human being “actually” is wrong. It’s because we, as learning animals, have learned definitions of right and wrong (or evolved the low-level emotions that serve as a foundation for those rules), and we conceive of them as being fundamental. We think of morality as a fundamental part of reality, when it is in fact an all-in-the-mind tool. Should we throw it out because it’s merely evolved? No. It’s useful (at least for the species). But we shouldn’t confuse the map with the territory.
This is still pretty fuzzy in my mind; please criticize, especially if I’ve made some fundamental error.
I’m going to need some help with this one.
It seems to me that the argument goes like this, at first:
There is a huge blob of computation; it is a 1-place function; it is identical to right.
This computation balances various values.
Our minds approximate that computation.
Even this little bit creates a lot of questions. I’ve been following Eliezer’s writings for the past little while, although I may well have missed some key point.
Why is this computation a 1-place function? Eliezer says at first “Here we are treating morality as a 1-place function.” and then jumps to “Since what’s right is a 1-place function...” without justifying that status.
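As I understand the 1-place/2-place distinction, it can be sketched in code (the names and the toy “standard” here are mine, purely illustrative, not anything Eliezer wrote):

```python
# A 2-place function evaluates an act relative to some standard:
def rightness_2place(standard, act):
    return standard(act)

# Toy stand-in for the "huge blob of computation" that balances values:
human_standard = lambda act: act != "killing"

# Fixing the standard in advance yields a 1-place function of the act alone:
def rightness_1place(act):
    return rightness_2place(human_standard, act)

print(rightness_1place("killing"))  # False
print(rightness_1place("helping"))  # True
```

The jump I’m asking about is the move from the 2-place form to the 1-place form: it seems to assume one particular standard has already been fixed, without saying why that one.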
What values does this computation balance? Why those values?
What reason do we have to believe that our minds approximate that computation?
Sorry if these are extremely basic questions that have been answered in other places, or even in this article—I’m trying and having a difficult time with understanding how Eliezer’s argument goes past these issues. Any help would be appreciated.