What does a calculator mean by “2”?

I think my previous argument was at least partly wrong or confused, because I don’t really understand what it means for a computation to mean something by a symbol. Here I’ll back up and try to figure out what I mean by “mean” first.

Consider a couple of programs. The first one (A) is an arithmetic calculator. It takes a string as input, interprets it as a formula written in decimal notation, and outputs the result of computing that formula. For example, A(“9+12”) produces “21” as output. The second (B) is a substitution cipher calculator. It “encrypts” its input by substituting each character using a fixed mapping. It so happens that B(“9+12”) outputs “c6b3”.
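To make the two programs concrete, here is a minimal sketch in Python. The formula grammar (non-negative integers joined by “+”) and the cipher table are my own assumptions, chosen only so the outputs match the examples above.

```python
def A(formula: str) -> str:
    """Arithmetic calculator: interprets its input as a decimal formula.
    (Sketch: handles only non-negative integers joined by "+".)"""
    return str(sum(int(term) for term in formula.split("+")))

# Hypothetical cipher table, fixed in advance; chosen here so that
# B("9+12") comes out as "c6b3" as in the example.
CIPHER = {"9": "c", "+": "6", "1": "b", "2": "3"}

def B(text: str) -> str:
    """Substitution cipher: blindly replaces each character via a fixed mapping."""
    return "".join(CIPHER[ch] for ch in text)

print(A("9+12"))  # -> "21"
print(B("9+12"))  # -> "c6b3"
```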

What do A and B mean by “2”? Intuitively it seems that by “2”, A means the integer (i.e., abstract mathematical object) 2, while for B, “2” doesn’t really mean anything; it’s just a symbol that it blindly manipulates. But A also just produces its output by manipulating symbols, so why does it seem like it means something by “2”? I think it’s because the way A manipulates the symbol “2” corresponds to how the integer 2 “works”, whereas the way B manipulates “2” doesn’t correspond to anything, except how it manipulates that symbol. We could perhaps say that by “2” B means “the way B manipulates the symbol ‘2’”, but that doesn’t seem to buy us anything.
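The “correspondence” claim can be made concrete: reinterpreting A’s inputs and outputs as integers commutes with actual addition on every formula, while no analogous interpretation exists for B’s outputs. A quick (hypothetical) check:

```python
# A is repeated here so the snippet runs standalone.
def A(formula: str) -> str:
    return str(sum(int(term) for term in formula.split("+")))

# For every formula "x+y", decoding A's output recovers the integer x + y:
# A's symbol manipulation tracks how the integers themselves "work".
for x in range(50):
    for y in range(50):
        assert int(A(f"{x}+{y}")) == x + y
```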

(Similarly, by “+” A means the mathematical operation of addition, whereas B doesn’t really mean anything by it. Note that this discussion assumes some version of mathematical platonism. A formalist would probably say that A also doesn’t mean anything by “2” and “+” except how it manipulates those symbols, but that seems implausible to me.)

Going back to meta-ethics, I think a central mystery is what we mean by “right” when we’re considering moral arguments (by which I don’t mean Nesov’s technical term “moral arguments”, but arguments such as “total utilitarianism is wrong (i.e., not right) because it leads to the following conclusions …, which are obviously wrong”). If human minds are computations (which I think they almost certainly are), then the way that a human mind processes such arguments can be viewed as an algorithm (which may differ from individual to individual). Suppose we could somehow abstract this algorithm away from the rest of the human and consider it as, say, a program that, when given an input string consisting of a list of moral arguments, thinks them over, comes to some conclusions, and outputs those conclusions in the form of a utility function.
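Purely as a schematic, the abstracted algorithm might be typed like this; the names and types are placeholders of my own, and the body is exactly what we don’t know:

```python
from typing import Callable, List

Outcome = str                              # placeholder for "possible outcome/world"
UtilityFunction = Callable[[Outcome], float]

def moral_deliberation(arguments: List[str]) -> UtilityFunction:
    """Thinks the given moral arguments over, comes to some conclusions,
    and outputs those conclusions in the form of a utility function.
    (What this body looks like is the open question.)"""
    raise NotImplementedError
```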

If my understanding is correct, what this algorithm means by “right” depends on the details of how it works. Is it more like calculator A or B? It may be that the way we respond to moral arguments doesn’t correspond to anything except how we respond to moral arguments. For example, the way we respond might be totally random, or depend in a chaotic fashion on trivial details of the wording or ordering of the input. This would be case B, where “right” can’t really be said to mean anything, at least as far as the part of our minds that considers moral arguments is concerned. Or it may be case A, where the way we process “right” corresponds to some abstract mathematical object or some other kind of external object, in which case I think “right” can be said to mean that external object.
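To illustrate what a case-B deliberator might look like, here is a toy and entirely hypothetical example whose verdict depends only on a hash of the exact wording and ordering of its input, and so corresponds to nothing beyond its own symbol manipulation:

```python
import hashlib

def chaotic_deliberation(arguments: list[str]) -> str:
    """Toy "case B" deliberator: the verdict is determined by superficial
    details of the input text, not by anything the arguments are about."""
    digest = hashlib.sha256("\n".join(arguments).encode()).digest()
    return "right" if digest[0] % 2 == 0 else "wrong"

# Reordering the very same arguments may flip the verdict:
print(chaotic_deliberation(["argument 1", "argument 2"]))
print(chaotic_deliberation(["argument 2", "argument 1"]))
```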

Since we don’t know which is the case yet, I think we’re forced to say that we don’t currently know what “right” means.