Logical Pinpointing

Followup to: Causal Reference, Proofs, Implications and Models

The fact that one apple added to one apple invariably gives two apples helps in the teaching of arithmetic, but has no bearing on the truth of the proposition that 1 + 1 = 2.

-- James R. Newman, The World of Mathematics

Previous meditation 1: If we can only meaningfully talk about parts of the universe that can be pinned down by chains of cause and effect, where do we find the fact that 2 + 2 = 4? Or did I just make a meaningless noise, there? Or if you claim that “2 + 2 = 4” isn’t meaningful or true, then what alternate property does the sentence “2 + 2 = 4” have which makes it so much more useful than the sentence “2 + 2 = 3”?

Previous meditation 2: It has been claimed that logic and mathematics are the study of which conclusions follow from which premises. But when we say that 2 + 2 = 4, are we really just assuming that? It seems like 2 + 2 = 4 was true well before anyone was around to assume it, that two apples equalled two apples before there was anyone to count them, and that we couldn’t make it 5 just by assuming differently.

Speaking conventional English, we’d say the sentence 2 + 2 = 4 is “true”, and anyone who put down “false” instead on a math-test would be marked wrong by the schoolteacher (and not without justice).

But what can make such a belief true, what is the belief about, what is the truth-condition of the belief which can make it true or alternatively false? The sentence ‘2 + 2 = 4’ is true if and only if… what?

In the previous post I asserted that the study of logic is the study of which conclusions follow from which premises; and that although this sort of inevitable implication is sometimes called “true”, it could more specifically be called “valid”, since checking for inevitability seems quite different from comparing a belief to our own universe. And you could claim, accordingly, that “2 + 2 = 4” is ‘valid’ because it is an inevitable implication of the axioms of Peano Arithmetic.

And yet thinking about 2 + 2 = 4 doesn’t really feel that way. Figuring out facts about the natural numbers doesn’t feel like the operation of making up assumptions and then deducing conclusions from them. It feels like the numbers are just out there, and the only point of making up the axioms of Peano Arithmetic was to allow mathematicians to talk about them. The Peano axioms might have been convenient for deducing a set of theorems like 2 + 2 = 4, but really all of those theorems were true about numbers to begin with. Just like “The sky is blue” is true about the sky, regardless of whether it follows from any particular assumptions.

So comparison-to-a-standard does seem to be at work, just as with physical truth… and yet this notion of 2 + 2 = 4 seems different from “stuff that makes stuff happen”. Numbers don’t occupy space or time, they don’t arrive in any order of cause and effect, there are no events in numberland.

Meditation: What are we talking about when we talk about numbers? We can’t navigate to them by following causal connections—so how do we get there from here?

...
...
...

“Well,” says the mathematical logician, “that’s indeed a very important and interesting question—where are the numbers—but first, I have a question for you. What are these ‘numbers’ that you’re talking about? I don’t believe I’ve heard that word before.”

Yes you have.

“No, I haven’t. I’m not a typical mathematical logician; I was just created five minutes ago for the purposes of this conversation. So I genuinely don’t know what numbers are.”

But… you know, 0, 1, 2, 3...

“I don’t recognize that 0 thingy—what is it? I’m not asking you to give an exact definition, I’m just trying to figure out what the heck you’re talking about in the first place.”

Um… okay… look, can I start by asking you to just take on faith that there are these thingies called ‘numbers’ and 0 is one of them?

“Of course! 0 is a number. I’m happy to believe that. Just to check that I understand correctly, that does mean there exists a number, right?”

Um, yes. And then I’ll ask you to believe that we can take the successor of any number. So we can talk about the successor of 0, the successor of the successor of 0, and so on. Now 1 is the successor of 0, 2 is the successor of 1, 3 is the successor of 2, and so on indefinitely, because we can take the successor of any number -

“In other words, the successor of any number is also a number.”

Exactly.
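(As a programming aside, not part of the dialogue: the zero-and-successor picture can be sketched directly as a data structure. The encoding below, with the made-up names `Z` and `S`, is just one illustrative representation.)

```python
# A minimal sketch of successor-style numerals: a number is either
# Z (zero) or S(n), the successor of some number n.
# Names Z, S, to_int are illustrative choices, not standard.

Z = ()  # zero, represented as the empty tuple


def S(n):
    """Successor: wrap a numeral in one more layer."""
    return (n,)


def to_int(n):
    """Count the layers of S around Z."""
    count = 0
    while n != Z:
        n = n[0]
        count += 1
    return count


one = S(Z)
two = S(one)
three = S(two)
print(to_int(three))  # → 3
```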

“And in a simple case—I’m just trying to visualize how things might work—we would have 2 equal to 0.”

What? No, why would that be -

“I was visualizing a case where there were two numbers that were the successors of each other, so SS0 = 0. I mean, I could’ve visualized one number that was the successor of itself, but I didn’t want to make things too trivial—”

No! That model you just drew—that’s not a model of the numbers.

“Why not? I mean, what property do the numbers have that this model doesn’t?”

Because, um… zero is not the successor of any number. Your model has a successor link from 1 to 0, and that’s not allowed.

“I see! So we can’t have SS0=0. But we could still have SSS0=S0.”

What? How -

No! Because -

(consults textbook)

- if two numbers have the same successor, they are the same number, that’s why! You can’t have 2 and 0 both having 1 as a successor unless they’re the same number, and if 2 was the same number as 0, then 1’s successor would be 0, and that’s not allowed! Because 0 is not the successor of any number!

“I see. Oh, wow, there’s an awful lot of numbers, then. The first chain goes on forever.”

It sounds like you’re starting to get what I—wait. Hold on. What do you mean, the first chain -

“I mean, you said that there was at least one start of an infinite chain, called 0, but—”

I misspoke. Zero is the only number which is not the successor of any number.

“I see, so any other chains would either have to loop or go on forever in both directions.”

Wha?

“You said that zero is the only number which is not the successor of any number, that the successor of every number is a number, and that if two numbers have the same successor they are the same number. So, following those rules, any successor-chains besides the one that starts at 0 have to loop or go on forever in both directions—”

There aren’t supposed to be any chains besides the one that starts at 0! Argh! And now you’re going to ask me how to say that there shouldn’t be any other chains, and I’m not a mathematician so I can’t figure out exactly how to -

“Hold on! Calm down. I’m a mathematician, after all, so I can help you out. Like I said, I’m not trying to torment you here, just understand what you mean. You’re right that it’s not trivial to formalize your statement that there’s only one successor-chain in the model. In fact, you can’t say that at all inside what’s called first-order logic. You have to jump to something called second-order logic that has some remarkably different properties (ha ha!) and make the statement there.”

What the heck is second-order logic?

“It’s the logic of properties! First-order logic lets you quantify over all objects—you can say that all objects are red, or all objects are blue, or ‘∀x: red(x)→¬blue(x)’, and so on. Now, that ‘red’ and ‘blue’ we were just talking about—those are properties, functions which, applied to any object, yield either ‘true’ or ‘false’. A property divides all objects into two classes, a class inside the property and a complementary class outside the property. So everything in the universe is either blue or not-blue, red or not-red, and so on. And then second-order logic lets you quantify over properties—instead of looking at particular objects and asking whether they’re blue or red, we can talk about properties in general—quantify over all possible ways of sorting the objects in the universe into classes. We can say, ‘For all properties P’, not just, ‘For all objects X’.”

Okay, but what does that have to do with saying that there’s only one chain of successors?

“To say that there’s only one chain, you have to make the jump to second-order logic, and say that for all properties P, if P being true of a number implies P being true of the successor of that number, and P is true of 0, then P is true of all numbers.”
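(Written out symbolically, in the same notation the dialogue uses elsewhere, that second-order induction axiom reads:

    ∀P: (P(0) ∧ ∀x (P(x) → P(Sx))) → ∀x P(x)

—for every property P, if P holds of 0 and is preserved by the successor operation, then P holds of every number.)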

Um… huh. That does sound reminiscent of something I remember hearing about Peano Arithmetic. But how does that solve the problem with chains of successors?

“Because if you had another separated chain, you could have a property P that was true all along the 0-chain, but false along the separated chain. And then P would be true of 0, true of the successor of any number of which it was true, and not true of all numbers.”
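(To make that argument concrete, here is a small Python sketch, with made-up element names, of a “model” containing the 0-chain plus a separated loop. The property P, “reachable from 0 by taking successors,” satisfies both induction premises yet fails on the loop elements—so the induction axiom rules this model out.)

```python
# succ maps each element to its successor: a standard chain 0, 1, 2, ...
# (truncated at 9 purely for display) plus a separated loop {'a', 'b'}.
# The element names 'a' and 'b' are made up for illustration.
succ = {n: n + 1 for n in range(9)}
succ.update({'a': 'b', 'b': 'a'})

# Let P be "reachable from 0 by repeatedly taking successors".
P = {0}
x = 0
while x in succ and succ[x] not in P:
    x = succ[x]
    P.add(x)

elements = set(succ) | set(succ.values())
print(0 in P)                                      # True: P holds at 0
print(all(succ[x] in P for x in P if x in succ))   # True: P is closed under successor
print(elements <= P)                               # False: the separated loop escapes P
```

Since P holds at 0 and is preserved by successor but does not hold everywhere, this structure violates the induction axiom—exactly the mathematician’s point.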

I… huh. That’s pretty neat, actually. You thought of that pretty fast, for somebody who’s never heard of numbers.

“Thank you! I’m an imaginary fictionalized representation of a very fast mathematical reasoner.”

Anyway, the next thing I want to talk about is addition. First, suppose that for every x, x + 0 = x. Next suppose that if x + y = z, then x + Sy = Sz -

“There’s no need for that. We’re done.”

What do you mean, we’re done?

“Every number has a successor. If two numbers have the same successor, they are the same number. There’s a number 0, which is the only number that is not the successor of any other number. And every property true at 0, and for which P(Sx) is true whenever P(x) is true, is true of all numbers. In combination, those premises narrow down a single model in mathematical space, up to isomorphism. If you show me two models matching these requirements, I can perfectly map the objects and successor relations in them. You can’t add any new object to the model, or subtract an object, without violating the axioms you’ve already given me. It’s a uniquely identified mathematical collection, the objects and their structure completely pinned down. Ergo, there’s no point in adding any more requirements. Any meaningful statement you can make about these ‘numbers’, as you’ve defined them, is already true or already false within that pinpointed model—its truth-value is already semantically implied by the axioms you used to talk about ‘numbers’ as opposed to something else. If the new axiom is already true, adding it won’t change what the previous axioms semantically imply.”

Whoa. But don’t I have to define the + operation before I can talk about it?

“Not in second-order logic, which can quantify over relations as well as properties. You just say: ‘For every relation R that works exactly like addition, the following statement Q is true about that relation.’ It would look like, ‘∀R: ((∀x∀y∀z: (R(x, 0, z)↔(x=z)) ∧ (R(x, Sy, z)↔R(Sx, y, z))) → Q)’, where Q says whatever you meant to say about +, using the token R. Oh, sure, it’s more convenient to add + to the language, but that’s a mere convenience—it doesn’t change which facts you can prove. Or to say it outside the system: So long as I know what numbers are, you can just explain to me how to add them; that doesn’t change which mathematical structure we’re already talking about.”
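(The two addition clauses mentioned earlier—x + 0 = x, and x + Sy = Sz whenever x + y = z—can be sketched as a recursive function on successor-style numerals. The tuple encoding below is an illustrative choice, not canonical.)

```python
# Numerals written as nested tuples: Z = (), S(n) = (n,).
Z = ()


def S(n):
    return (n,)


def add(x, y):
    if y == Z:               # clause 1: x + 0 = x
        return x
    return S(add(x, y[0]))   # clause 2: x + Sy = S(x + y)


def to_int(n):
    return 0 if n == Z else 1 + to_int(n[0])


two = S(S(Z))
print(to_int(add(two, two)))  # → 4
```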

...Gosh. I think I see the idea now. It’s not that ‘axioms’ are mathematicians asking for you to just assume some things about numbers that seem obvious but can’t be proven. Rather, axioms pin down that we’re talking about numbers as opposed to something else.

“Exactly. That’s why the mathematical study of numbers is equivalent to the logical study of which conclusions follow inevitably from the number-axioms. When you formalize logic into syntax, and prove theorems like ‘2 + 2 = 4’ by syntactically deriving new sentences from the axioms, you can safely infer that 2 + 2 = 4 is semantically implied within the mathematical universe that the axioms pin down. And there’s no way to try to ‘just study the numbers without assuming any axioms’, because those axioms are how you can talk about numbers as opposed to something else. You can’t take for granted that just because your mouth makes a sound ‘NUM-burz’, it’s a meaningful sound. The axioms aren’t things you’re arbitrarily making up, or assuming for convenience-of-proof, about some pre-existent thing called numbers. You need axioms to pin down a mathematical universe before you can talk about it in the first place. The axioms are pinning down what the heck this ‘NUM-burz’ sound means in the first place—that your mouth is talking about 0, 1, 2, 3, and so on.”

Could you also talk about unicorns that way?

“I suppose. Unicorns don’t exist in reality—there’s nothing in the world that behaves like that—but they could nonetheless be described using a consistent set of axioms, so that it would be valid if not quite true to say that if a unicorn would be attracted to Bob, then Bob must be a virgin. Some people might dispute whether unicorns must be attracted to virgins, but since unicorns aren’t real—since we aren’t locating them within our universe using a causal reference—they’d just be talking about different models, rather than arguing about the properties of a known, fixed mathematical model. The ‘axioms’ aren’t making questionable guesses about some real physical unicorn, or even a mathematical unicorn-model that’s already been pinpointed; they’re just fictional premises that make the word ‘unicorn’ talk about something inside a story.”

But when I put two apples into a bowl, and then put in another two apples, I get four apples back out, regardless of anything I assume or don’t assume. I don’t need any axioms at all to get four apples back out.

“Well, you do need axioms to talk about four, SSSS0, when you say that you got ‘four’ apples back out. That said, indeed your experienced outcome—what your eyes see—doesn’t depend on what axioms you assume. But that’s because the apples are behaving like numbers whether you believe in numbers or not!”

The apples are behaving like numbers? What do you mean? I thought numbers were this ethereal mathematical model that got pinpointed by axioms, not by looking at the real world.

“Whenever a part of reality behaves in a way that conforms to the number-axioms—for example, if putting apples into a bowl obeys rules, like no apple spontaneously appearing or vanishing, which yields the high-level behavior of numbers—then all the mathematical theorems we proved valid in the universe of numbers can be imported back into reality. The conclusion isn’t absolutely certain, because it’s not absolutely certain that nobody will sneak in and steal an apple and change the physical bowl’s behavior so that it doesn’t match the axioms any more. But so long as the premises are true, the conclusions are true; the conclusion can’t fail unless a premise also failed. You get four apples in reality, because those apples behaving numerically isn’t something you assume, it’s something that’s physically true. When two clouds collide and form a bigger cloud, on the other hand, they aren’t behaving like integers, whether you assume they are or not.”

But if the awesome hidden power of mathematical reasoning is to be imported into parts of reality that behave like math, why not reason about apples in the first place instead of these ethereal ‘numbers’?

“Because you can prove once and for all that in any process which behaves like integers, 2 thingies + 2 thingies = 4 thingies. You can store this general fact, and recall the resulting prediction, for many different places inside reality where physical things behave in accordance with the number-axioms. Moreover, so long as we believe that a calculator behaves like numbers, pressing ‘2 + 2’ on a calculator and getting ‘4’ tells us that 2 + 2 = 4 is true of numbers, and then we can expect four apples in the bowl. It’s not like anything fundamentally different from that is going on when we try to add 2 + 2 inside our own brains—all the information we get about these ‘logical models’ is coming from the observation of physical things that allegedly behave like their axioms, whether it’s our neurally-patterned thought processes, or a calculator, or apples in a bowl.”

I… think I need to consider this for a while.

“Be my guest! Oh, and if you run out of things to think about from what I’ve said already—”

Hold on.

“—try pondering this one. Why does 2 + 2 come out the same way each time? Never mind the question of why the laws of physics are stable—why is logic stable? Of course I can’t imagine it being any other way, but that’s not an explanation.”

Are you sure you didn’t just degenerate into talking bloody nonsense?

“Of course it’s bloody nonsense. If I knew a way to think about the question that wasn’t bloody nonsense, I would already know the answer.”

Humans need fantasy to be human.

“Tooth fairies? Hogfathers? Little—”

Yes. As practice. You have to start out learning to believe the little lies.

“So we can believe the big ones?”

Yes. Justice. Mercy. Duty. That sort of thing.

“They’re not the same at all!”

You think so? Then take the universe and grind it down to the finest powder and sieve it through the finest sieve and then show me one atom of justice, one molecule of mercy.

- Susan and Death, in Hogfather by Terry Pratchett

So far we’ve talked about two kinds of meaningfulness and two ways that sentences can refer; a way of comparing to physical things found by following pinned-down causal links, and logical reference by comparison to models pinned-down by axioms. Is there anything else that can be meaningfully talked about? Where would you find justice, or mercy?

Mainstream status.

Part of the sequence Highly Advanced Epistemology 101 for Beginners

Next post: “Causal Universes”

Previous post: “Proofs, Implications, and Models”

• “—try pondering this one. Why does 2 + 2 come out the same way each time? Never mind the question of why the laws of physics are stable—why is logic stable? Of course I can’t imagine it being any other way, but that’s not an explanation.”

Nothing in the process described, of pinpointing the natural numbers, makes any reference to time. That is why it is temporally stable: not because it has an ongoing existence which is mysteriously unaffected by the passage of time, but because time has no connection with it. Whenever you look at it, it’s the same, identical thing, not a later, miraculously preserved version of the thing.

• What if 2 + 2 varies over something other than time that nonetheless correlates with time in our universe? Suppose 2 + 2 comes out to 4 the first 1 trillion times the operation is performed by humans, and to 5 on the 1 trillion and first time.

I suppose you could raise the same explanation: the definition of 2 + 2 makes no reference to how many times it has been applied. I believe the same can be said for any other reason you may give for why 2 + 2 might cease to equal 4.

• Where that is the case, your method of mapping from the reality to arithmetic is not a good model of that process—no more, no less.

• Seeing as the above response wasn’t very upvoted, I’ll try to explain in simpler terms.
If 2+2 comes out 5 the one-trillionth-and-first time we compute it, then our calculation does not match numbers.
… which we can tell because?
...and writing this now I realize why the other answer was more upvoted, because this is circular reasoning. :-s
Sorry, I have no clue.

• I couldn’t agree more. The timelessness of maths should be read negatively, as independence from anything else, not as dependence on a timeless realm.

• I love the elegance of this answer, upvoting.

• “—try pondering this one. Why does 2 + 2 come out the same way each time? Never mind the question of why the laws of physics are stable—why is logic stable? Of course I can’t imagine it being any other way, but that’s not an explanation.”

I have recently had a thought relevant to the topic; an operation that is not stable.

In certain contexts, the operation d is used, where XdY means “take a set of X fair dice, each die having Y sides (numbered 1 to Y), and throw them; add together the numbers on the uppermost faces”. Using this definition, 2d2 has value ‘2’ 25% of the time, value ‘3’ 50% of the time, and value ‘4’ 25% of the time. The procedure is always identical, and so there’s nothing in the process which makes any reference to time, but the result can differ (though note that ‘time’ is still not a parameter in that result). If the operation ‘+’ is replaced by the operation ‘d’ - well, then that is one other way that can be imagined.

Edited to add: It has been pointed out that XdY is a constant probability distribution. The unstable operation to which I refer is the operation of taking a single random integer sample, in a fair manner, from that distribution.
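(For the curious, the exact XdY distribution the commenter describes can be computed by enumerating outcomes. This is an illustrative sketch added by the editor, not part of the original comment.)

```python
# Exact distribution of XdY: throw x fair dice, each with faces 1..y,
# and sum the uppermost faces. Enumeration makes the 25%/50%/25%
# figures for 2d2 explicit.
from collections import Counter
from itertools import product


def distribution(x, y):
    """Exact probability distribution of XdY by enumerating all throws."""
    outcomes = Counter(sum(faces) for faces in product(range(1, y + 1), repeat=x))
    total = y ** x
    return {value: count / total for value, count in outcomes.items()}


print(distribution(2, 2))  # {2: 0.25, 3: 0.5, 4: 0.25}
```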

• The random is not in the dice, it is in the throw, and that procedure is never identical. Also, XdY is a distribution, always the same, and the dice are just a relatively fair way of picking a sample.

• Aren’t you just confusing distributions (2d2) and samples (‘3’) here?

• But the question isn’t, “Why don’t they change over time,” but rather, “why are they the same on each occasion”. It makes no reference to occasion? Sure, but even so, why doesn’t 2 + 2 = a random number each time? Why is the same identical thing the same?

• I’m not sure what the etiquette is of responding to retracted comments, but I’ll have a go at this one.

Why is the same identical thing the same?

That’s what I mean when I say they are identical. It’s not another, separate thing, existing on a separate occasion, distinct from the first but standing in the relation of identity to it. In mathematics, you can step into the same river twice. Even aliens in distant galaxies step into the same river.

However, there is something else involved with the stability, which exists in time, and which is capable of being imperfectly stable: oneself. 2+2=4 is immutable, but my judgement that 2+2 equals 4 is mutable, because I change over time. If it seems impossible to become confused about 2+2=4, just think of degenerative brain diseases. Or being asleep and dreaming that 2+2 made 5.

• So the question becomes, “If ‘2+2’ is just another way of saying ‘4’, what is the point of having two expressions for it?”

My answer: As humans, we often desire to split a group of large, distinct objects into smaller groups of large, distinct objects, or to put two smaller groups of large, distinct objects together. So, when we say “2 + 2 = 4”, what we are really expressing is that a group of 4 objects can be transformed into a group of 2 objects and another group of 2 objects, by moving the objects apart (and vice versa). Sharing resources with fellow humans is fundamental to human interaction. The reason I say “large, distinct objects” is that the rules of addition do not hold for everything. For example, when you add “1” particle of matter to “1” particle of antimatter, you get “0” particles of both matter and antimatter.

Numbers, and, yes, even logic, only exist fundamentally in the mind. They are good descriptions that correspond to reality. The soundness theorem for logic (which is not provable in the same logic it is describing) is what really begins to hint at logic’s correspondence to the real world. The soundness theorem relies on the fact that all of the axioms are true and that inference rules are truth-preserving. The Peano axioms and logic are useful because, given the commonly known meaning we assign to the symbols of those systems, the axioms do properly describe our observations of reality and the inference rules do lead to conclusions that continue to correspond to our observations of reality (in (one of) the correct domain(s), groups of large, distinct, objects). We observe that quantity is preserved regardless of grouping; this is the associative property (here’s another way of looking at it).

The mathematical proof of the soundness theorem is useless for convincing the hard skeptic, because it uses mathematical induction itself! The principle of mathematical induction is called such because it was formulated inductively. When it comes to the large numbers, no one has observed these quantities. But, for all quantities we have observed so far, mathematical induction has held. We use deduction to apply induction, but that doesn’t make the induction any less inductive to begin with. We use the real number system to make predictions in physics. If we have the luxury of making an observation, we should go ahead and update. For companies with limited resources that are trying to develop a useful product to sell to make money, and even more so for Friendly AI (a mistake could end human civilization), it’s nice to have a good idea of what an outcome will be before it happens. Bayes’ rule provides a systematic way of working with this uncertainty. Maybe, one day, when I put two apples next to two apples on my kitchen table, there will be five (the order in which I move the apples around will affect their quantity), but, if I had to bet one way or the other, I assure you that my money is on this not happening.

• The presentation of the natural numbers is meant to be standard, including the (well-known and proven) idea that it requires second-order logic to pin them down. There’s some further controversy about second-order logic which will be discussed in a later post.

I’ve seen some (old) arguments about the meaning of axiomatizing which did not resolve in the answer, “Because otherwise you can’t talk about numbers as opposed to something else,” so AFAIK it’s theoretically possible that I’m the first to spell out that idea in exactly that way, but it’s an obvious-enough idea and there’s been enough debate by philosophically inclined mathematicians that I would be genuinely surprised to find this was the case.

On the other hand, I’ve surely never seen a general account of meaningfulness which puts logical pinpointing alongside causal link-tracing to delineate two different kinds of correspondence within correspondence theories of truth. To whatever extent any of this is a standard position, it’s not nearly widely-known enough or explicitly taught in those terms to general mathematicians outside model theory and mathematical logic, just like the standard position on “proof”. Nor does any of it appear in the S. E. P. entry on meaning.

• Very nice post!

Bug: Higher-order logic (a standard term) means “infinite-order logic” (not a standard term), not “logic of order greater than 1” (also not a standard term). (For whatever reason, neither the Wikipedia nor the SEP entry seem to come out and say this, but every reference I can remember used the terms like that, and the usage in SEP seems to imply it too, e.g. “This second-order expressibility of the power-set operation permits the simulation of higher-order logic within second order.”)

• A few points:

i) you don’t actually need to jump directly to second-order logic to get a categorical axiomatization of the natural numbers. There are several weaker ways to do the job: L_ω₁ω (which allows countably infinite conjunctions), adding a primitive finiteness operator, adding a primitive ancestral operator, allowing the omega rule (i.e. from the infinitely many premises P(0), P(1), … P(n), … infer ∀n P(n)). Second-order logic is more powerful than these in that it gives a quasi-categorical axiomatization of the universe of sets (i.e. given any two models of ZFC_2, either they are isomorphic or one is isomorphic to an initial segment of the other).

ii) although there is a minority view to the contrary, it’s typically thought that going second-order doesn’t help with determinateness worries (i.e. roughly what you are talking about with regard to “pinning down” the natural numbers). The point here is that going second-order only works if you interpret the second-order quantifiers “fully”, i.e. as ranging over the whole power set of the domain rather than some proper subset of it. But the problem is: how can we rule out non-full interpretations of the quantifiers? This seems like just the same sort of problem as ruling out non-standard models of arithmetic (“the same sort”, not the same, because for the reasons mentioned in (i) it is actually a more stringent condition.) The point is, if you for some reason doubt that we have a categorical grasp of the natural numbers, you are certainly not going to grant that we can enforce a full interpretation of the second-order quantifiers. And although it seems intuitively obvious that we have a categorical grasp of the natural numbers, careful consideration of the first incompleteness theorem shows that this is by no means clear.

iii) Given that categoricity results are only up to isomorphism, I don’t see how they help you pin down talk of the natural numbers themselves (as opposed to any old omega_sequence). At best, they help you pin down the structure of the natural numbers, but taking this insight into account is easier said than done.

• iii) Given that categoricity results are only up to isomorphism, I don’t see how they help you pin down talk of the natural numbers themselves (as opposed to any old omega_sequence). At best, they help you pin down the structure of the natural numbers, but taking this insight into account is easier said than done.

Generally, things being identical up to isomorphism is considered to make them the same thing in all senses that matter. If something has all the same properties as the natural numbers, in every respect and every particular, then that’s no different from merely changing the names. This is a pretty basic mathematical concept, and that you aren’t familiar with it makes me question the rest of this comment as well.

• I think philosophers who think that the categoricity of second-order Peano arithmetic allows us to refer to the natural numbers uniquely tend to also reject the causal theory of reference, precisely because the causal theory of reference is usually put as requiring all reference to be causally guided. Among those, lots of people more-or-less think that references can be fixed by some kinds of description, and I think logical descriptions of this kind would be pretty uncontroversial.

OTOH, for some reason everyone in philosophy of maths is allergic to second-order logic (blame Quine), so the categoricity argument doesn’t always hold water. For some discussion, there’s a section in the SEP entry on Philosophy of Mathematics.

(To give one of the reasons why people don’t like SOL: to interpret it fully you seem to need set theory. Properties basically behave like sets, and so you can make SOL statements that are valid iff the Continuum Hypothesis is true, for example. It seems wrong that logic should depend on set theory in this way.)

• This is a facepalm “Duh” moment, I hear this criticism all the time but it does not mean that “logic” depends on “set theory”. There is a confusion here between what can be STATED and what can be KNOWN. The criticism only has any force if you think that all “logical truths” ought to be recognizable so that they can be effectively enumerated. But the critics don’t mind that for any effective enumeration of theorems of arithmetic, there are true statements about integers that won’t be included—we can’t KNOW all the true facts about integers, so the criticism of second-order logic boils down to saying that you don’t like using the word “logic” to be applied to any system powerful enough to EXPRESS quantified statements about the integers, but only to systems weak enough that all their consequences can be enumerated.

This demand is unreasonable. Even if logic is only about “correct reasoning”, the usual framework given by SOL does not presume any dubious principles of reasoning and ZF proves its consistency. The existence of propositions which are not deductively settled by that framework but which can be given mathematical interpretations means nothing more than that our repertoire of “techniques of correct reasoning”, which has grown over the centuries, isn’t necessarily finalized.

• “Because otherwise you can’t talk about numbers as opposed to something else,”

The Abstract Algebra course I took presented it in this fashion. I have a hard time seeing how you could even have abstract algebra without this notion.

• What about Steven Landsburg’s frequent crowing on the Platonicity of math and how numbers are real because we can “directly perceive them”? How does this relate to it?

EDIT: Well, he replies here.

• I was wondering what he thought about this!

While I greatly sympathize with the “Platonicity of math”, I can’t shake the idea that my reasoning about numbers isn’t any kind of direct perception, but just reasoning about an in-memory representation of a model that is ultimately based on all the other systems that behave like numbers.

I find the arguments about how not all true statements regarding the natural numbers can be inferred via first-order logic tedious. It doesn’t seem like our understanding of the natural numbers is particularly impoverished because of it.

• so AFAIK it’s theoretically possible that I’m the first to spell out that idea in exactly that way

I remember explaining the Axiom of Choice in this way to a fellow undergraduate on my integration theory course in late 2000. But of course it never occurred to me to write it down, so you only have my word for this :-)

• This post definitely deserves a lot of credit.

• I’ve seen some (old) arguments about the meaning of axiomatizing which did not resolve in the answer, “Because otherwise you can’t talk about numbers as opposed to something else,” so AFAIK it’s theoretically possible that I’m the first to spell out that idea in exactly that way, but it’s an obvious-enough idea and there’s been enough debate by philosophically inclined mathematicians that I would be genuinely surprised to find this was the case.

If memory serves, Hofstadter uses roughly this explanation in GEB.

• This is pretty close to how I remember the discussion in GEB. He has a good discussion of non-Euclidean geometry. He emphasizes that originally the negation of the Parallel Postulate was viewed as absurd, but that now we can understand that the non-Euclidean axioms are perfectly reasonable statements which describe something other than the plane geometry we are used to. Later he has a bit of a discussion of what a model of PA + NOT(CON(PA)) would look like. I remember finding it pretty confusing, and I didn’t really know what he was getting at until I read some actual logic theory textbooks. But he did get across the idea that the axioms would still describe something, but that something would be larger and stranger than the integers we think we know.

• ???

IIRC, Hofstadter is a firm formalist, and I don’t see how that squares with EY’s apparent correspondence theory. At least I don’t see the point in correspondence if what is being corresponded to is itself generated by axioms.

• Thanks for posting this. My intended comments got pretty long, so I converted them to a blog post here. The gist is that I don’t think you’ve solved the problem, partly because second order logic is not logic (as explained in my post) and partly because you are relying on a theorem (that second order Peano arithmetic has a unique model) which relies on set theory, so you have “solved” the problem of what it means for numbers to be “out there” only by reducing it to the question of what it means for sets to be “out there”, which is, if anything, a greater mystery.

• So this is where (one of the inspirations for) Eliezer’s meta-ethics comes from! :)

A quick refresher from a former comment:

Cognitivism: Yes, moral propositions have truth-value, but not all people are talking about the same facts when they use words like “should”, thus creating the illusion of disagreement.

… and now from this post:

Some people might dispute whether unicorns must be attracted to virgins, but since unicorns aren’t real—since we aren’t locating them within our universe using a causal reference—they’d just be talking about different models, rather than arguing about the properties of a known, fixed mathematical model.

(This little realization also holds a key to resolving the last meditation, I suppose.)

I’ve heard people say the meta-ethics sequence was more or less a failure since not that many people really understood it, but if these last posts were taken as prerequisite reading, it would be at least a bit easier to understand where Eliezer’s coming from.

• I’ve heard people say the meta-ethics sequence was more or less a failure since not that many people really understood it, but if these last posts were taken as prerequisite reading, it would be at least a bit easier to understand where Eliezer’s coming from.

Agreed, and disappointed that this comment was downvoted.

• This is a really good post.

If I can bother your mathematical logician for just a moment...

Hey, are you conscious in the sense of being aware of your own awareness?

Also, now that Eliezer can’t ethically deinstantiate you, I’ve got a few more questions =)

You’ve given a not-isomorphic-to-numbers model for all the prefixes of the axioms. That said, I’m still not clear on why we need the second-to-last axiom (“Zero is the only number which is not the successor of any number.”) -- once you’ve got the final axiom (recursion), I can’t seem to visualize any not-isomorphic-to-numbers models.

Also, how does one go about proving that a particular set of axioms has all its models isomorphic? The fact that I can’t think of any alternatives is (obviously, given the above) not quite sufficient.

Oh, and I remember this story somebody on LW told, there were these numbers people talked about called...um, I’m just gonna call them mimsy numbers, and one day this mathematician comes to a seminar on mimsy numbers and presents a proof that all mimsy numbers have the Jabberwock property, and all the mathematicians nod and declare it a very fine finding, and then the next week, he comes back, and presents a proof that no mimsy numbers have the Jabberwock property, and then everyone suddenly loses interest in mimsy numbers...

Point being, nothing here definitely justifies thinking that there are numbers, because someone could come along tomorrow and prove ~(2+2=4) and we’d be done talking about “numbers”. But I feel really really confident that that won’t ever happen and I’m not quite sure how to say whence this confidence. I think this might be similar to your last question, but it seems to dodge RichardKennaway’s objection.

• I’m still not clear on why we need the second-to-last axiom (“Zero is the only number which is not the successor of any number.”)

I guess it is not necessary. It was just an illustration of a “quick fix”, which was later shown to be insufficient.

• You just say: ‘For every relation R that works exactly like addition, the following statement S is true about that relation.’ It would look like, ‘∀ relations R: (∀x∀y∀z: R(x, 0, x) ∧ (R(x, y, z)→R(x, Sy, Sz))) → S)’, where S says whatever you meant to say about +, using the token R.

The expression ‘(∀x∀y∀z: R(x, 0, x) ∧ (R(x, y, z)→R(x, Sy, Sz)))’ is true for addition, but also for many other relations, such as a ‘∀x∀y∀z: R(x, y, z)’ relation.

• I’m not sure that adding the conjunction (R(x,y,z)&R(x,y,w)->z=w) would have made things clearer...I thought it was obvious the hypothetical mathematician was just explaining what kind of steps you need to “taboo addition”

• Yes, the educational goal of that paragraph is to “taboo addition”. Nonetheless, the tabooing should be done correctly. If it is too difficult to do, then it is Eliezer’s problem for choosing a difficult example to illustrate a concept.

This may sound like nitpicking, but this website’s goal is to teach people rationality skills, as opposed to “guessing the teacher’s password”. The article spends five screens explaining why details are so important when defining the concept of a “number”, and the reader is supposed to understand it. So it’s unfortunate if that explanation is followed by another example which accidentally gets similar details wrong. My objections against the wrong formula are very similar to the in-story mathematician’s objections to the definitions of “number”: the definition is too wide.

Your suggestion: ‘∀x∀y∀z∀w: R(x, 0, x) ∧ (R(x, y, z)↔R(x, Sy, Sz)) ∧ ((R(x, y, z)∧R(x, y, w))→z=w)’

My alternative: ‘∀x∀y∀z: (R(x, 0, z)↔(x=z)) ∧ (R(x, y, z)↔R(x, Sy, Sz)) ∧ (R(x, y, z)↔R(Sx, y, Sz))’.

Both seem correct, and anyone knows a shorter (or a more legible) way to express it, please contribute.

• Shorter (but not necessarily more legible): ∀x∀y∀z: (R(x, 0, z)↔(x=z)) ∧ (R(x, Sy, z)↔R(Sx, y, z)).

• Done!

• Perfect!
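As a sanity check on this kind of axiom set for the addition relation, one can brute-force a finite fragment. This is a sketch only: a finite check is evidence rather than a proof, and the cutoff `N` for successor arguments is my own choice, not anything from the thread.

```python
# Check, on a finite fragment of the naturals, that ordinary addition
# satisfies the two clauses
#   (R(x, 0, z) <-> x = z)  and  (R(x, Sy, z) <-> R(Sx, y, z)),
# while the "always true" relation mentioned earlier in the thread fails.

N = 8  # test x, y, z in range(N); successor arguments stay below N


def satisfies_axioms(R):
    for x in range(N - 1):
        for z in range(N):
            if R(x, 0, z) != (x == z):
                return False
            for y in range(N - 1):
                if R(x, y + 1, z) != R(x + 1, y, z):
                    return False
    return True


def addition(x, y, z):
    return x + y == z


def always_true(x, y, z):
    return True


print(satisfies_axioms(addition))     # True on this fragment
print(satisfies_axioms(always_true))  # False: violates R(x, 0, z) <-> x = z
```

The second clause shifts a successor from the middle argument to the first, so repeated application reduces any R(x, y, z) to the base case R(x + y, 0, z), which the first clause forces to mean x + y = z.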

• Both seem correct, and anyone knows a shorter (or a more legible) way to express it, please contribute.

The version in the article now, ∀x∀y∀z: R(x, 0, x) ∧ (R(x, y, z)↔R(x, Sy, Sz)), is better than before, but it leaves open the possibility that R(0,0,7) holds as well as R(0,0,0). One more possibility is:

“Not in second-order logic, which can quantify over functions as well as properties. (...) It would look like, ‘∀ functions f: ((∀x∀y: f(x, 0) = x ∧ f(x, Sy) = Sf(x, y)) → Q)’ (...)”

(I guess I’m not entirely in favor of this version—ETA: compared to Kindly’s fix—because quantifying over relations surely seems like a smaller step from quantifying over properties than does quantifying over functions, if you’re new to this, but still thought it might be worth pointing out in a comment.)

• Your idea of pinning down the natural numbers using second order logic is interesting, but I don’t think that it really solves the problem. In particular, it shouldn’t be enough to convince a formalist that the two of you are talking about the same natural numbers.

Even in second order PA, there will still be statements that are independent of the axioms, like “there doesn’t exist a number corresponding to a Godel encoding of a proof that 0=S0 under the axioms of second order PA”. Thus unless you are assuming full semantics (i.e. that for any collection of numbers there is a corresponding property), there should be distinct models of second order PA for which the veracity of the above statement differs.

Thus it seems to me that all you have done with your appeal to second order logic is to change my questions about “what is a number?” into questions about “what is a property?” In any case, I’m still not totally convinced that it is possible to pin down The Natural Numbers exactly.

• I’m assuming full semantics for second-order logic (for any collection of numbers there is a corresponding property being quantified over) so the axioms have a semantic model provably unique up to isomorphism, there are no nonstandard models, the Completeness Theorem does not hold and some truths (like Godel’s G) are semantically entailed without being syntactically entailed, etc.

• OK then. As soon as you can explain to me exactly what you mean when you say “for any collection of numbers there is a corresponding property being quantified over”, I will be satisfied. In particular, what do you mean when you say “any collection”?

• If you’re already fine with the alternating quantifiers of first-order logic, I don’t see why allowing branching quantifiers would cause a problem. I could describe second order logic in terms of branching quantifiers.

• Huh. That’s interesting. Are you saying that you can actually pin down The Natural Numbers exactly using some “first order logic with branching quantifiers”? If so, I would be interested in seeing it.

• Sure:

It is not the case that: there exists a z such that for every x and x’, there exists a y depending only on x and a y’ depending only on x’ such that Q(x,x’,y,y’,z) is true

where Q(x,x’,y,y’,z) is ((x=x’ ) → (y=y’ )) ∧ ((Sx=x’ ) → (y=y’ )) ∧ ((x=0) → (y=0)) ∧ ((x=z) → (y=1))

• Cool. I agree that this is potentially less problematic than the second order logic approach. But it does still implicitly encode the idea of a function when it talks about “y depending only on x”: it essentially requires that y is a function of x, and if it’s unclear exactly which functions are allowed, you will have problems. I guess first order logic has this problem to some degree, but with alternating quantifiers, the functions that you might need to define seem closer to the type that should necessarily exist.

• Are you claiming that this term is ambiguous? In what specially favored set theory, in what specially favored collection of allowed models, is it ambiguous? Maybe the model of set theory I use has only one set of allowable ‘collections of numbers’ in which case the term isn’t ambiguous. Now you could claim that other possible models exist, I’d just like to know in what mathematical language you’re claiming these other models exist. How do you assert the ambiguity of second-order logic without using second-order logic to frame the surrounding set theory in which it is ambiguous?

• I’m not entirely sure what you’re getting at here. If we start restricting properties to only cut out sets of numbers rather than arbitrary collections, then we’ve already given up on full semantics.

If we take this leap, then it is a theorem of set theory that all set-theoretic models of the natural numbers are isomorphic. On the other hand, since not all statements about the integers can be either proven or disproven with the axioms of set theory, there must be different models of set theory which have different models of the integers within them (in fact, I can build these two models within a larger set theory).

On the other hand, if we continue to use full semantics, I’m not sure how you clarify what you mean when you say “a property exists for every collection of numbers”. Telling me that I should already know what a collection is doesn’t seem much more reasonable than telling me that I should already know what a natural number is.

• On the other hand, since not all statements about the integers can be either proven or disproven with the axioms of set theory, there must be different models of set theory which have different models of the integers within them

Doesn’t the proof of the Completeness Theorem / Compactness Theorem incidentally invoke second-order logic itself? (In the very quiet way that e.g. any assumption that the standard integers even exist invokes second-order logic.) I’m not sure but I would expect it to, since otherwise the notion of a “consistent” theory is entirely dependent on which models your set theory says exist and which proofs your integer theory says exist. Perhaps my favorite model of set theory has only one model of set theory, so I think that only one model exists. Can you prove to me that there are other models without invoking second-order logic implicitly or explicitly in any called-on lemma? Keep in mind that all mathematicians speak second-order logic as English, so checking that all proofs are first-order doesn’t seem easy.

• I am admittedly a little out of my depth here, so the following could reasonably be wrong, but I believe that the Compactness Theorem can be proved within first order set theory. Given a consistent theory, I can use the axiom of choice to extend it to a maximal consistent set of statements (i.e. so that for every P either P or (not P) is in my set). Then for every statement that I have of the form “there exists x such that P(x)”, I introduce an element x to my model and add P(x) to my list of true statements. I then re-extend to a maximal set of statements, and add new variables as necessary, until I cannot do this any longer. What I am left with is a model for my theory. I don’t think I invoked second order logic anywhere here. In particular, what I did amounts to a construction within set theory. I suppose it is the case that some set theories will have no models of set theory (because they prove that set theory is inconsistent), while others will contain infinitely many.

My intuition on the matter is that if you can state what you are trying to say without second order logic, you should be able to prove it without second order logic. You need second order logic to even make sense of the idea of the standard natural numbers. The Compactness Theorem can be stated in first order set theory, so I expect the proof to be formalizable within first order set theory.
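The “extend to a maximal consistent set” step described above has a simple propositional analogue. Below is a toy sketch of that Lindenbaum-style extension; the representation (literals as `(atom, truth_value)` pairs) and the brute-force truth-table consistency check are my own choices for illustration, and real compactness proofs avoid exactly this kind of exhaustive check.

```python
from itertools import product

ATOMS = ["p", "q", "r"]  # a finite stock of propositional atoms


def consistent(theory):
    # A "theory" here is a set of literals (atom, truth_value); it is
    # consistent iff some assignment satisfies every literal in it.
    for values in product([True, False], repeat=len(ATOMS)):
        assignment = dict(zip(ATOMS, values))
        if all(assignment[atom] == value for atom, value in theory):
            return True
    return False


def lindenbaum(theory):
    # Extend a consistent theory to a maximal one by deciding every
    # atom one way or the other, preserving consistency at each step.
    # (Assumes the input theory is consistent to begin with.)
    theory = set(theory)
    for atom in ATOMS:
        if consistent(theory | {(atom, True)}):
            theory.add((atom, True))
        else:
            theory.add((atom, False))
    return theory


maximal = lindenbaum({("p", True)})
model = dict(maximal)  # a model can be read straight off the maximal set
print(model["p"])      # True: the original theory is respected
```

Reading a model directly off the maximal set mirrors the parent comment’s point: the construction stays inside ordinary first-order set-theoretic reasoning.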

• I’m not entirely sure what you’re getting at here. If we start restricting properties to only cut out sets of numbers rather than arbitrary collections, then we’ve already given up on full semantics.

If we take this leap, then it is a theorem of set theory that all set-theoretic models of the natural numbers are isomorphic. On the other hand, since not all statements about the integers can be either proven or disproven with the axioms of set theory, there must be different models of set theory which have different models of the integers within them (in fact, if you give me an inaccessible cardinal, I can build these two models within a larger set theory).

On the other hand, if we continue to use full semantics, I’m not sure how you clarify what you mean when you say “a property exists for every collection of numbers”. Telling me that I should already know what a collection is doesn’t seem much more reasonable than telling me that I should already know what a natural number is.

• I think this is his way of connecting numbers to the previous posts. If “a property” is defined as a causal relation, which all properties are, then I think this makes sense. It doesn’t provide some sort of ultimate metaphysical justification for numbers or properties or anything, but it clarifies connections between the two and such a justification isn’t really possible anyways.

• I don’t think that I understand what you mean here.

How can these properties represent causal relations? They are things that are satisfied by some numbers and not by others. Since numbers are aphysical, how do we relate this to causal relations?

On the other hand, even with a satisfactory answer to the above question, how do we know that “being in the first chain” is actually a property? Without that, we still can’t show that there is only one chain.

• Since numbers are aphysical, how do we relate this to causal relations?

You just begged the question. Eliezer answered you in the OP:

Because you can prove once and for all that in any process which behaves like integers, 2 thingies + 2 thingies = 4 thingies. You can store this general fact, and recall the resulting prediction, for many different places inside reality where physical things behave in accordance with the number-axioms. Moreover, so long as we believe that a calculator behaves like numbers, pressing ‘2 + 2’ on a calculator and getting ‘4’ tells us that 2 + 2 = 4 is true of numbers and then to expect four apples in the bowl. It’s not like anything fundamentally different from that is going on when we try to add 2 + 2 inside our own brains—all the information we get about these ‘logical models’ is coming from the observation of physical things that allegedly behave like their axioms, whether it’s our neurally-patterned thought processes, or a calculator, or apples in a bowl.

• I can’t think of an example, but I’m thinking that if a property existed then it would be a causal relation. A property wouldn’t represent a causal relation, it would be one. I wasn’t thinking mathematically but instead in terms of a more commonplace understanding of properties as things like red and yellow and blue.

The argument made by the simple idea of truth might be a way to get us from physical states (which are causal relations) to numbers. If you believe that counting sheep is a valid operation, then quantifying color also seems fine. The reason I spoke in terms of causal relations is because I believe understanding qualities as causal relations between things allows us to deduce properties about things through a combination of Solomonoff Induction and the method described in this post.

Are you questioning the idea that numbers or properties are a quality about objects? If so, what are they?

I’m feeling confused though. If the definition of property used here doesn’t connect to or means something completely different than facts about objects, then I’m way off base. I might also be off base for other reasons. Not sure.

• I am questioning the idea that numbers (at least the things that this post refers to as numbers) are a quality about objects. Numbers, as they are described here, are an abstract logical construction.

• How come we never see anything physical that behaves like any of the non-standard models of first order PA? Given that’s the case, it seems like we can communicate the idea of numbers to other humans or even aliens by saying “the only model of first order PA that ever shows up in reality”, so we don’t need second order logic (or the other logical ideas mentioned in the comments) just to talk about the natural numbers?

• The natural numbers are supposed to be what you get if you start counting from 0. If you start counting from 0 in a nonstandard model of PA you can’t get to any of the nonstandard bits, but first-order logic just isn’t expressive enough to allow you to talk about “the set of all things that I get if I start counting from 0.” This is what allows nonstandard models to exist, but they exist only in a somewhat delicate mathematical sense and there’s no reason that you should expect any physical phenomenon corresponding to them.

If I wanted to communicate the idea of numbers to aliens, I don’t think I would even talk about logic. I would just start counting with whatever was available, e.g. if I had two rocks to smash together I’d smash the rocks together once, then twice, etc. If the aliens don’t get it by the time I’ve smashed the rocks together, say, ten times, then they’re either so bad at induction or so unfamiliar with counting that we probably can’t meaningfully communicate with them anyway.
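The point about counting never reaching the nonstandard bits can be made concrete with a toy sketch. The tuple encoding below is invented purely for illustration, and a real nonstandard model of PA is far richer than this, but it shows the shape of the situation: a standard chain plus a detached extra chain, where counting up from zero provably never leaves the standard part.

```python
# A toy structure with a standard chain ("std", n) plus an extra,
# detached chain of nonstandard elements ("nonstd", k). The successor
# operation is defined on both chains, but starting from zero and
# counting up only ever visits the standard chain.

def successor(element):
    kind, n = element
    return (kind, n + 1)


zero = ("std", 0)

visited = []
current = zero
for _ in range(1000):
    visited.append(current)
    current = successor(current)

# Counting from zero stays on the standard chain forever:
print(all(kind == "std" for kind, _ in visited))  # True
```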

• The Pirahã are unfamiliar with counting and we still can kind-of meaningfully communicate with them. I agree with the rest of the comment, though.

• I was ready to reply “bullshit”, but I guess if their language doesn’t have any cardinal or ordinal number terms …

Still, they could count with beads or rocks, à la the magic sheep-counting bucket.

It’s understandable why they wouldn’t really need counting given their lifestyle. But I wonder what they do (or did) when a neighboring tribe attacks or encroaches on their territory? Their language apparently does have words for ‘small amount’ and ‘large amount’, but how would they decide how many warriors to send to meet an opposing band?

• Still, they could count with beads or rocks, à la the magic sheep-counting bucket.

Here’s a decent argument that they probably don’t have words for numbers because they don’t count, rather than the other way round, contra pop-Whorfianism. (Otherwise I guess they’d just borrow the words for numbers from Portuguese or something, as they probably did with personal pronouns from Tupi.)

• This is what allows nonstandard models to exist, but they exist only in a somewhat delicate mathematical sense and there’s no reason that you should expect any physical phenomenon corresponding to them.

Is it just coincidence that these nonstandard models don’t show up anywhere in the empirical sciences, but real numbers and complex numbers do? I’m wondering if there is some sort of deeper reason… Maybe you were hinting at something by “delicate”?

If I wanted to communicate the idea of numbers to aliens, I don’t think I would even talk about logic.

Good point. I guess I was trying to make the point that Eliezer seems a bit obsessed with logical pinpointing (aka categoricity) in this post. (“You need axioms to pin down a mathematical universe before you can talk about it in the first place.”) Before we achieved categoricity, we already knew what mathematical structure we wanted to talk about, and afterwards, it’s still useful to add more axioms if we want to prove more theorems.

• Is it just coincidence that these nonstandard models don’t show up anywhere in the empirical sciences, but real numbers and complex numbers do?

The process by which the concepts “natural/real/complex numbers” vs. “nonstandard models of PA” were generated is very different. In the first case, mathematicians were trying to model various aspects of the world around them (e.g. counting and physics). In the second case, mathematicians were trying to pinpoint something else they already understood and ended up not quite getting it because of logical subtleties.

I’m not sure how to explain what I mean by “delicate.” It roughly means “unlikely to have been independently invented by alien mathematicians.” In order for alien mathematicians to independently invent the notion of a nonstandard model of PA, they would have to have independently decided that writing down the first-order Peano axioms is a good idea, and I just don’t find this all that likely. On the other hand, there are various routes alien mathematicians might take towards independently inventing the complex numbers, such as figuring out quantum mechanics.

Before we achieved categoricity, we already knew what mathematical structure we wanted to talk about, and afterwards, it’s still useful to add more axioms if we want to prove more theorems.

I guess Eliezer’s intended response here is something like “but when you want to explain to an AI what you mean by the natural numbers, you can’t just say The Things You Use To Count With, You Know, Those.”

• How come we never see anything physical that behaves like any of the non-standard models of first order PA?

Umm… wouldn’t they be considered “standard” in this case? I.e. matching some real-world experience?

Let’s imagine a counterfactual world in which some of our “standard” models appear non-standard. For example, in a purely discrete world (like the one consisting solely of causal chains, as EY once suggested), continuity would be a non-standard object invented by mathematicians. What makes continuity “standard” in our world is, disappointingly, our limited visual acuity.

Another example: in a world simulated on a 32-bit integer machine, natural numbers would be considered non-standard, given how all actual numbers wrap around after 2^32-1.

Exercise for the reader: imagine a world where a certain non-standard model of first order PA would be viewed as standard.

• This is basically the theme of the next post in the sequence. :)

• How come we never see anything physical that behaves like any of the non-standard models of first order PA?

Qiaochu’s answer: because PA isn’t unique. There are other (stronger/weaker) axiomatizations of natural numbers that would lead to other nonstandard models. I don’t think that answer works, because we don’t see nonstandard models of these other theories either.

wedrifid’s answer: because PA was designed to talk about natural numbers, not other things in reality that humans can tell apart from natural numbers.

My answer: because PA was designed to talk about natural numbers, and we provably did a good job. PA has many models, but only one computable model. Since reality seems to be computable, we don’t expect to see nonstandard models of PA in reality. (Though that leaves the mystery of whether/​why reality is computable.)

• First post in this sequence that lives up to the standard of the old classics. Love it.

• Yeah, but I’ve found the previous posts much more useful for coming up with clear explanations aimed at non-LWers, and I presume they’d make a better introduction to some of the core LW epistemic rationality than just throwing “The Simple Truth” at them.

• It’s a pretty hard balance to strike that’s probably different for everyone, between incomprehensibility and boringness.

• I already more-or-less knew most of the stuff in the previous posts in this sequence and still didn’t find them boring.

• Agree. When I first read The Simple Truth, I thought Eliezer was endorsing pragmatism over correspondence.

• In my opinion, Causal Diagrams and Causal Models is far superior to Timeless Causality.

I am not saying that there is anything wrong with “Timeless Causality”, or any of Eliezer’s old posts, but this sequence goes into enough depth of explanation that even someone who has not read the older sequences on Less Wrong would have a good chance of understanding it.

• You just say: ‘For every relation R that works exactly like addition, the following statement S is true about that relation.’ It would look like, ‘∀ relations R: (∀x∀y∀z: R(x, 0, x) ∧ (R(x, y, z)→R(x, Sy, Sz))) → S)’, where S says whatever you meant to say about +, using the token R.

I would change the statement to be something other than ‘S’, say ‘Q’, as S is already used for ‘successor’.

• Requesting feedback:

“Whenever a part of reality behaves in a way that conforms to the number-axioms—for example, if putting apples into a bowl obeys rules, like no apple spontaneously appearing or vanishing, which yields the high-level behavior of numbers—then all the mathematical theorems we proved valid in the universe of numbers can be imported back into reality. The conclusion isn’t absolutely certain, because it’s not absolutely certain that nobody will sneak in and steal an apple and change the physical bowl’s behavior so that it doesn’t match the axioms any more. But so long as the premises are true, the conclusions are true; the conclusion can’t fail unless a premise also failed. You get four apples in reality, because those apples behaving numerically isn’t something you assume, it’s something that’s physically true. When two clouds collide and form a bigger cloud, on the other hand, they aren’t behaving like integers, whether you assume they are or not.”

This is exactly what I argued and grounded back in this article.

Specifically, that the two premises:

1) rocks behave isomorphically to numbers, and
2) under the axioms of numbers, 2+2 = 4

jointly imply that adding two rocks to two rocks gets four rocks. (See the cute diagram.)

And yet the response on that article (which had an array of other implications and reconciliations) was pretty negative. What gives?

Furthermore, in discussions about this in person, Eliezer_Yudkowsky has (IIRC and I’m pretty sure I do) invoked the “hey, adding two apples to two apples gets four apples” argument to justify the truth of 2+2=4, in direct contradiction of the above point. What gives on that?

• Terry Tao’s 2007 post on nonfirstorderizability and branching quantifiers gives an interesting view of the boundary between first- and second-order logic. Key quote:

Moving on to a more complicated example, if Q(x,x’,y,y’) is a quaternary relation on four objects x,x’,y,y’, then we can express the statement

For every x and x’, there exists a y depending only on x and a y’ depending on x and x’ such that Q(x,x’,y,y’) is true

...but it seems that one cannot express

For every x and x’, there exists a y depending only on x and a y’ depending only on x’ such that Q(x,x’,y,y’) is true

in first order logic!

The post and comments give some well-known theorems that turn out to rely on such “branching quantifiers”, and an encoding of the predicate “there are infinitely many X” which cannot be done in first-order logic.

• I’m a little confused as to which of two positions this is advocating:

1. Numbers are real, serious things, but the way that we pick them out is by having a categorical set of axioms. They’re interesting to talk about because lots of things in the world behave like them (to some degree).

2. Mathematical talk is actually talk about what follows from certain axioms. This is interesting to talk about because lots of things obey the axioms and so exhibit the theorems (to some degree).

Both of these have some problems. The first one requires you to have weird, non-physical numbery-things. Not only this, but they’re a special exception to the theory of reference that’s been developed so far, in that you can refer to them without having a causal connection.

The second one (which is similar to what I myself would espouse) doesn’t have this problem, because it’s just talking about what follows logically from other stuff, but you do then have to explain why we seem to be talking about numbers, and also what people were doing talking about arithmetic before they knew about the Peano axioms. But the real bugbear here is that you then can’t really explain logic as part of mathematics. The usual analysis of logic that we do in maths, with the domain, interpretation, etc., can’t be the whole deal if we’re cashing out the mathematics in terms of logical implication! You’ve got to say something else about logic.

(I think the answer is, loosely, that

1. the “numbers” we talk about are mainly fictional aides to using the system, and

2. the situation of pre-axiom speakers is much like that of English speakers who nonetheless can’t explain English grammar.

3. I have no idea what to say about logic! )

I’m curious which of these (or neither) is the correct interpretation of the post, and if it’s one of them, what Eliezer’s answers are… but perhaps they’re coming in another post.

• I’m not sure exactly what Eliezer intends, but I’ll put in my two cents:

A proof is simply a game of symbol manipulation. You start with some symbols, say ‘(’, ‘)’, ‘¬’, ‘→’, ‘↔’, ‘∀’, ‘∃’, ‘P’, ‘Q’, ‘R’, ‘x’, ‘y’, and ‘z’. Call these symbols the alphabet. Some sequences of symbols are called well-formed formulas, or wffs for short. There are rules to tell which sequences of symbols are wffs; these rules are called a grammar. Some wffs are called axioms. There is another important symbol that is not one of the symbols you chose—this is the ‘⊢’ symbol. A declaration is the ‘⊢’ symbol followed by a wff. A legal declaration is either the ‘⊢’ symbol followed by an axiom or the result of an inference rule. An inference rule is a rule that declares that a declaration of a certain form is legal, given that certain declarations of other forms are legal. A famous inference rule called modus ponens is part of a formal system called first-order logic. This rule says: “If ‘⊢ P’ and ‘⊢ (P → Q)’ (where P and Q are replaced with some wffs) are legal declarations, then ‘⊢ Q’ is also a legal declaration.” By the way, a formal system is just a specific alphabet, grammar, set of axioms, and set of inference rules. You also might like to note that if ‘⊢ P’ (where P is replaced with some wff) is a legal declaration, then we also call P a theorem. So now we know something: In a formal system, all axioms are theorems.
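The symbol-game picture above can be sketched concretely. The following toy closure (the names and the two sample axioms are mine, purely illustrative) generates theorems from axioms using only modus ponens, with no reference to what the strings "mean":

```python
# Wffs are just strings; these two axioms are chosen arbitrarily.
axioms = {"P", "(P → Q)"}

def modus_ponens(theorems):
    """From ⊢ P and ⊢ (P → Q), conclude ⊢ Q."""
    new = set()
    for imp in theorems:
        # Naive parsing of "(antecedent → consequent)"; fine for this
        # toy example's flat implications, not for nested ones.
        if imp.startswith("(") and imp.endswith(")") and " → " in imp:
            antecedent, consequent = imp[1:-1].split(" → ", 1)
            if antecedent in theorems:
                new.add(consequent)
    return new

theorems = set(axioms)          # every axiom is a theorem
while True:                     # close under the single inference rule
    derived = modus_ponens(theorems) - theorems
    if not derived:
        break
    theorems |= derived

print(sorted(theorems))         # ['(P → Q)', 'P', 'Q']
```

The mechanical point is that ‘Q’ earned its ‘⊢’ purely by string manipulation; whether any of the declarations are *true* is a separate, semantic question.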

The second thing to note is that a formal system does not necessarily have anything to do with even propositional logic (let alone first- or second-order logic!). Consider the MIU system (open link in WordPad, on Windows), for example. It has four inference rules for just messing around with the order of the letters, ‘M’, ‘I’, and ‘U’! That doesn’t have to do with the real world or even math, does it?

The third thing to note is that, though a formal system can tell us what wffs are theorems, it cannot (directly) tell us what wffs are not theorems. And hence we have the MU puzzle. This asks whether “MU” is a theorem in the MIU system. If it is, then you only need the MIU system to demonstrate this, but if it is not, you need to use reasoning from outside of that system.
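For a concrete feel for theorems-without-semantics, here is a sketch of the MIU system's four rules (as given in Hofstadter's book, which the link above points to), with a bounded enumeration of its theorems:

```python
# The four MIU inference rules, applied blindly to strings.
def miu_successors(s):
    out = set()
    if s.endswith("I"):              # Rule 1: xI  -> xIU
        out.add(s + "U")
    if s.startswith("M"):            # Rule 2: Mx  -> Mxx
        out.add("M" + 2 * s[1:])
    for i in range(len(s) - 2):      # Rule 3: xIIIy -> xUy
        if s[i:i+3] == "III":
            out.add(s[:i] + "U" + s[i+3:])
    for i in range(len(s) - 1):      # Rule 4: xUUy -> xy
        if s[i:i+2] == "UU":
            out.add(s[:i] + s[i+2:])
    return out

# Enumerate theorems reachable from the axiom "MI" without ever
# exceeding length 8 (the full theorem set is infinite).
theorems, frontier = {"MI"}, {"MI"}
while frontier:
    frontier = {t for s in frontier for t in miu_successors(s)
                if len(t) <= 8} - theorems
    theorems |= frontier

print("MU" in theorems)  # False -- and no search bound would help:
# every rule keeps (number of I's) mod 3 nonzero, but "MU" has no I's.
```

The bounded search by itself can't prove "MU" underivable, which is exactly the point of the MU puzzle: settling it requires the mod-3 reasoning from *outside* the system.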

As other commenters have already noted, mathematicians are not thinking about ZFC set theory when they prove things (that’s not a bad thing; they’d never manage to prove any new results if they had to start from foundations for every proof!). However, mathematicians should be fairly confident that the proofs they create could be reduced down to proofs from the low-level axioms. So Eliezer is definitely right to be worried when a mathematician says “A proof is a social construct – it is what we need it to be in order to be convinced something is true. If you write something down and you want it to count as a proof, the only real issue is whether you’re completely convincing.”. A proof is a social construct, but it is one, very, very specific kind of social construct. The axioms and inference rules of first-order Peano arithmetic are symbolic representations of our most fundamental notion of what the natural numbers are. The reason for propositional logic, first-order logic, second-order logic, Peano arithmetic, and the scientific method is that humans have little things called “cognitive biases”. We are convinced by way too many things that should be utterly unconvincing. To say that a proof is a convincing social construct is...technically...correct (oh how it pains me to say that!)...but that very vague part of what it means for something to be a proof seems to imply that a proof is the utter antithesis of what it was meant for! A mathematical proof should be the most convincing social construct we have, because of how it is constructed.

First-order Peano arithmetic has just a few simple axioms, and a couple simple inference rules, and its symbols have a clear intended interpretation (in terms of the natural numbers (which characterize parts of the web of causality as already explained in the OP)). The truth of a few simple axioms and validity of a couple simple inference rules can be evaluated without our cognitive biases getting in the way. On the other hand, it’s probably not a good idea to make “There is a prime number larger than any given natural number.” an axiom of a formal system about the natural numbers, because it is not an immediate part of our intuitive understanding of how causal systems that behave according to the rules of the natural numbers behave. We as humans would have to be very, very, confused if a theorem of first-order Peano arithmetic (because we are so sure that its axioms are true and its inference rules are valid) turned out to be the negation of another theorem of Peano arithmetic, but not so confused if the same happened for ZFC set theory, because we do not so readily observe infinite sets in our day-to-day experience. The axioms and inference rules of first-order Peano arithmetic more directly correspond to our physical reality than those of ZFC set theory do (and the axioms and inference rules of the MIU system have nothing to do with our physical reality at all!). If a contradiction in first-order Peano arithmetic were found, though, life would go on. First-order Peano arithmetic does have a lot to do with our physical reality, but not all of it does. It inducts to numbers like 3^^^3 that we will probably never interact with. The ultrafinitists would be shouting “Told you so!”
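For concreteness, the "few simple axioms" of first-order Peano arithmetic are usually given as the following (with induction as an axiom schema, one instance per formula φ of the language):

```latex
\begin{align*}
&\forall x\; \lnot (S x = 0) \\
&\forall x\, \forall y\; (S x = S y \rightarrow x = y) \\
&\forall x\; (x + 0 = x) \\
&\forall x\, \forall y\; \bigl(x + S y = S(x + y)\bigr) \\
&\forall x\; (x \cdot 0 = 0) \\
&\forall x\, \forall y\; (x \cdot S y = x \cdot y + x) \\
&\bigl(\varphi(0) \land \forall x\,(\varphi(x) \rightarrow \varphi(S x))\bigr) \rightarrow \forall x\,\varphi(x)
\end{align*}
```

Each of the first six can be checked directly against our intuitive picture of counting; only the schema quantifies (metalinguistically) over formulas.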

Now I have said enough to give my direct response to the comment I am replying to. First of all, the dichotomy between “logic” and “mathematics” can be dissolved by referring to “formal systems” instead. A formal system is exactly as entwined with reality as its axioms and inference rules are. In terms of instrumental rationality, the more exotic theorems of ZFC set theory (and MIU) really don’t help us, unless we intrinsically enjoy considering the question “What if there were (even though we have no evidence that this is the case) a platonic realm of sets? How would it behave?”

When used as means to an end, the point of a formal system is to correct for our cognitive biases. In other words, the definition of a proof should state that a proof is a “convincing demonstration that should be convincing”, to begin with. I suspect Eliezer is so concerned with the Peano axioms because computer programs happen to evidently behave in a very, very mathematical way, and he believes that eventually a computer program will decide the fate of humanity. I share his concerns; I want a mathematical argument that the General Artificial Intelligence that will be created will be Friendly, not anything that might “convince” a few uninformed government officials.

• A few things:

1. I don’t think we disagree about the social construct thing: see my other comment where I’m talking about that.

2. It sounds like you pretty much come down in favour of the second position that I articulated above, just with a formalist twist. Mathematical talk is about what follows from the axioms; obviously only certain sets of axioms are worth investigating, as they’re the ones that actually line up with systems in the world. I agree so far, but you think that there is no notion of logic beyond the syntactic?

First of all, the dichotomy between “logic” and “mathematics” can be dissolved by referring to “formal systems” instead.

Aren’t you just dropping the distinction between syntax and semantics here? One of the big points of the last few posts has been that we’re interested in the semantic implications, and the formal systems are a (sound) syntactic means of reaching true conclusions. From your post it sounds like you’re a pretty serious formalist, though, so that may not be a big deal to you.

• Definitely position two.

I would describe first-order logic as “a formal encapsulation of humanity’s most fundamental notions of how the world works”. If it were shown to be inconsistent, then I could still fall back to something like intuitionistic logic, but from that point on I’d be pretty skeptical about how much I could really know about the world, beyond that which is completely obvious (gravity, etc.).

What did I say that implied that I “think that there is no notion of logic beyond the syntactic”? I think of “logic” and “proof” as completely syntactic processes, but the premises and conclusions of a proof have to have semantic meaning; otherwise, why would we care so much about proving anything? I may have implied something that I didn’t believe, or I may have inconsistent beliefs regarding math and logic, so I’d actually appreciate it if you pointed out where I contradicted what I just said in this comment (if I did).

• Looking back, it’s hard to say what gave me that impression. I think I was mostly just confused as to why you were spending quite so much time going over the syntax stuff ;) And

First of all, the dichotomy between “logic” and “mathematics” can be dissolved by referring to “formal systems” instead.

made me think that you thought that all logical/mathematical talk was just talk of formal systems. That can’t be true if you’ve got some semantic story going on: then the syntax is important, but mainly as a way to reach semantic truths. And the semantics don’t have to mention formal systems at all. If you think that the semantics of logic/mathematics is really about syntax, then that’s what I’d think of as a “formalist” position.

• Oh, I think I may understand your confusion, now. I don’t think of mathematics and logic as equals! I am more confident in first-order logic than I am in, say, ZFC set theory (though I am extremely confident in both). However, formal system-space is much larger than the few formal systems we use today; I wanted to emphasize that. Logic and set theory were selected for because they were useful, not because they are the only possible formal ways of thinking out there. In other words, I was trying to right the wrong question, why do mathematics and logic transcend the rest of reality?

• In contrast with my esteemed colleague RichardKennaway, I think it’s mostly #2. Before the Peano axioms, people talking about numbers might have been talking about any of a large class of things which discrete objects in the real world mostly model. It was hard to make progress in math past a certain level until someone pointed out axiomatically exactly which things-that-discrete-objects-in-the-real-world-mostly-model it would be most productive to talk about.

Concordantly, the situation of pre-axiom speakers is much like that of people from Scotland trying to talk to people from the American South and people from Boston, when none of them knows the rules of their grammar. Edit: Or, to be more precise, it’s like two scots speakers as fluent as Kawoomba talking about whether a solitary, fallen tree made a “sound,” without defining what they mean by sound.

• Aye, right. Yer bum’s oot the windae, laddie. Ye dinna need tae been lairnin a wee Scots tae unnerstan, it’s gaein be awricht! Ane leid is enough.

• What about “both ways simultaneously, the distinction left ambiguous most of the time because it isn’t useful”?

• EY seems to be taken with the resemblance between a causal diagram and the abstract structure of axioms, inferences and theorems in mathematical logic. But there are differences: with causality, our evidence is the latest causal output, the leaf nodes. We have to trace back to the Big Bang from them. However, in maths we start from axioms, and cannot get directly to the theorems or leaf nodes. We could see this process as exploring a pre-existing territory, but it is hard to see what this adds, since the axioms and rules of inference are sufficient for truth, and it is hard to see, in EY’s presentation, how literally he takes the idea.

• Er, no, causal models and logical implications seem to me very different in how they propagate modularly. Unifying the two is going to be troublesome.

• We could see this process as exploring a pre-existing territory, but it is hard to see what this adds, since the axioms and rules of inference are sufficient for truth, and it is hard to see, in EY’s presentation, how literally he takes the idea.

It’s useful for reasoning heuristically about conjectures.

• Could I have an example?

• I would read this:

axioms pin down that we’re talking about numbers as opposed to something else.

as:

axioms pin down that we’re talking about some system that behaves like numbers as opposed to something else.

Lots of things in both real and imagined worlds behave like numbers. It’s most convenient to pick one of them and call them “The Numbers” but this is really just for the sake of convenience and doesn’t necessarily give them elevated philosophical status. That would be my position anyway.

• The Peano Arithmetic talks about the Successor function, and jazz. Did you know that the set of finite strings over a single-symbol alphabet also satisfies the Peano Axioms? Did you know that in ZFC, defining the set of all sets containing only other members of the parent set with lower cardinality, and then saying {} is a member, obeys the Peano Axioms? Did you know that saying you have a Commutative Monoid with right division, where multiplication with something other than identity always yields a new element and the set {1} is productive, obeys the Peano Axioms? Did you know the even naturals obey the Peano Axioms? Did you know any fully ordered set with an infimum but no supremum obeys the Axioms?

There is no such thing as “Numbers,” only things satisfying the Peano Axioms.
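The first of those models can be sketched directly (the encoding choices here are mine): finite strings over a one-symbol alphabet, with the empty string as 0, "append a symbol" as successor, and addition defined by the usual Peano recursion.

```python
zero = ""                       # 0 is the empty string

def succ(x):
    """Successor: append one more copy of the single symbol."""
    return x + "|"

def add(x, y):
    """Addition by the Peano recursion: x + 0 = x, x + Sy = S(x + y)."""
    if y == zero:
        return x
    return succ(add(x, y[:-1]))  # y[:-1] is the predecessor of y

two = succ(succ(zero))           # "||"
print(add(two, two))             # prints ||||  -- this model's "four"
```

Nothing about the strings is numerical; they satisfy the axioms, and that is all "being the numbers" requires of them.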

• Did you know that the set of finite strings of a single symbol alphabet also satisfies the Peano Axioms?

Surely the set of finite strings in an alphabet of no-matter-how-many-symbols satisfies the Peano axioms? e.g. using the English alphabet (with A=0, B=S(A), C=S(B)....AA=S(Z), AB=S(AA), etc would make a base-26 system).

• A single-symbol alphabet is more interesting (empty string = 0, successor function = append another symbol). The system you describe is more succinctly described using a concatenation operator:

• 0 = 0, 1 = S0, 2 = S1 … 9 = S8.

• For all b in {0,1,2,3,4,5,6,7,8,9}, a in N: ab = a × S9 + b

From these definitions we get, example-wise:

• 10 = 1 × S9 + 0 = SSSSSSSSSS0
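Read computationally (my rendering of the definitions above), the rule "ab = a × S9 + b" just interprets a decimal numeral digit by digit, and "10" indeed unfolds to ten successors:

```python
def value(numeral):
    """Interpret a decimal string by the rule value(ab) = value(a) × S9 + b."""
    if len(numeral) == 1:
        return int(numeral)              # base cases: 0 = 0, 1 = S0, ..., 9 = S8
    a, b = numeral[:-1], int(numeral[-1])
    return value(a) * 10 + b             # "a × S9 + b", where S9 is ten

def successor_notation(numeral):
    """Write the value back out as S...S0, as in the example above."""
    return "S" * value(numeral) + "0"

print(successor_notation("10"))          # prints SSSSSSSSSS0
```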

• I’m not quite sure what you’re saying here—that “Numbers” don’t exist as such but “the even naturals” do exist?

• We don’t know whether the universe is finite or not. If it is finite, then there is nothing in it that fully models the natural numbers. Would we then have to say that the numbers did not exist? If the system that we’re referring to isn’t some physical thing, what is it?

• Finite subsets of the naturals still behave like naturals.

• Not precisely. In many ways, yes, but for example they don’t model the axiom of PA that says that every number has a successor.

• I’ve realised that I’m slightly more confused on this topic than I thought.

As non-logically omniscient beings, we need to keep track of hypothetical universes which are not just physically different from our own, but which don’t make sense—i.e. they contain logical contradictions that we haven’t noticed yet.

For example, let T be a Turing machine where we haven’t yet established whether or not T halts. Then one of the following is true but we don’t know which one:

• (a) The universe is infinite and T halts

• (b) The universe is infinite and T does not halt

• (c) The universe is finite and T halts

• (d) The universe is finite and T does not halt

If we then discover that T halts, we not only assign zero probability to (b) and (d), we strike them off the list entirely. (At least that’s how I imagine it; I haven’t yet heard anyone describe approaches to logical uncertainty.)

But it feels like there should also be (e) - “the universe is finite and the question of whether or not T halts is meaningless”. If we were to discover that we lived in (e) then all infinite universes would have to be struck off our list of meaningful hypothetical universes, since we are viewing hypothetical universes as mathematical objects.

But it’s hard to imagine what would constitute evidence for (or against) (e). So after 5 minutes of pondering, that more or less maps out my current state of confusion.

• I think you’re confused if you think the finitude of the universe matters in answering the mathematical question of whether T halts. Answering that question may be of interest for then figuring out whether certain things in our universe that behave like Turing machines behave in certain ways, but the mathematical question is independent.

Your confusion is that you think there need to be objects of some kind that correspond to mathematical structures that we talk about. Then you’ve got to figure out what they are, and that seems to be tricky however you cut it.

• I agree that the finitude of the universe doesn’t matter in answering the mathematical question of whether T halts. I was pondering whether the finitude of the universe had some bearing on whether the question of T halting is necessarily meaningful (in an infinite universe it surely is meaningful, in a finite universe it very likely is but not so obviously so).

• Surely if the infinitude of the universe doesn’t affect that statement’s truth, it can’t affect that statement’s meaningfulness? Seems pretty obvious to me that the meaning is the same in a finite and an infinite universe: you’re talking about the mathematical concept of a Turing machine in both cases.

• Conditional on the statement being meaningful, infinitude of the universe doesn’t affect the statement’s truth. If the meaningfulness is in question then I’m confused so wouldn’t assign very high or low probabilities to anything.

Essentially:

• I have a very strong intuition that there is a unique (up to isomorphism) mathematical structure called the “non-negative integers”

• I have a weaker intuition that statements in second-order logic have a unique meaningful interpretation

• I have a strong intuition that model semantics of first-order logic is meaningful

• I have a very strong intuition that the universe is real in some sense

It’s possible that my intuition might be wrong though. I can picture the integers in my mind but my picture isn’t completely accurate—they basically come out as a line of dots with a “going on forever” concept at the end. I can carry on pulling dots out of the “going on forever”, but I can’t ever pull all of them out because there isn’t room in my mind.

Any attempt to capture the integers in first-order logic will permit nonstandard models. From the vantage point of ZF set theory there is a single “standard” model, but I’m not sure this helps—there are just nonstandard models of set theory instead. Similarly I’m not sure second-order logic helps as you pretty much need set theory to define its semantics.

So if I’m questioning everything it seems I should at least be open to the idea of there being no single model of the integers which can be said to be “right” in a non-arbitrary way. I’d want to question first order logic too, but it’s hard to come up with a weaker (or different) system that’s both rigorous and actually useful for anything.

I’ve realized one thing though (based on this conversation) - if the universe is infinite, defining the integers in terms of the real world isn’t obviously the right thing to do, as the real world may be following one of the nonstandard models of the integers. Updating in favor of meaningfulness not being dependent on infinitude of universe.

• I’m a little confused as to which of two positions this is advocating:

1. Numbers are real, serious things, but the way that we pick them out is by having a categorical set of axioms. They’re interesting to talk about because lots of things in the world behave like them (to some degree).

2. Mathematical talk is actually talk about what follows from certain axioms. This is interesting to talk about because lots of things obey the axioms and so exhibit the theorems (to some degree).

I read it as (1), with a side order of (2). Mathematical talk is also about what follows from certain axioms. The axioms nail it down so that mathematicians can be sure what other mathematicians are talking about.

Both of these have some problems. The first one requires you to have weird, non-physical numbery-things.

Not weird, non-physical numbery-things, just non-physical numbery-things. If they seem weird, maybe it’s because we only noticed them a few thousand years ago.

Not only this, but they’re a special exception to the theory of reference that’s been developed so far, in that you can refer to them without having a causal connection.

No more than a magnetic field is a special exception to the theory of elasticity. It’s just a phenomenon that is not described by that theory.

• But EY insists that maths does come under correspondence/​reference!

“to delineate two different kinds of correspondence within correspondence theories of truth.”

• I think it’s worth mentioning explicitly that the second-order axiom introduced is induction.

• Do we need a process for figuring out which objects are likely to behave like numbers? And as good Bayesians, for figuring out how likely that is?

• Er, yes? I mean it’s not like we’re born knowing that cars behave like integers and outlet electricity doesn’t, since neither of those things existed ancestrally.

• I’m pretty sure that we’re born knowing cars and carlike objects behave like integers.

• I think our eyes (or visual cortex) knows that certain things (up to 3 or 4 of them) behave like integers since it bothers to count them automatically.

• Wait, what? We may not be born knowing what cars and electricity are, but I would be surprised if we weren’t born with an ability (or the capacity to develop an ability) to partition our model of a car-containing section of universe into discrete “car” objects, while not being able to do the same for “electric current” objects.

• The ancestral environment included people (who behave like integers over moderate time spans) and water (which doesn’t behave like integers).

The better question would have been “how do people identify objects which behave like integers?”.

• The better question would have been “how do people identify objects which behave like integers?”.

The same way we identify objects which satisfy any other predicate? We determine whether or not something is a cat by comparing it to our knowledge of what cats are like. We determine whether or not something is dangerous by comparing it to our knowledge of what dangerous things are like.

Why do you ask this question specifically of the integers? Is there something special about them?

• Water does behave like very large integers.

• So does electricity. (And it does so exactly, whereas water contains different isotopes of hydrogen and oxygen...)

Anyway, I seem to recall seeing a Wikipedia article about some obscure language where the word for ‘water’ is grammatically plural, and thinking ‘who knows if they’ve coined a backformed singular for “water molecule”, at least informally or jocularly’.

(Note also that natural languages don’t seem to have fixed rules for whether nouns like “rice” or “oats”—i.e. collections of small objects you could count but you would never normally bother to—are mass nouns or plural nouns.)

• If you’re going to insist that different isotopes disrupt the whole number quality of water, then fractional-charge quasiparticles would like a word with your allegation that electricity can be completely and exactly modeled using integers.

• How do you determine whether a physical process “behaves like integers”? The second-order axiom of induction sounds complicated, I cannot easily check that it’s satisfied by apples. If you use some sort of Bayesian reasoning to figure out which axioms work on apples, can you describe it in more detail?

• I don’t have an answer to the specific question, only to the class of questions. To approach understanding this, we need to distinguish between reality and what points to reality, i.e, symbols. Our skill as humans is in the manipulation of symbols, as a kind of simulation of reality, with greater or lesser workability for prediction, based in prior observation, of new observations.

“Apples” refers, internally, to a set of responses we created through our experience. We respond to reality as an “apple” or as a “set of apples,” only out of our history. It’s arbitrary. Counting, and thus “behavior like integers” applies to the simplified, arbitrary constructs we call “apples.” Reality is not divided into separate objects, but we have organized our perceptions into named objects.

Examples. If an “apple” is a unique discriminable object, say all apples have had a unique code applied to them, then what can be counted is the codes. Integer behavior is a behavior of codes.

Unique apples can be picked up one at a time, being transferred to one basket or another. However, real apples are not a constant. Apples grow and apples rot. Is a pile of rotten apple an “apple”? Is an apple seed an apple? These are questions with no “true” answer; rather, we choose answers. We end up with a binary state for each possible object: “yes, apple,” or “no, not apple.” We can count these states; they exist in our mind.

If “apple” refers to a variety, we may have Macintosh, Fuji, Golden delicious, etc.

So I have a basket with two apples in it. That is, five pieces of fruit that are Macintosh and three that are Fuji.

I have another basket with two apples in it. That is, one Fuji and one Golden Delicious.

I put them all into one basket. How many apples are in the basket? 2 + 2 = 3.

The question about integer behavior is about how categories have been assembled. If “apple” refers to an individual piece of intact fruit, we can pick it up, move it around, and it remains the same object, it’s unique and there is no other the same in the universe, and it belongs to a class of objects that is, again, unique as a class, the class is countable and classes will display integer behavior.

That’s as far as I’ve gotten with this. “Integer behavior” is not a property of reality, per se, but of our perceptions of reality.
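One operational reading of the above (my framing, not the commenter's): part of "these objects behave like integers" is that counts don't depend on the order of counting, and that pooling two disjoint collections adds their counts. Whether that holds depends entirely on how the category has been assembled, as the apple-varieties example shows:

```python
import random

def count(objects):
    """Counting shouldn't depend on the order we encounter objects in."""
    objects = list(objects)
    random.shuffle(objects)
    return sum(1 for _ in objects)

basket1 = ["macintosh"] * 5 + ["fuji"] * 3        # eight pieces of fruit
basket2 = ["fuji", "golden delicious"]

# Counting *pieces of fruit* behaves like integer addition:
assert count(basket1) + count(basket2) == count(basket1 + basket2)

# Counting *varieties* does not -- pooling merges the shared Fuji category:
print(len(set(basket1)), "+", len(set(basket2)),
      "->", len(set(basket1 + basket2)))          # prints 2 + 2 -> 3
```

Same physical fruit, two choices of category; only one of them picks out a collection that models the Peano axioms' addition.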

• Well, it comes from the fact that the apples in a bowl are exclusively just that, as verified by your Bayesian reasoning. There are no other “chains” of successors (shadow apples? I can’t even imagine a good metaphor).

So, now you in fact have that bowl of apples narrowed down to {0, S0, SS0, SSS0, …} which is isomorphic to the natural numbers, so all other natural number properties will be reflected there.

• For thousands of years, mathematicians tried proving the parallel postulate from Euclid’s four other postulates, even though there are fairly simple counterexamples which show such a proof to be impossible. I suspect that at least part of the reason for this delay is a failure to appreciate this post’s point: that a “straight line”, like a “number”, has to be defined/specified by a set of axioms, and that a great circle is in fact a “straight line” as far as the first four of Euclid’s postulates are concerned.

• That’s not correct. Elliptic geometry fails to satisfy some of the other postulates, depending on how they are phrased. I’m not too familiar with the standard ways of making Euclid’s postulates rigorous, but if you’re looking at Hilbert’s axioms instead, then elliptic geometry fails to satisfy O3 (the third order axiom): if three points A, B, C are on a line, then any of the points is between the other two. Possibly some other axioms are violated as well.

Notably, elliptic geometry does not contain any parallel lines, while it is a theorem of neutral geometry that parallel lines do in fact exist.

Hyperbolic geometry was actually necessary to prove the independence of Euclid’s fifth postulate, and few would call it a “fairly simple counterexample”.

I agree that introducing elliptic geometry (and other simple examples like the Fano plane) earlier on in history would have made the discussion of Euclid’s fifth postulate much more coherent much sooner.

• Awesome, I was looking for a good explanation of the Peano axioms!

About six months ago I had a series of arguments with my housemate, who’s been doing a philosophy degree at a Catholic university. He argued that I should leave the door open for some way other than observation to gather knowledge, because we had things like maths giving us knowledge in this other way, which meant we couldn’t rule out some other such way to discover, say, ethical or aesthetic truths.

I couldn’t convince him that all we could do in ethics was reason from axioms, because he didn’t understand that maths was just reasoning from axioms—and I didn’t actually understand the Peano axioms, so I couldn’t explain them.

So, thanks for the post.

• “The axioms aren’t things you’re arbitrarily making up, or assuming for convenience-of-proof, about some pre-existent thing called numbers. You need axioms to pin down a mathematical universe before you can talk about it in the first place. The axioms are pinning down what the heck this ‘NUM-burz’ sound means in the first place—that your mouth is talking about 0, 1, 2, 3, and so on.”

Ok NOW I finally get the whole Peano arithmetic thing. …Took me long enough. Thanks kindly, unusually-fast-thinking mathematician!

• The boundary between physical causality and logical or mathematical implication doesn’t always seem to be clearcut. Take two examples.

(1) The product of two and an integer is an even integer. So if I double an integer I will find that the result is even. The first statement is clearly a timeless mathematical implication. But by recasting the equation as a procedure I introduce both an implied separation in time between action and outcome, and an implied physical embodiment that could be subject to error or interruption. Thus the truth of the second formulation strictly depends on both a mathematical fact and physical facts.

(2) The endpoint of a physical process is causally related to the initial conditions by the physical laws governing the process. The sensitivity of the endpoint to the initial conditions is a quite separate physical fact, but requires no new physical laws: it is a mathematical implication of the physical laws already noted. Again, the relationship depends on both physical and mathematical truths.

Is there a recognized name for such hybrid cases? They could perhaps be described as “quasi-causal” relationships.

• Why does 2 + 2 come out the same way each time? Never mind the question of why the laws of physics are stable—why is logic stable? Of course I can’t imagine it being any other way, but that’s not an explanation.

My short answer is “because we live in a causal universe”.

To expand on that:

Logic is a process that has been specifically designed to be stable. Any process that has gone through a design specifically intended to make it stable, and refined for stability over generations, is going to have a higher probability of being stable. Logic, in short, is more likely than anything else in the universe to be stable.

So then the question is not why logic specifically is stable—that is by design—but rather whether it is possible for anything in the universe to be stable. And there is one thing that does appear to be stable; that if you have the same cause, then you will have the same effect. That the universe is (at least mostly) causal. It is that causality that gives logic its stability, as far as I can see.

• “Because if you had another separated chain, you could have a property P that was true all along the 0-chain, but false along the separated chain. And then P would be true of 0, true of the successor of any number of which it was true, and not true of all numbers.”

But the axiom schema of induction does not completely exclude nonstandard numbers. Granted, if I prove some property P for P(0) and prove that for all n, P(n) ⇒ P(n+1), then P(n) holds for all n, and I have excluded the possibility of some nonstandard number n for which P(n) fails. But there are some properties which cannot be proved true or false in Peano Arithmetic, and whose truth can therefore be altered by the presence of nonstandard numbers.

Can you give me a property P which is true along the zero-chain but necessarily false along a separated chain that is infinitely long in both directions? I do not believe this is possible but I may be mistaken.

• Can you give me a property P which is true along the zero-chain but necessarily false along a separated chain that is infinitely long in both directions?

Pn(x) is “x is the nth successor of 0” (the 0th successor of a number is itself). P(x) is “there exists some n such that Pn(x)”.

• I don’t see how you would define Pn(x) in the language of PA.

Let’s say we used something like this:

``````Pn(x) iff ((0 + n) = x)
``````

Let’s look at the definition of +, a function symbol that our model is allowed to define:

``````a + 0 = a
a + S(b) = S(a + b)
``````

“x + 0 = x” should work perfectly fine for nonstandard numbers.

So going back to P(x):

“there exists some n such that ((0 + n) = x)”

for a nonstandard number x, does there exist some number n such that ((0+n) = x)? Yup, the nonstandard number x! Set n=x.

Oh, but when you said nth successor you meant n had to be standard? Well, that’s the whole problem isn’t it!
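The recursive definition of + quoted above can be run directly on a toy encoding of numerals (a sketch of the standard model only; `Zero`, `S`, `add`, and `to_int` are made-up names, and nonstandard elements, by their nature, have no such finite term representation):

```python
# Toy Peano numerals: Zero() is the term 0, S(n) is the successor term Sn.
class Zero:
    def __repr__(self):
        return "0"

class S:
    def __init__(self, pred):
        self.pred = pred
    def __repr__(self):
        return "S" + repr(self.pred)

def add(a, b):
    # a + 0 = a
    if isinstance(b, Zero):
        return a
    # a + S(b') = S(a + b')
    return S(add(a, b.pred))

def to_int(n):
    # Count successor symbols; only works because every term here is standard.
    count = 0
    while isinstance(n, S):
        count += 1
        n = n.pred
    return count

two = S(S(Zero()))
print(repr(add(two, two)))  # SSSS0
```

Every number representable in this encoding is reachable from 0 by finitely many successors, which is exactly why such a sketch cannot exhibit the nonstandard elements under discussion.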

• But any nonstandard number is not an nth successor of 0 for any n, even nonstandard n (whatever that would mean). So your rephrasing doesn’t mean the same thing, intuitively—P is, intuitively, “x is reachable from 0 using the successor function”.

Couldn’t you say:

• P0: x = 0

• PS0: x = S0

• PSS0: x = SS0

and so on, defining a set of properties (we can construct these inductively, and so there is no Pn for nonstandard n), and say P(x) is “x satisfies one such property”?
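The inductive construction of this family of properties can be sketched mechanically, with strings standing in for formulas (illustrative only; `numeral` and `property_for` are invented helper names):

```python
def numeral(n):
    # The PA term with n successor symbols: 0, S0, SS0, ...
    return "S" * n + "0"

def property_for(n):
    # One finite first-order formula per standard n: the property "x = S^n 0".
    return f"x = {numeral(n)}"

# Each individual formula is finite, but the family is infinite, and
# "x satisfies one such property" quantifies over the whole family.
props = [property_for(n) for n in range(4)]
print(props)  # ['x = 0', 'x = S0', 'x = SS0', 'x = SSS0']
```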

• An infinite number of axioms like in an axiom schema doesn’t really hurt anything, but you can’t have infinitely long single axioms.

``````∀x((x = 0) ∨ (x = S0) ∨ (x = SS0) ∨ (x = SSS0) ∨ ...)
``````

is not an option. And neither is the axiom set

``````P0(x) iff x = 0
PS0(x) iff x = S0
PSS0(x) iff x = SS0
...
∀x(P0(x) ∨ PS0(x) ∨ PSS0(x) ∨ PSSS0(x) ∨ ...)
``````

We could instead try the axioms

``````P(0, x) iff x = 0
P(S0, x) iff x = S0
P(SS0, x) iff x = SS0
...
∀x(∃n(P(n, x)))
``````

but then again we have the problem of n being a nonstandard number.

• What is n?

• It’s non-strictly-mathematical shorthand for this.

• Not sure if I understand the point of your argument.

Are you saying that in reality every property P has actually three outcomes: true, false, undecidable? And that those always decidable, like e.g. “P(n) <-> (n = 2)” cannot be true for all natural numbers, while those which can be true for all natural numbers, but mostly false otherwise, are always undecidable for… some other values?

Can you give me a property P which is true along the zero-chain but necessarily false along a separated chain that is infinitely long in both directions? I do not believe this is possible but I may be mistaken.

I don’t know.

Let’s suppose that for any specific value V in the separated chain it is possible to make such property PV. For example “PV(x) <-> (x <> V)”. And let’s suppose that it is not possible to make one such property for all values in all separated chains, except by saying something like “P(x) <-> there is no such PV which would be true for all numbers in the first chain and false for x”.

What would that prove? Would it contradict the article? How specifically?

• Are you saying that in reality every property P has actually three outcomes: true, false, undecidable?

By Gödel’s incompleteness theorem, yes, unless your theory of arithmetic has a non-recursively-enumerable set of axioms or is inconsistent.

And that those always decidable, like e.g. “P(n) <-> (n = 2)” cannot be true for all natural numbers, while those which can be true for all natural numbers, but mostly false otherwise, are always undecidable for… some other values?

I’m having trouble understanding this sentence but I think I know what you are asking about.

There are some properties P(x) which are true for every x in the 0 chain, however, Peano Arithmetic does not include all these P(x) as theorems. If PA doesn’t include P(x) as a theorem, then it is independent of PA whether there exist nonstandard elements for which P(x) is false.

Let’s suppose that for any specific value V in the separated chain it is possible to make such property PV. What would that prove? Would it contradict the article? How specifically?

I think this is what I am saying I believe to be impossible. You can’t just say “V is in the separated chain”. V is a constant symbol. The model can assign constants to whatever object in the domain of discourse it wants, unless you add axioms forbidding it.

Honestly I am becoming confused. I’m going to take a break and think about all this for a bit.

• If our axiom set T is independent of a property P about numbers then by definition there is nothing inconsistent about the theory T1 = “T and P” and also nothing inconsistent about the theory T2= “T and not P”.

To say that they are not inconsistent is to say that they are satisfiable, that they have possible models. As T1 and T2 are inconsistent with each other, their models are different.

The single zero-based chain of numbers without nonstandard numbers is a single model. Therefore, if there exists a property about numbers that is independent of any theory of arithmetic, that theory of arithmetic does not logically exclude the possibility of nonstandard elements.

By Gödel’s incompleteness theorems, a theory must have statements that are independent from it unless it is either inconsistent or has a non-recursively-enumerable theorem set.

Each instance of the axiom schema of induction can be constructed from a property. The set of properties is recursively enumerable, therefore the set of instances of the axiom schema of induction is recursively enumerable.

Every theorem of Peano Arithmetic must use a finite number of axioms in its proof. We can enumerate the theorems of Peano Arithmetic by adding increasingly larger subsets of the infinite set of instances of the axiom schema of induction to our axiom set.
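The strategy described here, pairing ever-larger finite prefixes of the infinite axiom schema with ever-longer proof searches, is a standard dovetailing construction. Below is a sketch of just the fair enumeration of pairs; the actual proof checker is elided, and the interpretation of each pair is an assumption for illustration:

```python
from itertools import count, islice

def dovetail_pairs():
    # Fairly enumerate all pairs (i, j) of naturals.  Read (i, j) as:
    # "search proofs of length j using the first i instances of the
    # induction schema".  Stage s yields every pair with i + j == s,
    # so any given pair is reached after finitely many steps.
    for s in count():
        for i in range(s + 1):
            yield (i, s - i)

print(list(islice(dovetail_pairs(), 6)))
# [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]
```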

Since the theory of Peano Arithmetic has a recursively enumerable set of theorems it is either inconsistent or is independent of some property and thus allows for the existence of nonstandard elements.

• But the axiom schema of induction does not completely exclude

Eliezer isn’t using an axiom schema, he’s using an axiom of second order logic.

• I don’t see what the difference is… They look very similar to me.

At some point you have to translate it into a (possibly infinite) set of first-order axioms or you won’t be able to perform first-order resolution anyway.

• Can you give me a property P which is true along the zero-chain but necessarily false along a separated chain that is infinitely long in both directions? I do not believe this is possible but I may be mistaken.

For any number n, n-n=0.

If you have a separate chain that isn’t connected to zero, then this isn’t true.

However this statement is pretty simple and can be expressed in first order logic. I have no idea why EY believes that it requires second order logic to eliminate the possibility of other chains that aren’t derived from zero.

• I love this inquiry.

Numbers do not appear in reality, other than “mental reality.” 2+2=4 does not appear outside of the mind. Here is why:

To know that I have two objects, I must apply a process to my perception of reality. I must recognize the objects as distinct, I must categorize them as “the same” in some way. And then I apply another process, “counting.” That is applied to my collected identifications, not to reality itself, which can just as easily be seen as unitary, or sliced up in a practically infinite number of ways.

Number, then, is a product of brain activity, and the observed properties of numbers are properties of brain process. Some examples.

I put two apples in a bowl. I put two more apples in the bowl. How many apples are now in the bowl?

We may easily say “four,” because most of the time this prediction holds. However, it’s a mixing bowl, used as a blender, and what I have now is a bowl of applesauce. How many apples are in the bowl? I can’t count them! I put four apples in, and none come out! Or some smaller number than four. Or a greater number (If I add some earth, air, fire, and water, and wait a little while....)

Apples are complex objects. How about it’s two deuterium molecules? (Two deuterons each, with two electrons, electronically bound.) How about the bowl is very small, confining the molecules, reducing their freedom of movement, and their relative momentum is, transiently, close to zero?

How many deuterons? Initially, four, but … it’s been calculated that after a couple of femtoseconds, there are none, there is one excited atom of Beryllium-8, which promptly decays into two helium nuclei and a lot of energy. In theory. It’s only been calculated, it’s not been proven, it merely is a possible explanation for certain observed phenomena. Heh!

The point here: the identity of an object, the definition of “one,” is arbitrary, a tool, a device for organizing our experience of reality. What if it’s two red apples and two green apples? They don’t taste the same and they don’t look the same, at least not entirely the same. What we are counting is the identified object, “apple.” Not what exists in reality. Reality exists, not “apples,” except in our experience, largely as a product of language.

The properties of numbers, so universally recognized, follow from the tools we evolved for predicting behavior, they are certainly not absolutes in themselves.

Hah! “Certainly.” That, with “believe” is a word that sets off alarms.

• The fact that one apple added to one apple invariably gives two apples....

It’s almost a tautology. What we have is an iterated identification. There are two objects that are named “apple,” they are identical in identification, but separate and distinct. This appears in time. I’m counting my identifications. The universality of 1+1 = 2 is a product of a single brain design. For an elephant, the same “problem” might be “food plus food equals food.”

• Basically, you’re saying that for an elephant, apples behave like clouds, because the elephant has a concept of apple that is like our concept of cloud. (I hope real elephants aren’t this dumb). I like this a lot, it clarifies what I felt was missing from the cloud analogy.

Having it explicitly stated is helpful. It leads to the insight that at bottom, outside of directly useful concepts and into pure ontology/​epistemology, there are no isolated individual integers. There is only relative magnitude on a broad continuum. This makes approaching QM much simpler.

• Mmmm. This is all projected onto elephants, but maybe something like what you say. I was just pointing to a possible alternate processing mode. An elephant might well recognize quantity, but probably not through counting, which requires language. Quantity might be recognized directly, by visual comparison, for example. Bigger pile/​smaller pile. More attraction vs. less attraction, therefore movement toward bigger pile. Or smell.

• I can’t figure out why you’re getting downvotes though.

1. I’m doing something right.

2. I’m doing something wrong.

3. I write too much.

4. I don’t explain well enough.

5. It’s Thursday.

6. I have a strange name.

7. I’m Muslim.

8. I’m sensible.

9. I’m not.

10. It means nothing, which also means nothing.

11. Something else.

Thanks, chaosmosis, that was a nice thing to say.

• (So far I’ve downvoted many of your comments that contained what I believe to be confused/​mystical thinking, dubious statements of unclear meaning that I expect can’t be made clear by unpacking (whatever their poetic qualities may be); also, for similar reasons, some conversations that I didn’t like taking place, mostly with chaosmosis, where I downvoted both sides.)

Thanks, Vladimir. From where do “what I believe” and “what I expect” come? What is the source of “I didn’t like”?

Would you be more specific? It could be helpful. (Somewhere if not here?)

I “retracted” the list post because it had three net downvotes, to see what “retract” accomplishes here, and because I’m willing to retract any ineffective communication; “right” and “wrong” have almost nothing to do with it. It was still a nice thing for chaosmosis to say.

• You do write unusually long comments and it’s slightly irritating (although I have not downvoted you so far).

Yeah, thanks, Alicorn. I’ve been “conferencing”—as we used to call it in the 80s—for a long time, and I know the problem. I actually love the up/down voting system here. It gives me some fairly fast feedback as to how I’m occurring to others. I’m primarily here to learn, and learning to communicate effectively in a new context has always brought rewards to me.

Ah, one more thing I’ll risk adding here. This is a Yudkowsky thread and discussing my posting may be seriously off-topic. I need to pay more attention to context.

• I need to pay more attention to context.

LessWrong is like digression central. Someone will make a post talking about evolutionary psychology, and they’ll mention bows and arrows in an example, and then someone else will respond with a study about how bows and arrows weren’t used until X date, and then a debate will happen, and then it will go meta, and then, etc.

• I downvoted this one. HAHAHAHA. Chaotic neutral, my friend associate.

In seriousness it was lengthy and not super humorous. Also, you’re Muslim.

• Would you argue, then, that aliens or AIs might not discover the fact that 1 + 1 = 2, or even consider it a fact at all?

Okay, I don’t have to speculate or argue. I’m an alien, and I don’t consider it a “fact,” unless fact is defined to include the consequences of language. I.e., as an alien, I can see your process, and, within your process, I see that “1 + 1 = 2” is generally useful to your survival. That I’ll accept as a fact. However, if you believe that 1 + 1 = 2 is a “fact,” such that 1 + 1 <> 2 is necessarily “false,” I think you might be unnecessarily limited, harming long-term survival.

It’s also useful to my survival, normally. Sometimes not. Sometimes 1 + 1 = 1, or 1 + 1 = 0, work better. I’m not kidding.

The AI worth thinking about is one which is greater than human. That a human can recognize the limitations of fixed arithmetic indicates to me that a super-human AI would be able to do that, or more.

• 2 Nov 2012 1:32 UTC

It seems to me like there are at least two possible definitions of “reality”. One is “the set of all true statements” (such as “the sky is blue”, “salt dissolves in water”, and “2 + 2 = 4”), and the other is “the set of all ‘aspects of the world’”—things like “in world W, there is an electron at point p and time t”, whose truth could, in principle, be varied independently of all other “aspects of the world”.

The choice of definition seems more or less arbitrary.

• Okay, I’ll explore this. “Reality” is independent of statements and language. An “aspect” is a point of view, or a thing seen from some point of view.

To be fair, however, I think Reality is not susceptible to ordinary definition, it can only be pointed to, hinted at. Definitions are indeed arbitrary, reality is not.

Or, stated another way, reality may be defined as that of which there is only one, that is not owned by us, but owns us, that existed when we were not, that will continue to exist when we are no longer. Individually or all of us or the entire universe.

On the other hand, reality does not exist in the same way as we exist or that things exist. Every statement that attempts to capture reality, at least so far as I’ve seen, fails.

Reality is assumed to exist in applying the scientific method. We trust that there is a single reality, not many.

Understandings of reality are many. Opinions are many. Systems of language are many.

Reality is neither true nor false; rather, we create true and false as relationships we invent between statements and reality. These inventions are not actually true or false; they are useful or not-useful. A “truth” can be very useful for a time and turn out to be limiting as the scope of application of a statement is widened.

• Humans need fantasy to be human.

“Tooth fairies? Hogfathers? Little—”

Yes. As practice. You have to start out learning to believe the little lies.

“So we can believe the big ones?”

Yes. Justice. Mercy. Duty. That sort of thing.

“They’re not the same at all!”

You think so? Then take the universe and grind it down to the finest powder and sieve it through the finest sieve and then show me one atom of justice, one molecule of mercy.

• Susan and Death, in Hogfather by Terry Pratchett

So far we’ve talked about two kinds of meaningfulness and two ways that sentences can refer; a way of comparing to physical things found by following pinned-down causal links, and logical reference by comparison to models pinned-down by axioms. Is there anything else that can be meaningfully talked about? Where would you find justice, or mercy?

• (Note: this is my first post. I may be wrong, and if so am curious as to how. Anyway, I figure it’s high time that my beliefs stick their neck out. I expect this will hurt, and apologize now should I later respond poorly.)

This may be the answer to a different question, but...

I play lots of role-playing games. Role-playing games are like make-believe; events in them exist in a shared counter-factual space (in the players’ imagination). Make-believe has a problem: if two people imagine different things, who is right? (This tends to end with a bunch of kids arguing about whether the fictional T-Rex is alive or dead).

Role-playing games solve this problem by handing authority over various facets of the game to different things. The protagonists are controlled by their respective players, the results of choices by dice and rules, and most of the fictional world by the Game Master.*

So, in a role-playing game, when you ask what is true[RPG], you should direct that question to the appropriate authority. Basically, truth[RPG] is actually canon (in the fandom sense; TV Tropes’ page is good, but comes with the usual where-did-my-evening-go caveats).

Similarly, if we ask “where did Luke Skywalker go to preschool?”, we’re asking a question about canon.

That said, even canon needs to be internally consistent. If someone with authority were to claim that Tatooine has no preschools, then we can conclude that Luke Skywalker didn’t go to preschool. If an authority claims two inconsistent things, we can conclude that the authority is wrong (namely, in the mathematical sense the canon wouldn’t match any possible model).

I’ve long felt that ideas like morality and liberty are a variety of canon.

Specifically, you can have authorities (a religion or philosopher telling you stuff), and those authorities can be provably wrong (because they said something inconsistent), but these ideas exists in a kind of shared imaginary space. Also, people can disagree with the canon and make up their own ideas.

Now, that space is still informed by reality. Even in fiction, we expect gravity to drop off as the square of distance, and we expect solid objects to be unable to pass through each other.** With ideas, we can state that they are nonsensical (or, at minimum, not useful) if they refer to real things which don’t exist. A map of morality is a map of a non-real thing, but morality must interface with reality to be useful, so anywhere the interface doesn’t line up with reality, morality (or its map) is wrong.

*This is one possible breakdown. There are many others.

**In most games/stories, anyway. At first glance I’d expect morality to be better bound to reality, but I suppose there have been plenty of people whose moral systems boiled down to “don’t do anything Ma’at would disapprove of”, backed up with concepts like the literal weight of sin (vs. the weight of a feather).

It so happens that the three “big lies” Death mentions are all related to morality/ethics, which is a hard question. But let me take the conversation and change it a bit:

“So we can believe the big ones?”

Yes. Anger. Happiness. Pain. That sort of thing.

“They’re not the same at all!”

You think so? Then take the universe and grind it down to the finest powder and sieve it through the finest sieve and then show me one atom of happiness, one molecule of pain.

In this version, the final argument is still correct—if I take the universe, grind it down to powder, and sieve it, I will not be able to say “woo! that carbon atom is an atom of happiness”. Since the penultimate question of this meditation was “Is there anything else”, at least I can answer that question.

Clearly, we want to talk about happiness for many reasons—even if we do not value happiness in itself (for ourselves or others), predicting what will make humans happy helps us know things about the world. Therefore, it is useful to find a way that allows us to talk about happiness. Happiness, though, is complicated, so let us put it aside for a minute to ponder something simpler: a solar system. I will simplify here: a solar system is one star and a bunch of planets orbiting it. Though solar systems affect each other through gravity or radiation, most of the effects of the relative motions inside a solar system come from inside itself, and this pattern repeats itself throughout the galaxy. Much like happiness, being able to talk about solar systems is useful—though I do not particularly value solar systems in and of themselves, it’s useful to have a concept of “a solar system”, which describes things with commonalities, and allows me to generalize.

If I grind the universe, I cannot find an atom that is a solar system atom—grinding the universe down destroys the “solar system” useful pattern. For bounded minds, having these patterns leads to good predictive strength without having to figure out each and every atom in the solar system.

In essence, happiness is no different from a solar system—both are crude words to describe common patterns. It’s just that happiness is a feature of minds (mostly human minds, but we talk about how dogs or lizards are happy, sometimes, and it’s not surprising—those minds are related algorithms). I cannot say where every atom is in the case of a human being happy, but some atom configurations are happy humans, and some are not.

So: at the very least, happiness and solar systems are part of the causal network of things. They describe patterns that influence other patterns.

Mercy is easier than justice and duty. Mercy is a specific configuration of atoms (a human) behaving in a specific way: even though the human feels entitled to cause another human hurt (“feeling entitled” is a set of specific human-mind-configurations, regardless of whether “entitlement” actually exists), they do not do so (for specific reasons, etc. etc.). In short, mercy describes specific patterns of atoms, and is part of causal networks.

Duty and justice—I admit that I’m not sure what my reductionist metaethics are, and so it’s not obvious what they mean in the causal network.

• We could make it even easier :P

You say that a tiger has stripes, but I looked at some tiger atoms and didn’t see any stripes.

The harder question is what is a valid way of figuring out the important properties of the system.

• The statement that the world is just is a lie. There exist possible worlds that are just—for instance, these worlds would not have children kidnapped and forced to kill—and ours is not one of them.

Thus, justice is a meaningful concept. Justice is a concept defined in terms of the world (pinned-down causal links) and also irreducibly normative statements. Normative statements do not refer to “the world”. They are useful because we can logically deduce imperatives from them. “If X is just, then do X.” is correct, that is:

Do the right thing.

• I am not entirely sure how you arrived at the conclusion that justice is a meaningful concept. I am also unclear on how you know the statement “If X is just, then do X” is correct. Could you elaborate further?

In general, I don’t think it is a sufficient test for the meaningfulness of a property to say “I can imagine a universe which has/​lacks this property, unlike our universe, therefore it is meaningful.”

• the statement “If X is just, then do X”

That’s an instruction, not a statement.

• I did not intend to explain how i arrived at this conclusion. I’m just stating my answer to the question.

Do you think the statement “If X is just, then do X” is wrong?

• Like army1987 notes, it is an instruction and not a statement. Considering that, I think “if X is just, then do X” is a good imperative to live by, assuming some good definition of justice. I don’t think I would describe it as “wrong” or “correct” at this point.

• OK. Exactly what you call it is unimportant.

What matters is that it gives justice meaning.

• It may be incomplete. Do you have a place for Mercy?

• The reason I’m not making distinctions among different moral words, though such distinctions exist in language, is that it seems the only new problem created by these moral words is understanding morality. Once you understand right and wrong, just and unjust can be defined just like you define regular words, even if something can be just but immoral.

• In general, I don’t think it is a sufficient test for the meaningfulness of a property to say “I can imagine a universe which has/​lacks this property, unlike our universe, therefore it is meaningful.”

Um, mathematics.

• I can’t imagine a universe without mathematics, yet I think mathematics is meaningful. Doesn’t this mean the test is not sufficient to determine the meaningfulness of a property?

Is there some established thinking on alternate universes without mathematics? My failure to imagine such universes is hardly conclusive.

• Sorry, misread what you wrote in the grand parent. I agree with you.

• I would find them under the category of patterns.

A neural network is very good at recognising patterns; and human brains run on a neural network architecture. Given a few examples of what a word does or does not mean, we can quickly recognise the pattern and fit it into our vocabulary. (Apparently, this can be used in language classes; the teacher will point to a variety of objects, indicating whether they are or are not vrugte, for example; and it won’t take that many examples before the student understands that vrugte means fruit but not vegetables).
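The few-examples word learning described here can be illustrated with perhaps the simplest neural-network-style learner, a perceptron, which nudges a decision boundary each time an example is misclassified. The features, data, and the vrugte framing below are invented purely for illustration:

```python
# A minimal perceptron: learns a yes/no concept ("is vrugte" or not) from
# labeled examples, much like the teacher pointing at objects.

def train_perceptron(examples, labels, epochs=50):
    # examples: list of feature tuples; labels: +1 (is vrugte) / -1 (is not)
    w = [0.0] * len(examples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * score <= 0:  # misclassified: nudge the boundary toward x
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Invented toy features: (sweetness, roundness).
data = [(0.9, 0.8), (0.8, 0.9), (0.1, 0.7), (0.2, 0.6)]
labels = [1, 1, -1, -1]  # say, apples and grapes vs. potatoes and onions
w, b = train_perceptron(data, labels)
```

Because the toy data is linearly separable, the perceptron convergence theorem guarantees it eventually classifies every training example correctly; a handful of examples suffices, which is the point of the vrugte anecdote.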

Justice and mercy are not patterns of objects, but rather patterns of action. The man killed his enemy, but has a wife and children to support; sending him to Death Row might be just, but letting him have some way of earning money while imprisoned might be merciful. Similarly, happy, sad, and angry are emotional patterns; a person acts in this way when happy, and acts in that way when sad.

• I was going to say that yes, I think there is another kind of thing that can be meaningfully talked about, and “justice” and “mercy” and “duty” have something to do with that sort of thing, but a more prototypical example would be “This court has jurisdiction”. Especially if many experts were of the opinion that it didn’t, but the judge disagreed, but the superior court reversed her, and now the supreme court has decided to hear the case.

But then I realized that there was something different about that kind of “truth”: I would not want an AI to assign a probability to the proposition The court did, in fact, have jurisdiction (nor to, oh, It is the duty of any elected official to tell the public if they learn about a case of corruption, say). I think social constructions can technically be meaningfully talked about among humans, and they are important as hell if you want to understand human communication and behavior, but I guess on reflection I think that the fact that I would want an AI to reason in terms of more basic facts is a hint that if we are discussing epistemology, if we’re discussing what sorts of thingies we can know about and how we can know about them, rather than discussing particular properties of the particularly interesting thingies called humans, then it might be best to say that “The judge wrote in her decision that the court had jurisdiction” is a meaningful statement in the sense under consideration, but “The court had jurisdiction” is not.

• I’ve thought about this for a while, and I feel like you can replace “Fantasy” and “Lies” with “Patterns” in that dialogue, and have it make sense, and it also appears to be an answer to your questions. That being said, it also feels like a sort of a cached thought, even though I’ve thought about it for a while. However, I can’t think of a better way to express it and all of the other thoughts I had appeared to be significantly lower caliber and less clear.

Considering that, I should then ask “Why isn’t ‘Patterns’ the answer?′

• “Justice” and “mercy” can be found by looking at people, and in particular how people treat each other. They’re physical things, although they’re really complicated kinds of physical things.

• In particular, the kind of thing that is destroyed when you grind it down into powder.

Humans need fantasy to be human.

“Tooth fairies? Hogfathers? Little—”

Yes. As practice. You have to start out learning to believe the little lies.

“So we can believe the big ones?”

Yes. Cars. Chairs. Bicycles. That sort of thing.

“They’re not the same at all!”

You think so? Then take the universe and grind it down to the finest powder and sieve it through the finest sieve and then show me one atom of car, one molecule of bicycle.

Susan and Death, in Hogfather by Terry Pratchett

Same thing.

• They exist in the same sense that numbers exist, or that meaningful existence exists, or that meaningfulness exists.

Once you grind the universe into powder, none of those things exists anymore.

• Justice, mercy, duty, etc are found by comparison to logical models pinned down by axioms. Getting the axioms right is damn tough, but if we have a decent set we should be able to say “If Alex kills Bob under circumstances X, this is unjust.” We can say this the same way that we can say “Two apples plus two apples is four apples.” I can’t find an atom of addition in the universe, and this doesn’t make me reject addition.

Also, the widespread convergence of theories of justice on some issues (e.g., rape is unjust) suggests that theories of justice are attempting to use their axioms to pin down something that is already there. Moral philosophers are more likely to say “My axioms are leading me to conclude rape is a moral duty, where did I mess up?” than “My axioms are leading me to conclude rape is a moral duty, therefore it is.” This also suggests they are pinning down something real with axioms. If it were otherwise, we would expect the second conclusion.

• “theories of justice are attempting to use their axioms to pin down something that is already there”

So in other words, duty, justice, mercy—morality words—are basically logical transformations that transform the state of the universe (or a particular circumstance) into an ought statement.

Just as we derive valid conclusions from premises using logical statements, we derive moral obligations from premises using moral statements.

The term ‘utility function’ seems less novel now (novel as in, a departure from traditional ethics).

• Not quite. They don’t go all the way to completing an ought statement, as this doesn’t solve the Is/​Ought dichotomy. They are logical transformations that make applying our values to the universe much easier.

“X is unjust” doesn’t quite create an ought statement of “Don’t do X”. If I place value on justice, that statement helps me evaluate X. I may decide that some other consideration trumps justice. I may decide to steal bread to feed my starving family, even if I view the theft as unjust.

• This is my view.

• In people’s brains, and in papers written by philosophy students.

• The map is not the territory. We discuss reality on many levels, but there is only one underlying level. Justice, duty and the like are abstractions; we use the same symbol in multiple places to define certain patterns. You don’t get two identical ‘happinesses’ the way you get two identical atoms. It’s useful for us, though, to talk about this abstraction at the macro level and not the micro, and it’s meaningful, given that we’re assuming the same axioms. I think stuff that causes other stuff is reality, and if we assume certain axioms that correspond to reality, any new truthful statements and concepts deduced are meaningful because they also correspond to reality. Everything is covered there: things that exist, and things we think exist.

• Mathematics is a system for building abstract statements that can be mapped to reality. The axioms of a mathematical (or other axiomatic) model define the conditions that a system (such as a pair of apples in the real universe) must satisfy in order for the abstract model to be applicable as well as providing a schema for mapping the abstract model to the concrete system.

There are other kinds of abstractions we could meaningfully talk about and they need not be defined as precisely as an axiomatic model like mathematics. An abstract model could be defined as a relationship between abstract ideas that can be mapped to a concrete system by pinning down each of its constituent abstractions to a concrete member of the system.

An abstract model may be predictive, meaning it has an if-then structure: if some relation between abstract members holds then the model predicts that some other relation will also hold. Such a predictive model may be true or false for any given concrete system that it is applied to. The standard we expect of a mathematical model is that it is valid (true for all concrete systems that it can be applied to), yet an abstract model need not meet so high a standard for it to be useful. We can imagine much fuzzier abstract models that are true only some of the time but can be useful by providing general-purpose rules that allow us to infer information about the actual state of a concrete system that matches the criteria of the model. If we know the probability of an abstract predictive model being correct we can use it wherever it is applicable to inform the construction of causal models. If we consider causal models to operate in the realm of first order logic where we can quantify over and describe relationships between the basic units of cause and effect in our universe, an abstract model lives in the realm of higher order logic and can describe the relationships between causal relations and lower order abstract models.

An abstract model need not be predictive to be useful. It may be defined to be applicable only where the entire relation it describes holds. In this case it simply acts as a reusable symbol that is useful for representing a model of a concrete system more compactly as in the way a function in a computer program factors out reusable logic, or a word in human language factors out a reusable abstract idea.

Justice and Mercy are both fuzzy abstract models. To the extent that people agree on their definitions they are meaningful for communicating a particular relationship between pinned-down abstractions. For example, Justice may be defined (simplistically) as describing a relationship between human deeds and subsequent events such that deeds labelled ‘bad’ result in punishment and deeds labelled ‘good’ result in reward. The particular deed and subsequent event as well as the definitions of good, bad, punishment and reward are all component abstractions of the abstract model called Justice which must be pinned down in a concrete system in order for the concept of Justice to be applied in that system.

Justice may also be used as a predictive model if you formulate it as a prediction from a good/​bad deed to a future reward/​punishment event (or vice versa) and it would be useful for constructing a causal model of any particular concrete system to the extent that this predicted relationship matches the actual underlying nature of that system.

Note: none of this is based on any formal study of logic outside of this Epistemology sequence so some of the terminology in this post was invented by me just now.
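The predictive reading of Justice described above can be sketched in code. This is a minimal, assumption-laden illustration (the `PredictiveModel` class, its fields, and the 0.7 reliability figure are all invented for the example, not part of the comment): an abstract predictive model is an if-then rule plus an estimated probability that the rule holds in a concrete system.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class PredictiveModel:
    """An if-then abstraction: if `condition` holds of a concrete
    situation, predict that `consequence` will also hold, with some
    estimated reliability (which need not be 1.0 to be useful)."""
    name: str
    condition: Callable[[dict], bool]
    consequence: Callable[[dict], bool]
    reliability: float  # estimated P(consequence | condition)

    def applies(self, situation: dict) -> bool:
        return self.condition(situation)

    def predict(self, situation: dict) -> float:
        # Probability the model assigns to its consequence here.
        return self.reliability if self.applies(situation) else 0.0

# A (simplistic) Justice model: bad deeds are followed by punishment.
justice = PredictiveModel(
    name="justice",
    condition=lambda s: s.get("deed") == "bad",
    consequence=lambda s: s.get("outcome") == "punishment",
    reliability=0.7,  # assumed figure, purely for illustration
)

situation = {"deed": "bad"}
print(justice.predict(situation))  # prints 0.7
```

A causal model of the concrete system could then weight this prediction by the model’s reliability, which is the sense in which a fuzzy abstract model “informs the construction of causal models.”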

• Right, response to the meditation:

It gets rather difficult talking about human mental constructs. Let me begin by asking myself where I would find justice/​mercy; almost immediately (which means that I need to do some more thinking) I find that I think of human emotional constructs as a side effect of society and its group mindset:

You think so? Then take the universe and grind it down to the finest powder and sieve it through the finest sieve and then show me one atom of justice, one molecule of mercy.

• Susan and Death, in Hogfather by Terry Pratchett

Grinding down the universe to its component molecules would completely fail to find any number of things that humanity finds important; humanity itself, for one. To me rationalism is, above all, the study of the universe and what it contains. And yet when it comes to most psychological phenomena the models start to break down. Does this mean that a more refined model would be equally unable to describe the phenomena? Not necessarily. Because as rationalists one of our key teachings is that we can observe something by studying its causes and effects; justice and mercy exist insofar as we as humans can comprehend their nature. They exist because we can determine the differences between a universe where they exist and the ones where they don’t.

-reposted in the right section

• ...Then take the universe and grind it down to the finest powder and sieve it through the finest sieve and then show me one atom of justice, one molecule of mercy.

Then take the universe and grind it down to the finest powder and sieve it through the finest sieve and then show me one atom of temperature, one molecule of pressure.

• So far we’ve talked about two kinds of meaningfulness and two ways that sentences can refer; a way of comparing to physical things found by following pinned-down causal links, and logical reference by comparison to models pinned-down by axioms. Is there anything else that can be meaningfully talked about? Where would you find justice, or mercy?

You find them inside counterfactual statements about the reactions of an implied hypothetical representative human, judging under implied hypothetical circumstances in which they have access to all relevant knowledge. There is clearly justice if a wide variety of these hypothetical humans agree that there is, under a wide variety of these hypothetical circumstances; there is clearly not justice if they agree that there is not. If the hypothetical people disagree with each other, then the definition fails.

Talking about things like justice, mercy and duty is meaningful, but the meanings are intermediated by big, complex webs of abstractions which humans keep in their brains, and the algorithms people use to manipulate those webs. They’re unambiguous only to the extent to which people successfully keep those webs in sync with each other. In practice, our abstractions mainly work by combining bags of weak classifiers and feature-weighted similarity to positive and negative examples. This works better for cases that are similar to the training set, worse for cases that are novel and weird, and better for simpler abstractions and abstractions built on simpler constituents.
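The last sentence above makes a concrete computational claim. Here is a minimal sketch of what “bags of weak classifiers and feature-weighted similarity to positive and negative examples” could look like; every feature name, rule, and weight below is an invented illustration, not anything from the comment:

```python
def weak_classifier_score(case, classifiers):
    """Average the votes of many individually unreliable rules."""
    votes = [rule(case) for rule in classifiers]
    return sum(votes) / len(votes)

def similarity(case, example, weights):
    """Feature-weighted overlap between two cases, in [0, 1]."""
    total = sum(weights.values())
    shared = sum(w for feature, w in weights.items()
                 if case.get(feature) == example.get(feature))
    return shared / total

# Weak rules for a toy concept of "fair": each is right only most of the time.
classifiers = [
    lambda c: c.get("consented", False),
    lambda c: not c.get("coerced", True),
    lambda c: c.get("shares_equal", False),
]

weights = {"consented": 2.0, "coerced": 2.0, "shares_equal": 1.0}
positive_example = {"consented": True, "coerced": False, "shares_equal": True}

case = {"consented": True, "coerced": False, "shares_equal": False}
score = 0.5 * weak_classifier_score(case, classifiers) \
      + 0.5 * similarity(case, positive_example, weights)
```

As the comment predicts, a scheme like this behaves well on cases near its examples and degrades on novel or weird cases, since both the rules and the similarity measure were tuned to the familiar ones.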

• Why couldn’t the hypothetical omniscient people inside the veil of ignorance decide that justice doesn’t exist? Or if they could, how does that paragraph go towards answering the meditation? What distinguishes them from the hypothetical death who looks through everything in the universe to try to find mercy? Aren’t you begging the question here?


• Justice—The quality of being “just” or “fair”. Humans call a situation fair when everyone involved is happy afterwards, without having had their desires forcibly thwarted (e.g. being strapped into a chair and hooked into a morphine drip) along the way.

Mercy—Compassionate or kindly forbearance shown toward an offender, an enemy, or other person in one’s power. Humans choose to engage in actions characterized this way on a daily basis.

Duty—Something that one is expected or required to do by moral or legal obligation. Legal duties certainly exist; Earth is not an anarchy.

Justice, mercy, and duty are only words. The important question to ask is whether or not they are useful. I certainly think they are; I use each of those words at least once a week. Once the symbols have been replaced by substance, it is clear that we should not be looking for those things in single atoms, but very large collections of them we call “humans”, or slightly smaller (but still very large) collections we call “human brains”.

And as far as we know, atoms are not arranged in configurations that have the properties we ascribe to the tooth fairy.

• This depends a lot on your definition of meaningfulness. Justice and mercy are subjective values, not predictive or descriptive statements about reality. But in my opinion subjective values are meaningful; in fact, they’re meaning itself, and the only reason I consider descriptive statements about reality to be meaningful is that they help me achieve subjective values. I believe that subjective values are objectively valuable, or that the concept of objective value would make no sense, whichever you prefer. Changes in my beliefs cannot change my fundamental values, and my fundamental values are motivational in a way that beliefs are not, so I consider the fundamental values to be of prior importance.

RE: Stability of logic. Logic might not be stable or it might change later, we don’t have any way of knowing and the question isn’t useful and believing in logic makes me happy and gives me rewards.

• the concept of objective value would make no sense,

If your moral values aren’t objective, why would anyone else be beholden to them? And how could they be moral if they don’t regulate others’ behaviour?

Logic might not be stable or it might change later, we don’t have any way of knowing

Why would it change, absent our changing the axioms? Do you think it is part of the universe?

• If your moral values aren’t objective, why would anyone else be beholden to them? And how could they be moral if they don’t regulate others’ behaviour?

To the first question: Possibly because your moral values arose from a process that was almost exactly the same for other individuals, and as such it’s reasonable to infer that their moral values might be quite similar rather than completely alien?

To the second: “And how could they be (blank?) if they don’t regulate others’ behaviour?”, by which I mean, what do you mean by “moral”? What makes a value a “moral” value or not in this context?

I’m not sure why it should be necessary for a moral value to regulate behaviour across individuals in order to be valid.

• Possibly because your moral values arose from a process that was almost exactly the same for other individuals,

Why describe them as subjective when they are intersubjective?

I’m not sure why it should be necessary for a moral value to regulate behaviour across individuals in order to be valid.

It would be necessary for them to be moral values and not something else, like aesthetic values. Because morality is largely to regulate interactions between individuals. That’s its job. Aesthetics is there to make things beautiful, logic is there to work things out...

• morality is largely to regulate interactions between individuals. That’s its job.

I don’t want to get into a discussion of this, but if there’s an essay-length-or-less explanation you can point to somewhere of why I ought to believe this, I’d be interested.

• I don’t see that “morality is largely to regulate interactions between individuals” is contentious. Did you have another job in mind for it?

• Well, since you ask: identifying right actions.

But, as I say, I don’t want to get into a discussion of this.

I certainly agree with you that if there exists some thing whose purpose is to regulate interactions between individuals, then it’s important that that thing be compelling to all (or at least most) of the individuals whose interactions it is intended to regulate.

• Well, since you ask: identifying right actions.

Is that an end in itself?

I certainly agree with you that if there exists some thing whose purpose is to regulate interactions between individuals, then it’s important that that thing be compelling to all (or at least most) of the individuals whose interactions it is intended to regulate.

Well, the law compels those who aren’t compelled by exhortation. But laws need justification.

• Is that an end in itself?

Not for me, no.

Is regulating interactions between individuals an end in itself?

• Do you think it is pointless? Do you think it is a prelude to something else?

• I think identifying right actions can be, among other things, a prelude to acting rightly.

Is regulating interactions between individuals an end in itself?

• Is that an end in itself?

What does that concept even mean? Are you asking if there’s a moral obligation to improve one’s own understanding of morality?

Well, the law compells those who arent compelled by exhortation. But laws need justiication.

The justification for laws can be a combination of pragmatism and the values of the majority.

• Does it serve a purpose by itself? Judging actions to be right or wrong is usually the prelude to handing out praise and blame, reward and punishment.

The justification for laws can be a combination of pragmatism and the values of the majority.

If the values of the majority aren’t justified, how does that justify laws?

• Judging actions to be right or wrong is usually the prelude to handing out praise and blame, reward and punishment.

Also, sometimes it’s a prelude to acting rightly and not acting wrongly.

• Nope. An agent without a value system would have no purpose in creating a moral system. An agent with one might find it intrinsically valuable, but I personally don’t. I do find it instrumentally valuable.

Laws are justified because subjective desires are inherently justified because they’re inherently motivational. Many people reverse the burden of proof, but in the real world it’s your logic that has to justify itself to your values rather than your values that have to justify themselves to your logic. That’s the way we’re designed and there’s no getting around it. I prefer it that way and that’s its own justification. Abstract lies which make me happy are better than truths that make me sad because the concept of better itself mandates that it be so.

• Clarification: From the perspective of a minority, the laws are unjustified. Or, they’re justified, but still undesirable. I’m not sure which. Justification is an awkward paradigm to work within because you haven’t proven that the concept makes sense and you haven’t precisely defined the concept.

• Justification is an awkward paradigm to work within because you haven’t proven that the concept makes sense.

Proof is a strong form of justification. If I don’t have justification, you don’t have proof.

• From the perspective of a minority, the laws are unjustified.

Why would the majority regard them as justified just because they happen to have them?

• They’re justified in that there are no arguments which adequately refute them, and that they’re motivated to take those actions. There are no arguments which can refute one’s motivations because facts can only influence values via values. Motivations are what determine actions taken, not facts. That is why perfectly rational agents with identical knowledge but different values would respond differently to certain data. If a babykiller learned about a baby they would eat it, if I learned about a baby I would give it a hug.

In terms of framing, it might help you understand my perspective if you try not to think of it in terms of past atrocities. Think in terms of something more neutral. The majority wants to make a giant statue out of purple bubblegum, but the minority wants to make a statue out of blue cotton candy, for example.

• I need a definition of justified before I go on

Well, it’s like proof, but weaker.

They’re justified in that there are no arguments which adequately refute them, and that they’re motivated to have them.

Lack of counterargument is not justification, nor is motivation from some possibly irrational source.

The majority wants to make a giant statue out of purple bubblegum, but the minority wants to make a statue out of blue cotton candy, for example.

Or the majority want to shoot all left handed people, for example. Majority verdict isn’t even close to moral justification.

• Lack of counterargument is not justification, nor is motivation from some possibly irrational source.

In utilitarian terms, motivation is not “I’m motivated today!”. The utilitarian meaning of motivation is that a program which displays “Hello World!” on a computer screen has as its (exclusive) motivation the exact process which makes it display those words. The motivation of this program is imperative and ridiculously simple and very blunt—it’s the pattern we’ve built into the computer to do certain things when it gets certain electronic inputs.

Motivations are those core things which literally cause actions, whether it’s a simple reflex built into the nervous system which always causes some jolt of movement whenever a certain thing happens (such as being hit on the knee) or a very complex value system sending interfering signals within trillions of cells causing a giant animal to move one way or another depending on the resulting outcome.

• I know.

• Motivation is the only thing that causes actions; it’s the only thing that it makes sense to talk about in reference to prescriptive statements. Why do you define motivation as irrational? At worst, it should be arational. Even then, I see motivation as its own justification and indeed the ultimate source of all justifications for belief in truth, etc. Until you can solve every paradox ever, you need to either embrace nihilism or embrace subjective value as the foundation of justification.

The majority verdict isn’t moral justification because morality is subjective. But for people within the majority, their decision makes sense. If I were in the community, I would do what they do. I believe that it would be morally right for me to do so. Values are the only source of morality that there is.

• Motivation is the only thing that causes actions; it’s the only thing that it makes sense to talk about in reference to prescriptive statements.

That doesn’t follow. If it is the only thing that causes actions, then it is relevant to why, as a matter of fact, people do what they do—but that is description, not prescription. Prescription requires extra ingredients.

Why do you define motivation as irrational?

I said that as a matter of fact it is not necessarily rational. My grounds are that you can’t always explain your motivations on a rational basis.

Even then, I see motivation as its own justification and indeed the ultimate source of all justifications for belief in truth, etc.

It may be the source of caring about truth and rationality. That does not make it the source of truth and rationality.

Until you can solve every paradox ever, you need to either embrace nihilism or embrace subjective value as the foundation of justification.

That doesn’t follow. I could embrace non-evaluative intuitions, for instance.

The majority verdict isn’t moral justification because morality is subjective.

Subjective morality cannot justify laws that apply to everybody.

But for people within the majority, their decision makes sense.

It may make sense as a set of personal preferences, but that doesn’t justify it being binding on others.

If I were in the community, I would do what they do.

Then you would have colluded with atrocities in other historical societies.

Values are the only source of morality that there is.

Individual values do not sum to group morality.

• That doesn’t follow. If it is the only thing that causes actions, then it is relevant to why, as a matter of fact, people do what they do—but that is description, not prescription. Prescription requires extra ingredients.

In that case, prescription is impossible. Your system can’t handle the is-ought problem.

I said that as a matter of fact it is not necessarily rational. My grounds are that you can’t always explain your motivations on a rational basis.

Values are rationality neutral. If you don’t view motivations and values as identical, explain why?

That doesn’t follow. I could embrace non-evaluative intuitions, for instance.

These intuitions are violated by paradoxes such as the problem of induction or the fact that logical justification is infinitely regressive (turtles all the way down). Your choice is nihilism or an arbitrary starting point, but logic isn’t a valid option.

Subjective morality cannot justify laws that apply to everybody.

Sure. Technically this is false if everyone is the same or very similar but I’ll let that slide. Why does this invalidate subjective morality?

It may make sense as a set of personal preferences, but that doesn’t justify it being binding on others.

Why would I be motivated by someone else’s preferences? The only thing relevant to my decision is me and my preferences. The fact that this decision affects other people is irrelevant; literally every decision affects other people.

Then you would have colluded with atrocities in other historical societies.

I value human life, you are wrong.

Individual values do not sum to group morality.

Group morality does not exist.

• In that case, prescription is impossible. Your system can’t handle the is-ought problem.

Something is not impossible just because it requires extra ingredients.

Values are rationality neutral. If you don’t view motivations and values as identical, explain why?

I don’t care about the difference between irrational and arational, they’re both non-rational.

These intuitions are violated by paradoxes such as the problem of induction or the fact that logical justification is infinitely regressive (turtles all the way down).

Grounding out in an intuition that can’t be justified is no worse than grounding out in a value that can’t be justified.

Your choice is nihilism or an arbitrary starting point, but logic isn’t a valid option.

You are (trying to) use logic right now. How come it works for you?

Why does this invalidate subjective morality?

Because morality needs to be able to tell people why they should not always act on their first-order impulses.

Why would I be motivated by someone else’s preferences?

I didn’t say you should. If you have morality as a higher-order preference, you can be persuaded to override some of your first-order preferences in favour of morality, which is not subjective, and therefore not just someone else’s values.

The only thing relevant to my decision is me and my preferences.

You’ve admitted that preferences can include empathy. They can include respect for universalisable moral principles too. “My preferences” does not have to equate to “selfish preferences”.

The fact that this decision affects other people is irrelevant; literally every decision affects other people.

How does choosing vanilla over chocolate chip affect other people?

I value human life, you are wrong.

You need to make up your mind whether you value human life more or less than going along with the majority.

Group morality does not exist

That claim needs justification.

• Something is not impossible just because it requires extra ingredients.

How do you generate moral principles that conflict with desire? How do you justify moral principles that don’t spring from desire? Why would anyone adopt these moral principles or care what they have to say? How do you overcome the is-ought gap?

Give me a specific example of an objective system that you think is valid and that overcomes the is-ought gap.

Because morality needs to be able to tell people why they should not always act on their first-order impulses.

Mine can do that. Some impulses contradict other values. Some values outweigh others. Sometimes you make sacrifices now for later gains.

I don’t know why you believe morality needs to be able to restrict impulses, either. Morality is a guide to action. If that guide to action is identical to your inherent first-order impulses, all the better for you.

I didn’t say you should. If you have morality as a higher-order preference, you can be persuaded to override some of your first-order preferences in favour of morality, which is not subjective, and therefore not just someone else’s values.

Let me rephrase. How can you generate motivational force from abstract principles? Why does morality matter if it has nothing to do with our values?

You’ve admitted that preferences can include empathy. They can include respect for universalisable moral principles too. “My preferences” does not have to equate to “selfish preferences”.

Your preferences might include this, yes. I think that would be a weird thing to have built in your preferences and that you should consider self-modifying it out. Regardless, that would be justifying a belief in a universalisable moral principle through subjective principles. You’re trying to justify that belief through nothing but logic, because that is the only way you can characterize your system as truly objective.

How does choosing vanilla over chocolate chip affect other people?

There are fewer vanilla chips for other people. It affects your diet, which affects the way you will behave. It will increase your happiness if you value vanilla chips more than chocolate ones. If someone values your happiness, they will be happy you ate vanilla chips. If someone hates when you’re happy, they will be sad.

You need to make up your mind whether you value human life more or less than going along with the majority.

I don’t value going along with the majority in and of itself. If I’m a member of the majority and I have certain values then I would act on those values, but my status as a member of the majority wouldn’t be relevant to morality.

That claim needs justification.

Sure. Pain and pleasure and value are the roots of morality. They exist only in internal experiences. My pain and your pleasure are not interchangeable because there is no big Calculating utility god in the sky to aggregate the content of our experiences. Experience is always individual and internal and value can’t exist outside of experience and morality can’t exist outside of value. The parts of your brain that make you value certain experiences are not connected to the parts of my brain that make me value certain experiences, which means the fact that your experiences aren’t mine is sufficient to refute the idea that your experiences would or should somehow motivate me in and of themselves.

• How do you generate moral principles that conflict with desire?

Did you notice my references to “first order” and “higher order”?

How do you overcome the is-ought gap?

By using rational-should as an intermediate.

Mine can do that. Some impulses contradict other values. Some values outweigh others. Sometimes you make sacrifices now for later gains.

Sometimes you need to follow impersonal, universalisable...maybe even objective...moral reasoning?

I don’t know why you believe morality needs to be able to restrict impulses, either.

I don’t know why you think “do what thou wilt” is morality. It would be like having a system of logic that can prove any claim.

Morality is a guide to action. If that guide to action is identical to your inherent first-order impulses, all the better for you.

“All the better for me” does not mean “optimal morality”. The job of logic is not to prove everything I happen to believe, and the job of morality is not to confirm all my impulses.

Let me rephrase. How can you generate motivational force from abstract principles?

Some people value reason, and the rest have value systems tweaked by the threat of punishment.

Why does morality matter if it has nothing to do with our values?

You think no one values morality?

Your preferences might include this, yes. I think that would be a weird thing to have built in your preferences and that you should consider self-modifying it out.

What’s weird? Empathy? Morality? Rationality?

You’re trying to justify that belief through nothing but logic, because that is the only way you can characterize your system as truly objective.

You say that like it’s a bad thing.

There are fewer vanilla chips for other people

Not necessarily. There might be a surplus.

But if you want to say that everything affects others, albeit to a tiny extent, then it follows that everything is a tiny bit moral.

I don’t value going along with the majority in and of itself.

You previously made some statements that sounded a lot like that.

Sure. Pain and pleasure and value are the roots of morality.

That statement needs some justification. Is it better to do good things voluntarily, or because you are forced to?

Experience is always individual and internal and value can’t exist outside of experience and morality can’t exist outside of value.

OK, I thought it was something like that. The thing is that subjects can have values which are inherently interpersonal and even objective...things like empathy and rationality. So “value held by a subject” does not imply “selfish value”.

The parts of your brain that make you value certain experiences are not connected to the parts of my brain that make me value certain experiences, which means the fact that your experiences aren’t mine is sufficient to refute the idea that your experiences would or should somehow motivate me in and of themselves.

Yet again, objective morality is not a case of one subject being motivated by another subject’s values. Objectivity is not achieved by swapping subjects.

• Did you notice my references to “first order” and “higher order”?

This is a black box. Explain what they mean and how you generate the connection between the two.

By using rational-should as an intermediate.

You claim that a rational-should exists. Prove it.

Sometimes you need to follow impersonal, universalisable...maybe even objective...moral reasoning?

Using objective principles as a tool to evaluate tradeoffs between subjective values is not the same as using objective principles to produce moral truths.

I don’t know why you think “do what thou wilt” is morality. It would be like having a system of logic that can prove any claim.

That’s not my definition of morality, it’s the conclusion I end up with. Your analogy doesn’t seem valid to me because I don’t conclude that all moral claims are equal but that all desires are good. Repressing desires or failing to achieve desires is bad. Additionally, it’s clear to me why a logical system that proves everything is bad, but why would a moral system that did the same be invalid?

“All the better for me” does not mean “optimal morality”. The job of logic is not to prove everything I happen to believe, and the job of morality is not to confirm all my impulses.

I agree. I didn’t claim either of those things. Morality doesn’t have a job outside of distinguishing between right and wrong.

What’s weird? Empathy? Morality? Rationality?

The idea that all principles you act upon must be universalizable. It’s bad because individuals are different and should act differently. The principle I defend is a universalizable one, that individuals should do what they want. The difference between mine and yours is that mine is broad and all people are happy when it’s applied to their case, but yours is narrow and exclusive and egocentric because it neglects differences in individual values, or holds those differences to be morally irrelevant.

Not necessarily. There might be a surplus.

But if you want to say that everything affects others, albeit to a tiny extent, then it follows that everything is a tiny bit moral.

Subtraction, have you heard of it?

Some things are neutral even though they affect others.

That statement needs some justification. Is it better to do good things voluntarily, or because you are forced to?

Voluntarily, because that means you’re acting on your values.

OK, I thought it was something like that. The thing is that subjects can have values which are inherently interpersonal and even objective...things like empathy and rationality. So “value held by a subject” does not imply “selfish value”.

If I valued rationality, why would that result in specific moral decrees? Value held by a subject doesn’t imply selfish value, but it does imply that the values of others are only relevant to my morality insofar as I empathize with those others.

Yet again, objective morality is not a case of one subject being motivated by another subject’s values. Objectivity is not achieved by swapping subjects.

“Objectivity” in ethics is achieved by abandoning individual values and beliefs and trying to produce statements which would be valued and believed by everyone. That’s stupid because we can never escape the locus of the self and because morality emerges from internal processes and neglecting those internal processes means that there is zero foundation for any sort of morality. I’m saying that morality is only accessible internally, and that the things which produce morality are internal subjective beliefs.

If you continue to disagree, I suggest we start over. Let me know and I’ll post an argument that I used last year in debate. I feel like starting over would clarify things a lot because we’re getting bogged down in a hyperspecific line-by-line back-and-forth here.

• This is a black box. Explain what [first order and higher order] mean and how you generate the connection between the two.

Usual meaning in this type of discussion.

You claim that a rational-should exists. Prove it.

If I can prove anything to you, you are already running on rational-should.

Using objective principles as a tool to evaluate tradeoffs between subjective values is not the same as using objective principles to produce moral truths.

Why not?

That’s not my definition of morality, it’s the conclusion I end up with.

That doesn’t help. It’s not morality whether it’s assumed or concluded.

The idea that all principles you act upon must be universalizable [is weird]

It’s bad because individuals are different and should act differently.

Individuals are different and would act differently. You are arguing as though people should never do anything unless it is morally obligated, as though moral rules are all-encompassing. I never said that. Morality does not need to determine every action any more than civil law does.

The principle I defend is a universalizable one, that individuals should do what they want.

That isn’t universalisable because you don’t want to be murdered. The correct form is “individuals should do what they want unless it harms another”.

The difference between mine and yours is that mine is broad and all people are happy when its applied to their case,

We don’t have it. If people wanted your principle, they would abolish all laws.

but yours is narrow and exclusive and egocentric

!!!

If I valued rationality, why would that result in specific moral decrees?

Look at examples of people arguing about morality.

ETA: Better restrict that to liberals.

There’s plenty about, even on this site.

Value held by a subject doesn’t imply selfish value, but it does imply that the values of others are only relevant to my morality insofar as I empathize with those others.

Nope. Rationality too.

“Objectivity” in ethics is achieved by abandoning individual values and beliefs

Of course not. It is a perfectly acceptable principle that people should be allowed to realise their values so long as they do not harm others. Where do you get these ideas?

and trying to produce statements which would be valued and believed by everyone.

Just everyone rational. The police are there for a reason.

That’s stupid because we can never escape the locus of the self and because morality emerges from internal processes

Yet again: we can internally value what is objective and impartial. “In me” doesn’t imply “for me”.

and neglecting those internal processes means that there is zero foundation for any sort of morality.

“Neglect” is your straw man.

I’m saying that morality is only accessible internally, and that the things which produce morality are internal subjective beliefs.

Yet again: “In me” doesn’t imply “for me”.

If you continue to disagree, I suggest we start over. Let me know and I’ll post an argument that I used last year in debate. I feel like starting over would clarify things a lot because we’re getting muddled down in a line-by-line back-and-forth hyperspecific conversation here.

If you like.

• Laws are justified because subjective desires are inherently justified because they’re inherently motivational.

What you need to justify is imprisoning someone for offending against values they don’t necessarily subscribe to. That you are motivated by your values, and the criminal by theirs, doesn’t give you the right to jail them.

• I actually see that as counter-intuitive.

“Morality” is indeed being used to regulate individuals by some individuals or groups. When I think of morality, however, I think “greater total utility over multiple agents, whose value systems (utility functions) may vary”. Morality seems largely about taking actions and making decisions which achieve greater utility.
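
The “greater total utility over multiple agents” idea can be sketched in a few lines of Python. Everything here is illustrative: the agents, the candidate actions, and the payoff numbers are invented for the example, not drawn from any formal theory.

```python
# Hypothetical sketch: each agent has its own utility function over
# actions, and the "moral" action is the one maximizing summed utility.
agent_utilities = {
    "alice": lambda action: {"share": 3, "hoard": 1}[action],
    "bob":   lambda action: {"share": 2, "hoard": 3}[action],
}

def total_utility(action):
    # Sum every agent's utility for the candidate action.
    return sum(u(action) for u in agent_utilities.values())

# "share" totals 3 + 2 = 5, "hoard" totals 1 + 3 = 4.
best_action = max(["share", "hoard"], key=total_utility)  # "share"
```

Note that the utility functions differ between agents, as the comment says they may; only the aggregation step treats them uniformly.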

• I do this, except I only use my own utility and not other agents’. For me, outside of empathy, I have no more reason to help other people achieve their values than I do to help the Babyeaters eat babies. The utility functions of others don’t inherently connect to my motivational states, and grafting the values of others onto my decision calculus seems weird.

I think most people become utilitarians instead of egoists because they empathize with other people, while never seeing the fact that to the extent that this empathy moves them it is their own value and within their own utility function. They then build the abstract moral theory of utilitarianism to formalize their intuitions about this, but because they’ve overlooked the egoist intermediary step the model is slightly off and sometimes leads to conclusions which contradict egoist impulses or egoist conclusions.

• Or they adopt utilitarianism, or some other non-subjective system, because they value having a moral system that can apply to, persuade, and justify itself to others. (Or in short: they value having a moral system).

• In my view there’s a difference between having a moral system (defined as something that tells you what is right and what is wrong) and having a system that you use to justify yourself to others. That difference generally isn’t relevant because humans tend to empathize with each other and humans have a very close cluster of values so there are lots of common interests.

• My computer won’t load the website because it’s apparently having issues with flash, can you please summarize? If you’re just making a distinction between yourself and your beliefs, sure, I’ll concede that. I was a bit sloppy with my terminology there.

• It’s not “my beliefs” either. “Justification is the reason why someone (properly) holds the belief, the explanation as to why the belief is a true one, or an account of how one knows what one knows.”

• Okay. I think I’ve explained the justification then. Specific moral systems aren’t necessarily interchangeable from person to person, but they can still be explained and justified in a general sense. “My values tell me X, therefore X is moral” is the form of justification that I’ve been defending.

• Yet again, you run into the problem that you need it to be wrong for other people to murder you, which you can’t justify with your values alone.

• No I don’t. I need to be stronger than the people who want to murder me, or to live in a society that deters murder. If someone wants to murder me, it’s probably not the best strategy to start trying to convince them that they’re being immoral.

You’re making an argumentum ad consequentiam. You don’t decide metaethical issues by deciding what kind of morality it would be ideal to have and then working backwards. Just because you don’t like the type of system that morality leads to overall doesn’t mean that you’re justified in ignoring other moral arguments.

The benefit of my system is that it’s right for me to murder people if I want to murder them. This means I can do things like self defense or killing Nazis and pedophiles with minimal moral damage. This isn’t a reason to support my system, but it is kind of neat.

• No I don’t. I need to be stronger than the people who want to murder me,

That’s giving up on morality, not defending subjective morality.

or to live in a society that deters murder.

Same problem. That’s either group morality or non-morality.

If someone wants to murder me, it’s probably not the best strategy to start trying to convince them that they’re being immoral.

I didn’t say it was the best practical strategy. The moral and the practical are different things. I am saying that for morality to be what it is, it needs to offer reasons for people not to act on some of their first-order values. That morality is not legality or brute force or a magic spell is not relevant.

You’re making an argumentum ad consequentiam. You don’t decide metaethical issues by deciding what kind of morality it would be ideal to have and then working backwards.

I am starting with what kind of morality it would be adequate to have. If you can’t bang in a nail with it, it isn’t a hammer.

Just because you don’t like the type of system that morality leads to overall

Where on earth did I say that?

The benefit of my system is that it’s right for me to murder people if I want to murder them.

That’s not a benefit, because murder is just the sort of thing morality is supposed to condemn. Hammers are for nails, not screws, and morality is not for “I can do whatever I want regardless”.

This means I can do things like self defense

Justifiable self defense is not murder. You seem to have confused ethical objectivism (morality is not just personal preference) with ethical absolutism (moral principles have no exceptions). Read yer Wikipedia!

• That’s giving up on morality, not defending subjective morality.

Morality is a guide for your own actions, not a guide for getting people to do what you want.

Same problem. That’s either group morality or non-morality.

Rational self-interested individuals decide to create a police force.

Arguments ad consequentiam are still invalid.

I didn’t say it was the best practical strategy. The moral and the practical are different things. I am saying that for morality to be what it is, it needs to offer reasons for people not to act on some of their first-order values. That morality is not legality or brute force or a magic spell is not relevant.

Sure, but morality needs to have motivational force or it’s useless and stupid. Why should I care? Why should the burglar? If you’re going to keep insisting that morality is what’s preventing people from doing evil things, you need to explain how your accounting of morality overrules inherent motivation and desire, and why it’s justified in doing that.

I am starting with what kind of morality it would be adequate to have. If you can’t bang in a nail with it, it isn’t a hammer.

This is not how metaethics works. You don’t get to start with a predefined notion of adequate. That’s the opposite of objectivity. By neglecting metaethics, you’re defending a model that’s just as subjective as mine, except that you don’t acknowledge that and you seek to vilify those who don’t share your preferences.

Where on earth did I say that?

You’re arguing that subjective morality can’t be right because it would lead to conclusions you find undesirable, like random murders.

That’s not a benefit, because murder is just the sort of thing morality is supposed to condemn. Hammers are for nails, not screws, and morality is not for “I can do whatever I want regardless”.

Stop muddling the debate with unjustified assumptions about what morality is for. If you want to talk about something else, fine. My definition of morality is that morality is what tells individuals what they should and should not do. That’s all I intend to talk about.

You’ve conceded numerous things in this conversation, also. I’m done arguing with you because you’re ignoring any point that you find inconvenient to your position and because you haven’t shown that you’re rational enough to escape your dogma.

• Morality is a guide for your own actions,

No, it is largely about regulating interactions such as rape, theft, and murder.

not a guide for getting people to do what you want.

I never said morality is to make others do what I want. That is a persistent straw man on your part.

Rational self-interested individuals decide to create a police force.

So?

Arguments ad consequentiam are still invalid.

“It’s not a hammer if it can’t bang in a nail” isn’t invalid.

Sure, but morality needs to have motivational force or it’s useless and stupid. Why should I care?

If you are rational you will care about rationality-based morality. If you are not... what are you doing on LW?

Why should the burglar? If you’re going to keep insisting that morality is what’s preventing people from doing evil things, you need to explain how your accounting of morality overrules inherent motivation and desire, and why it’s justified in doing that.

The motivation to be rational is a motivation. I didn’t say non-motivations override motivations. Higher order and lower order, remember.

This is not how metaethics works. You don’t get to start with a predefined notion of adequate.

Why not? I can see a priori what would make a hammer adequate.

You’re arguing that subjective morality can’t be right because it would lead to conclusions you find undesirable, like random murders.

Conclusions that just about anyone would find undesirable. Objection to random murder is not some weird peccadillo of mine.

Stop muddling the debate with unjustified assumptions about what morality is for.

Calling something unjustified doesn’t prove anything.

My definition of morality is that morality is what tells individuals what they should and should not do.

What’s the difference? If you should not commit a murder (your definition), then a potential interaction has been regulated (my version).

You’ve conceded numerous things in this conversation, also. I’m done arguing with you because you’re ignoring any point that you find inconvenient to your position

Please list them.

and because you haven’t shown that you’re rational enough to escape your dogma.

What dogma?

• No, it is largely about regulating interactions such as rape, theft, and murder.

This is a subset of my possible individual actions. Every interaction is an action.

Morality is not political, which is what you’re making it into. Morality is about right and wrong, and that’s all.

I never said morality is to make others do what I want. That is a persistent straw man on your part.

You’re using morality for more than individual actions. Therefore, you’re using it for other people’s actions, for persuading them to do what you want to do. Otherwise, your attempt to distinguish your view from mine fails.

“It’s not a hammer if it can’t bang in a nail” isn’t invalid.

Then you’re using a different definition of morality which has more constraints than my definition. My definition is that morality is anything that tells an individual which actions should or should not be taken, and that no other requirements are necessary for morality to exist. If your conception of morality guides individual actions as well, but also has additional requirements, I’m contending that your additional requirements have no valid metaphysical foundation.

The motivation to be rational is a motivation. I didn’t say non-motivations override motivations. Higher order and lower order, remember.

Rationality is not a motivation, it is value-neutral.

Why not? I can see a priori what would make a hammer adequate.

You can start with a predefined notion of adequate, but only if you justify it explicitly.

What moral system do you defend? How does rationality result in moral principles? Can you give me an example?

Conclusions that just about anyone would find undesirable. Objection to random murder is not some weird peccadillo of mine.

Not relevant. People are stupid. Arguments ad consequentiam are logically invalid. Use Wikipedia if you doubt this.

Calling something unjustified doesn’t prove anything.

If your assumptions were justified, I missed it. Please justify them for me.

What’s the difference? If you should not commit a murder (your definition), then a potential interaction has been regulated (my version).

Our definitions overlap in some instances but aren’t identical. You add constraints, such as the idea that any moral system which justifies murder is not a valid moral system. Yours is also narrower than mine because mine holds that morality exists even in the context of wholly isolated individuals, whereas yours says morality is about interpersonal interactions.

Please list them.

I was mistaken because I hadn’t seen your other comment. I read the comments out of order. My apologies.

What dogma?

You’re arguing from definitions instead of showing the reasoning process which starts with rational principles and ends up with moral principles.

• Every interaction is an action.

It is not rational to decide actions which are interactions on the preferences of one party alone.

Morality is not political

Weren’t you saying that the majority decide what is moral?

You’re using morality for more than individual actions.

Aren’t you?

Therefore, you’re using it for other people’s actions, for persuading them to do what you want to do.

Everybody is using it for their and everybody else’s actions. I play no central role.

If your conception of morality guides individual actions as well, but also has additional requirements, I’m contending that your additional requirements have no valid metaphysical foundation.

That depends on whether or not your “individual actions” include interactions. If they do, the interests of the other parties need to be taken into account.

Rationality is not a motivation, it is value-neutral.

How does anyone end up rational if no one is motivated to be? Are you quite sure you haven’t confused

“rationality is value-neutral because you don’t get any values out of it that you don’t put into it”

with

“No one would ever value rationality”

You can start with a predefined notion of adequate, but only if you justify it explicitly.

I don’t have to justify common definitions.

What moral system do you defend?

Where did I say I was defending one? I said subjectivism doesn’t work.

Arguments ad consequentiam are logically invalid.

You cannot logically conclude that something exists in objective reality because you like its consequences. But morality doesn’t exist in objective reality. It is a human creation, and humans are entitled to reject versions of it that don’t work because they don’t work.

If your assumptions were justified, I missed it. Please justify them for me.

The burden is on you to explain how your definition, “morality is about right and wrong”, is different from mine: “morality is about the regulation of conduct”.

Our definitions overlap in some instances but aren’t identical. You add constraints, such as the idea that any moral system which justifies murder is not a valid moral system.

It obviously isn’t. If our definitions differ, mine is right.

Yours is also narrower than mine because mine holds that morality exists even in the context of wholly isolated individuals, whereas yours says morality is about interpersonal interactions.

I said “largely”.

You’re arguing from definitions

You say that like it’s a bad thing.

instead of showing the reasoning process which starts with rational principles and ends up with moral principles.

Why would I need to do that to show that subjectivism is wrong?

• I don’t want to spend any more time on this. I’m done.

• Your usage of the words “subjective” and “objective” is confusing.

Utilitarianism doesn’t forbid that each individual person (agent) have different things they value (utility functions). As such, there is no universal specific simple rule that can apply to all possible agents to maximize “morality” (total sum utility).

It is “objective” in the sense that if you know all the utility functions, and try to achieve the maximum possible total utility, this is the best thing to do from an external standpoint. It is also “objective” in the sense that when your own utility is maximized, that is the best possible thing that you could have, regardless of whatever anyone might think about it.

However, it is also “subjective” in the sense that each individual can have their own utility function, and it can be whatever you could imagine. There are no restrictions in utilitarianism itself. My utility is not your utility, unless your utility function has a component that values my utility and you have full knowledge of my utility (or even if you don’t, but that’s a theoretical nitpick).
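
The clause “unless your utility function has a component that values my utility” can be made concrete with a toy sketch. The agents, the empathy weight, and the state variables below are all invented for illustration.

```python
# Agent A cares only about its own apples.
def utility_a(state):
    return state["apples_a"]

# Agent B cares about its own apples *plus* a weighted copy of A's
# utility -- the "component that values my utility" from the text.
def utility_b(state, empathy_weight=0.5):
    return state["apples_b"] + empathy_weight * utility_a(state)

state = {"apples_a": 4, "apples_b": 2}
utility_b(state)  # 2 + 0.5 * 4 = 4.0
```

With `empathy_weight=0`, B’s utility collapses back to the purely self-regarding case, which is the “my utility is not your utility” default.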

Utilitarianism alone doesn’t apply to, persuade, or justify any action that affects values to anyone else. It can be abused as such, but that’s not what it’s there for, AFAIK.

• I think specific applications of utilitarianism might say that modifying the values of yourself or of others would be beneficial even in terms of your current utility function.

• Yeah.

When things start getting interesting is when not only are some values implemented as variable-weight within the function, but the functions themselves become part of the calculation, and utility functions become modular and partially recursive.

I’m currently convinced that there’s at least one (perhaps well-hidden) such recursive module of utility-for-utility-functions currently built into the human brain, and that clever hacking of this module might be very beneficial in the long run.

• Utilitarianism alone doesn’t apply to, persuade, or justify any action that affects values to anyone else. It can be abused as such, but that’s not what it’s there for,

Are you saying that no form of utilitarianism will ever conclude that one person should sacrifice some value for the benefit of the many?

• No form of the official theory in the papers I read, at the very least.

Many applications or implementations of utilitarianism or utilitarian(-like) systems do, however, enforce rules that if one agent’s weighted utility loss improves the total weighted utility of multiple other agents by a significant margin, that is what is right to do. The margin’s size and specific numbers and uncertainty values will vary by system.

I’ve never seen a system that would enforce such rules without some kind of weighting function for the utilities, to correct for limited information, uncertainty, and diminishing-returns-like problems.
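
A margin rule of the kind described here can be sketched as follows. The specific weights, numbers, and margin value are assumptions made up for the example; real systems would derive them from their weighting functions.

```python
# Sketch: one agent's weighted loss is acceptable only if the other
# agents' weighted gains exceed it by some margin.
def is_right(loss, loss_weight, gains, gain_weights, margin=1.5):
    weighted_loss = loss * loss_weight
    weighted_gain = sum(g * w for g, w in zip(gains, gain_weights))
    return weighted_gain >= margin * weighted_loss

is_right(2, 1.0, [1, 1, 2], [1, 1, 1])  # gain 4 >= 1.5 * 2 -> True
is_right(3, 1.0, [1, 1, 1], [1, 1, 1])  # gain 3 <  1.5 * 3 -> False
```

The `margin` parameter is doing the uncertainty-correction work mentioned above: the larger it is, the more confident the system must be before sanctioning a loss.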

• No form of the official theory in the papers I read, at the very least.

Many applications or implementations of utilitarianism or utilitarian(-like) systems do, however, enforce rules that if one agent’s weighted utility loss improves the total weighted utility of multiple other agents by a significant margin, that is what is right to do. The margin’s size and specific numbers and uncertainty values will vary by system.

It seems to me that these two paragraphs contradict each other. Do you think “he should” means something different from “it is right for him to do so”?

• No, they don’t have any major differences in utilitarian systems.

It seems I was confused when trying to answer your question. Utilitarianism can be seen as an abstract system of rules to compute stuff.

Certain ways to apply those rules to compute stuff are also called utilitarianism, including the philosophy that the maximum total utility of a population should prevail over the utility of one individual.

If utilitarianism is simply the set of rules you use to compute which things are best for one single purely selfish agent, then no, nothing concludes that the agent should sacrifice anything. If you adhere to the classical philosophy related to those rules, then yes, any human will conclude what I’ve said in that second paragraph in the grandparent (or something similar). The latter (the philosophy) is historically what appeared first, and is also what’s exposed on Wikipedia’s page on utilitarianism.

• If utilitarianism is simply the set of rules you use to compute which things are best for one single purely selfish agent,

Isn’t that decision theory?

• I share this view. When I appear to forfeit some utility in favor of someone else, it’s because I’m actually maximizing my own utility by deriving some from the knowledge that I’m improving the utility of other agents.

Other agents’ utility functions and values are not directly valued, at least not among humans. Some (most?) of us do indirectly value improving the value and utility of other agents, either as an instrumental step or a terminal value. Because of this, I believe most people who have/profess the belief of an “innate goodness of humanity” are mind-projecting their own value-of-others’-utility.

Whether this is a true value actually shared by all humans is unknown to me. It is possible that those who appear not to have this value are simply broken in some temporary, environment-based manner. It’s also possible that this is a purely environment-learned value that becomes “terminal” in the process of being trained into the brain’s reward centers due to its instrumental value in many situations.

• Because morality is largely to regulate interactions between individuals. That’s its job.

You are anthropomorphizing concepts. Morality is a human artifact, and artifacts have no more purpose than natural objects.

Morality is a useful tool to regulate interactions between individuals. There are efforts to make it a better tool for that purpose. That does not mean that morality should be used to regulate interactions.

• You are anthropomorphizing concepts. Morality is a human artifact, and artifacts have no more purpose than natural objects.

Human artifacts are generally created to do jobs, e.g. hammers.

Morality is a useful tool

Tool. Like I said.

That does not mean that morality should be used to regulate interactions.

Does that mean you have a better tool in mind, or that interactions don’t need regulation?

• If I put a hammer under a table to keep the table from wobbling, am I using a tool or not? If the hammer is the only object within range that is the right size for the table, and there is no task which requires a weighted lever, is the hammer intended to balance the table simply by virtue of being the best tool for the job?

Fit-for-task is a different quality than purpose. Hammers are useful tools to drive nails, but poor tools for determining what nails should be driven. There are many nails that should not be driven, despite the presence of hammers.

• If I put a hammer under a table to keep the table from wobbling, am I using a tool or not?

If you can’t bang in nails with it, it isn’t a hammer. What else you can do with it isn’t relevant.

There are many nails that should not be driven, despite the presence of hammers.

???

So we can judge things morally wrong, because we have a tool to do the job, but we shouldn’t in many cases, because...? (And what kind of “shouldn’t” is that?)

• If you can’t bang in nails with it, it isn’t a hammer. What else you can do with it isn’t relevant.

By that, the absence of nails makes the weighted lever not a hammer. I think that hammerness is intrinsic and not based on the presence of nails; likewise morality can exist when there is only one active moral agent.

• The metaphor was that you could, in principle, drive nails literally everywhere you can see, including in your brain. Will you agree that one should not drive nails literally everywhere, but only in select locations, using the right type of nail for the right location? If you don’t, this part of the conversation is not salvageable.

• What is that supposed to be analogous to? If you have a workable system of ethics, then it doesn’t make judgments willy-nilly, any more than a workable system of logic allows quodlibet.

• The metaphor was that you could, in principle, make rules and laws for literally any possible action, including living. Will you agree that one should not make fixed rules for literally all actions, but only for select high-negative-impact ones, using the right type of rule for the right action?

(Edited for explicit analogy.)

Basically, the fact that you have a morality (hammer) which happens to be convenient for making laws and rules of interaction (balancing the table) doesn’t mean that morality is necessarily the best and intended tool for making rules, or that morality itself tells you what you should make laws about, or even that you should make laws in the first place.

• Moral rules and legal laws aren’t the same thing. Modern societies don’t legislate against adultery, although they may consider it against the moral rules.

If you are going to override a moral rule (i.e. neither punish nor even disapprove of an action), what would you override it in favour of? What would count more?

• I would refuse to allow moral judgement on things which lie outside of the realm of appropriate morality. Modern societies don’t legislate against adultery because consensual sex is amoral. Using moral guidelines to determine which people are allowed to have consensual sex is like using a hammer to open a window.

• Oh, that was your concern. It has no bearing on what I was saying.

• I don’t see where I’ve implied that one would override a moral rule. What I’m saying is that most current moral systems are not good enough to even make rational rules about some types of actions in the first place, and that in the long run we would regret doing so after doing some metaethics.

Uncertainty and the lack of reliability of our own minds and decision systems are key points of the above.

• Why describe them as subjective when they are intersubjective?

Because they’re not written on a stone tablet handed down to Humanity from God the Holy Creator, or derived from some other verifiable, falsifiable and physical fact of the universe independent of humans? And because there are possible variations within the value systems, rather than them being perfectly uniform and identical across the entire species?

I have warning lights that there’s an argument about definitions here.

• Because they’re not written on a stone tablet handed down to Humanity from God the Holy Creator, or derived from some other verifiable, falsifiable and physical fact of the universe independent of humans?

That would make them not-objective. Subjective and intersubjective remain as options.

And because there are possible variations within the value systems, rather than them being perfectly uniform and identical across the entire species?

Then, again, why would anyone else be beholden to my values?

• Because valuing others’ subjective values, or acting as if one did, is often a winning strategy in game-theoretic terms.

If one posits that by working together we can achieve a utopia where each individual’s values are maximized, and that to work together efficiently we need to at least act according to a model that would assign utility to others’ values, would it not follow that it’s in everyone’s best interests for everyone to build and follow such models?

The free-loader problem is an obvious downside of the above simplification, but that and other issues don’t seem to be part of the present discussion.

• Because valuing others’ subjective values, or acting as if one did, is often a winning strategy in game-theoretic terms.

That doesn’t make them beholden—obligated. They can opt not to play that game. They can opt not to value winning.

If one posits that by working together we can achieve a utopia where each individual’s values are maximized, and that to work together efficiently we need to at least act according to a model that would assign utility to others’ values, would it not follow that it’s in everyone’s best interests for everyone to build and follow such models?

Only if it satisfies individuals better than behaving selfishly would. A utopia that is better on average or in total need not be better for everyone individually.

• Could you taboo “beholden” in that first sentence? I’m not sure the “feeling of moral duty born from guilt” I associate with the word “obligated” is quite what you have in mind.

They can opt not to play that game. They can opt not to value winning.

Within context, you cannot opt to not value winning. If you wanted to “not win”, and the preferred course of action is to “not win”, this merely means that you had a hidden function that assigned greater utility to a lower apparent utility within the game.

In other words, you just didn’t truly value what you thought you valued, but some other thing instead, and you end up having in fact won at your objective of not winning that sub-game within your overarching game of opting to play the game or not (the decision to opt to play the game or not is itself a separate higher-tier game, which you have won by deciding to not-win the lower-tier game).

A utopia which purports to maximize utility for each individual but fails to optimize for higher-tier or meta utilities and values is not truly maximizing utility, which violates the premises.

(sorry if I’m arguing a bit by definition with the utopia thing, but my premise was that the utopia brings each individual agent’s utility to its maximum possible value if there exists a maximum for that agent’s function)

• I wouldn’t let my values be changed if doing so would thwart my current values. I think you’re contending that the utopia would satisfy my current values better than the status quo would, though.

In that case, I would only resist the utopia if I had a deontic prohibition against changing my values (I don’t have very strong ones, but I think they’re in here somewhere, for some things). You would call this a hidden utility function; I don’t think that adequately models the idea that humans are satisficers and not perfect utilitarians. Deontology is sometimes a way of identifying satisficing conditions for human behavior; in that sense I think it can be a much stronger argument.

Even supposing that we were perfect utilitarians, if I placed more value on maintaining my current values than I do on anything else, I would still reject modifying myself and moving towards your utopia.

• Do you think the utopia is feasible?

• Naw. But even if it was, if I placed value on maintaining my current values to a high degree, I wouldn’t modify.

• Within context, you cannot opt to not value winning. If you wanted to “not win”, and the preferred course of action is to “not win”, this merely means that you had a hidden function that assigned greater utility to a lower apparent utility within the game.

Games emerge where people have things other people value. If someone doesn’t value those sorts of things, they are not going to game-play.

A utopia which purports to maximize utility for each individual but fails to optimize for higher-tier or meta utilities and values is not truly maximizing utility, which violates the premises.

I don’t see where higher-tier functions come in.

You are assuming that a utopia will maximise everyone’s values individually AND that values diverge. That’s a tall order.

• I love this post, and will be recommending it.

Speaking as a non-mathematician, I think I would have tried to express ‘there’s only one chain’ by saying something like ‘all numbers can be reached by a finite number of repetitions of considering the successor of a number you’ve already considered, starting from zero’.

• We can try to write that down as “For all x, there is an n such that x = S(S(...S(0)...)) repeated n times.”

The two problems that we run into here are: first, that repeating S n times isn’t something we know how to do in first-order logic: we have to say that there exists a sequence of repetitions, which requires quantifying over a set. Second, it’s not clear what sort of thing “n” is. It’s a number, obviously, but we haven’t pinned down what we think numbers are yet, and this statement becomes awkward if n is an element of some other chain that we’re trying to say doesn’t exist.

• repeating S n times isn’t something we know how to do in first-order logic

Why not? Repeating S n times is just addition, and addition is defined in the first-order Peano axioms. I just took these from my textbook:

∀y.plus(0,y,y)

∀x.∀y.∀z.(plus(x,y,z) ⇒ plus(s(x),y,s(z)))

∀x.∀y.∀z.∀w.(plus(x,y,z) ∧ ¬same(z,w) ⇒ ¬plus(x,y,w))

I’ve also seen addition defined recursively somehow, so each step it subtracted 1 from the second number and added 1 to the first number, until the second number was equal to zero. Something like this:

∀x.∀y.∀z.∀w.(plus(x,y,z) ⇒ plus(s(x),w,z) ∧ same(s(w),y))

From this you could define subtraction in a similar way, and then state that all numbers subtracted from themselves must equal 0. This would rule out nonstandard numbers.
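For concreteness, here is a minimal Python sketch of successor-style arithmetic (the tuple representation and the names `ZERO`, `S`, `plus`, `sub`, `to_int` are mine, not from the thread):

```python
# Represent Peano numerals as nested pairs: 0 is (), S(n) is ("S", n).
ZERO = ()

def S(n):
    return ("S", n)

def plus(x, y):
    # plus(0, y) = y;  plus(S(x), y) = S(plus(x, y))
    if x == ZERO:
        return y
    return S(plus(x[1], y))

def sub(x, y):
    # sub(x, 0) = x;  sub(S(x), S(y)) = sub(x, y)
    # (only defined when y does not exceed x, since we are on the naturals)
    if y == ZERO:
        return x
    return sub(x[1], y[1])

def to_int(n):
    # Count how many times S was applied.
    steps = 0
    while n != ZERO:
        n = n[1]
        steps += 1
    return steps

two = S(S(ZERO))
three = S(S(S(ZERO)))
assert to_int(plus(two, three)) == 5
assert sub(three, three) == ZERO  # n - n = 0, witnessed by computation
```

Note the catch, which is exactly the point made in the replies below: every numeral we can actually construct this way is standard, so the computation only ever witnesses n − n = 0 for standard numbers; it says nothing about the elements of a nonstandard model.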

• From this you could define subtraction in a similar way, and then state that all numbers subtracted from themselves must equal 0. This would rule out nonstandard numbers.

That will not rule out nonstandard models of the first-order Peano axioms. If a subtraction predicate is defined by:

∀x. sub(x,0,x)

∀x.∀y.∀z. sub(x,y,z) ⇒ sub(s(x),s(y),z)

then you don’t need to add that all numbers subtracted from themselves must equal 0. ∀x.sub(x,x,0) is already a theorem, which can be proved almost immediately from those axioms and the first-order induction schema. Being a theorem, it is true in all models. Every nonstandard element of a nonstandard model, subtracted from itself, gives 0.

It may seem odd that a statement proved by induction is necessarily true even of those elements of a non-standard model that, in our mental picture of them, cannot be reached by counting upwards from zero, but the induction axiom scheme explicitly says just that: if P(0) and ∀x.(P(x) ⇒ P(s(x))) then ∀x.P(x). The conclusion is not limited to standard values of x, because the language cannot distinguish standard from non-standard values.

• If you already have an axiom of induction then you’ve already ruled out nonstandard numbers and this isn’t an issue. I was trying to show that without the second-order axiom of induction, you can rule out nonstandard numbers.

The recursive subtract predicate will never reach zero on a nonstandard number; therefore it cannot be true that n−n=0.

• If you already have an axiom of induction then you’ve already ruled out nonstandard numbers and this isn’t an issue. I was trying to show that without the second-order axiom of induction, you can rule out nonstandard numbers.

Without second-order logic, you cannot rule out nonstandard numbers. As Epictetus mentioned, the Lowenheim-Skolem Theorem implies that if there is a model of first-order Peano arithmetic, there are models of all infinite cardinalities.

You have to distinguish the axioms from the meanings one intuitively attaches to them. We have an intuitive idea of the natural numbers, and the Peano axioms (including the induction schema) seem to be true of them. However, ZFC set theory (for example) provably contains models of those axioms other than the natural numbers of our intuition.

The induction schema seems to formalise our notion that every natural number is reachable by counting up from zero. But look more closely and you can intuitively read it like this: if you can prove that P is true of every number you can reach by counting, then P is true of every number (even those you can’t reach by counting, if there are any).

The predicate “is a standard number” would be a counterexample to that, but the induction schema is asserted only for formulas P expressible in the language of Peano arithmetic. Given the existence of non-standard models, the fact that “is a standard number” does not satisfy the induction schema demonstrates that it is not definable in the language.

The subtraction predicate provably satisfies ∀n. n-n = 0. Therefore every model of the Peano axioms satisfies that—it would not be a model if it did not.

(Technical remark: I should not have added “sub” as a new symbol, which creates a different language, an extension of Peano arithmetic. Instead, “sub(x,y,z)” should be introduced as a metalinguistic abbreviation for y+z=x, which is a formula of Peano arithmetic. One can still prove ∀x. sub(x,x,0), and without even using induction. Expanding the abbreviation gives x+0 = x, which is one of the axioms, e.g. as listed here.)

• I refer you to the Lowenheim-Skolem Theorem:

Every (countable) first-order theory that has an infinite model has a model of size k for every infinite cardinal k. You cannot use first-order logic to exclude non-standard numbers unless you want to abandon infinite models altogether.

• Repeating S n times is not addition: addition is the thing defined by those axioms, no more, and no less. You can prove the statements:

∀x. plus(x, 1, S(x))

∀x. plus(x, 2, S(S(x)))

∀x. plus(x, 3, S(S(S(x))))

and so on, but you can’t write “∀x. plus(x, n, S(S(...n...S(x))))” because that doesn’t make any sense. Neither can you prove “For every x, x+n is reached from x by applying S to x some number of times” because we don’t have a way to say that formally.

From outside the Peano Axioms, where we have our own notion of “number”, we can say that “Adding N to x is the same as taking the successor of x N times”, where N is a real-honest-to-god-natural-number. But even from the outside of the Peano Axioms, we cannot convince the Peano Axioms that there is no number called “pi”. If pi happens to exist in our model, then all the values …, pi-2, pi-1, pi, pi+1, pi+2, … exist, and together they can be used to satisfy any theorem about the natural numbers you concoct. (For instance, sub(pi, pi, 0) is a true statement about subtraction, so the statement “∀x. sub(x, x, 0)” can be proven but does not rule out pi.)

• “For every x, x+n is reached from x by applying S to x some number of times” because we don’t have a way to say that formally.

But that’s what I’m trying to say. To say n number of times, you start with n and subtract 1 each time until it equals zero. So for addition, 2+3 is equal to 3+2, is equal to 4+1, is equal to 5+0. For subtraction you do the opposite and subtract one from the left number too each time.

If no number of subtract-1’s causes it to equal 0, then it can’t be a number.

• I know that’s what you’re trying to say, because I would like to be able to say that, too. But here are the problems we run into.

1. Try writing down “For all x, some number of subtract-1’s causes it to equal 0”. We can write “∀x. ∃y. F(x,y) = 0”, but in place of F(x,y) we want “y iterations of subtract-1 from x”. This is not something we could write down in first-order logic.

2. We could write down sub(x,y,0) (in your notation) in place of F(x,y)=0, on the grounds that it ought to mean the same thing as “y iterations of subtract-1 from x cause it to equal 0”. Unfortunately, it doesn’t actually mean that, because even in the model where pi is a number, the resulting axiom “∀x. ∃y. sub(x,y,0)” is true. If x=pi, we just set y=pi as well.

The best you can do is to add an infinitely long axiom “x=0 or x = S(0) or x = S(S(0)) or x = S(S(S(0))) or …”

• I think I’m starting to get it: there is no property that a natural number could be defined as having that an infinite chain couldn’t also satisfy in theory.

That’s really disappointing. I took a course on logic, and the most inspiring moment was when the professor wrote down the axioms of Peano arithmetic. They are more or less formalizations of all the stuff we learned about numbers in grade school. It was cool that you could just write down what you are talking about formally and use pure logic to prove any theorem with them. It’s sad that it’s so limited you can’t even pin down the numbers.

• “Why does 2+2 come out the same way each time?”

Thoughts that seem relevant:

1. Addition is well defined; that is, if x=x’ and y=y’ then x+y = x’+y’. Not every computable transformation has this property. Consider the non-well-defined function <+> on fractions given by a/b <+> c/d = (a+c)/(b+d). We know that 3/9 = 1/3 and 2/5 = 4/10, but 3/9 <+> 4/10 = 7/19 while 1/3 <+> 2/5 = 3/8, and 7/19 != 3/8.

2. We have the Church-Rosser Theorem (http://en.wikipedia.org/wiki/Church%E2%80%93Rosser_theorem) as a sort of guarantee (in the lambda calculus) that if I compute one way and you compute another, then we can eventually reach common ground.

3. If we consider “a logic” to be a set of rules for manipulating strings, then we can come up with some axioms for classical logic that characterize it uniquely. That is to say, we can logically pinpoint classical logic (say, with the axioms of Boolean algebra) just like we can logically pinpoint the natural numbers (with the Peano axioms).
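The fraction example in point 1 is easy to check directly. A short Python sketch (the name `mediant` is my choice — this operation is traditionally called the mediant of two fractions):

```python
from fractions import Fraction

def mediant(a, b, c, d):
    # The non-well-defined "addition": (a/b) <+> (c/d) = (a+c)/(b+d)
    return Fraction(a + c, b + d)

# 3/9 and 1/3 are the same fraction; so are 2/5 and 4/10 ...
assert Fraction(3, 9) == Fraction(1, 3)
assert Fraction(2, 5) == Fraction(4, 10)

# ... yet applying <+> to equal inputs gives unequal outputs:
assert mediant(3, 9, 4, 10) == Fraction(7, 19)
assert mediant(1, 3, 2, 5) == Fraction(3, 8)
assert mediant(3, 9, 4, 10) != mediant(1, 3, 2, 5)
```

So `<+>` is a perfectly computable transformation on fractional expressions, but it does not define a function on fractions: the answer depends on which representative of the fraction you feed it.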

• I’d say that your “non-well-defined function on fractions” isn’t actually a function on fractions at all; it’s a function on fractional expressions that fails to define a function on fractions.

• Fair enough. We could have “number expressions” which denote the same number, like “ssss0”, “4”, “2+2”, “2*2”. Then the question of well-definedness is whether our method of computing addition gives the same result for each of these different number expressions.
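As a hedged illustration, one can check that several such expressions co-denote. The toy grammar and the name `denotation` below are invented for this sketch:

```python
def denotation(expr):
    # Evaluate a toy "number expression": either a unary numeral like
    # "ssss0" (count the successor applications), or a decimal/arithmetic
    # expression like "4", "2+2", "2*2".
    if set(expr) <= {"s", "0"} and expr.endswith("0"):
        return expr.count("s")
    # eval is acceptable for these tiny, trusted examples
    return eval(expr, {"__builtins__": {}})

exprs = ["ssss0", "4", "2+2", "2*2"]
assert {denotation(e) for e in exprs} == {4}  # all four denote the same number
```

Well-definedness of addition is then the claim that computing a sum from any of these co-denoting representations yields co-denoting results.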

• Because you can prove once and for all that in any process which behaves like integers, 2 thingies + 2 thingies = 4 thingies.

I expected at this point the mathematician to spell out the connection to the earlier discussion of defining addition abstractly—“for every relation R that works exactly like addition...”

• x + Sy = Sz -

That looks a bit odd.

• I think the idea is that one speaker got cut off by the other after having said “x+Sy=Sz”.

• If this were Wikipedia, someone would write a rant about the importance of using typographically correct characters for the hyphen, the minus sign, the en dash, and the em dash ( - − – and — respectively).

• Yeah, I understood that after about 10 seconds of confusion, which seems unnecessary.

• I’m new here, so watch your toes...

As has been mentioned or alluded to, the underlying premise may well be flawed. By considerable extrapolation, I infer that the unstated intent is to find a reliable method for comprehending mathematics, starting with natural numbers, such that an algorithm can be created that consistently arrives at the most rational answer, or set of answers, to any problem.

Everyone reading this has had more than a little training in mathematics. Permit me to digress to ensure everyone recalls a few facts that may not be sufficiently appreciated. Our general education is the only substantive difference between Homo Sapiens today and Homo Sapiens 200,000 years ago.

With each generation the early education of our offspring includes increasingly sophisticated concepts. These are internalized as reliable, even if the underlying reasons have been treated very lightly or not at all. Our ability to use and record abstract symbols appeared at about the same time as farming. The concept that “1” stood for a single object and “2” represented the concept of two objects was established along with a host of other conceptual constructs. Through the ensuing millennia we now have an advanced symbology that enables us to contemplate very complex problems.

The digression is to point out that very complex concepts, such as human logic, require a complex symbology. I struggle with understanding how contemplating a simple artificially constrained problem about natural numbers helps me to understand how to think rationally or advance the state of the art. The example and human rationality are two very different classes of problem. Hopefully someone can enlighten me.

There are some very interesting base alternatives that seem to me to be better suited to a discussion of human rationality. Examining the shape of the Pareto front generated by PIBEA (Prospect Indicator Based Evolutionary Algorithm for Multiobjective Optimization Problems) runs for various real-world variables would facilitate discussions around how each of us weights each variable and what conditional variables change the weight (e.g., urgency).

Again, I intend no offense. I am seeking understanding. Bear in mind that my background is in application of advanced algorithms in real-world situations.

• Due to all this talk about logic I’ve decided to take a little closer look at Goedel’s theorems and related issues, and found this nice LW post that did a really good job dispelling confusion about completeness, incompleteness, SOL semantics etc.: Completeness, incompleteness, and what it all means: first versus second order logic

If there’s anything else along these lines to be found here on LW, or for that matter anywhere, I’m all ears.

• 1 Nov 2012 17:31 UTC

Never mind the question of why the laws of physics are stable—why is logic stable? Of course I can’t imagine it being any other way, but that’s not an explanation.

A hidden meditation, methinks.

• try pondering this one. Why does 2 + 2 come out the same way each time? Never mind the question of why the laws of physics are stable—why is logic stable? Of course I can’t imagine it being any other way, but that’s not an explanation.

Do you have an answer which will be revealed in a later post?

• Every number has a successor. If two numbers have the same successor, they are the same number. There’s a number 0, which is the only number that is not the successor of any other number. And every property true at 0, and for which P(Sx) is true whenever P(x) is true, is true of all numbers. In combination, those premises narrow down a single model in mathematical space, up to isomorphism. If you show me two models matching these requirements, I can perfectly map the objects and successor relations in them.

The property “is the only number which is not the successor of any number” manifestly is false for every Sx.

There is a number ′ (spoken “prime”). The successor of ′ is ′; that is, ′ and its successor are the same number.

There is a number A. Every property which is true of 0, and for which P(Sx) is true whenever P(x) is true, is true of A. The successor of A is B. The successor of B is C. The successor of C is A.

Both of these can be eliminated by adding a property P1. (EDIT for correctness:) It is true of a number y that if Sx=y, then y≠x; it is further true of the number Sy that if Sx=y, then Sy≠x; and so on. But P1 was not required in your description of numbers.

There is also an infinite series, … −3, −2, −1, o, 1, 2, 3, …, which also shares all of the properties of zero for which P(Sx) is true whenever P(x) is true.

I can’t easily find a way to exclude any of the infinite chains using the axioms described here.

• But isn’t P1 required by “And every property true at 0, and for which P(Sx) is true whenever P(x) is true, is true of all numbers.”?

• No, Sx=x is not prohibited by that.

I also phrased the meta-property wrong; I meant to allow it to explode from zero and will edit to clarify.

• Actually, it is prohibited. Let P(x) be the property x ≠ S(x). I will now demonstrate that every natural number x has the property P.

0 ≠ S(0), because 0 is not the successor of any natural number (including itself).

Suppose, for an arbitrary natural number, k, we know that k ≠ S(k). Suppose now that S(k) = S(S(k)). Since k and S(k) have the same successor, we know that k = S(k). But this contradicts our original assumption.

So, for any natural number k, we know that (k ≠ S(k)) implies (S(k) ≠ S(S(k))), or, to rephrase, if P(k) is true, P(S(k)) must be true as well.

Every property true at 0, and for which P(Sx) is true whenever P(x) is true, is true of all numbers.

So, all models satisfying Eliezer’s axioms (actually, Peano’s) do indeed have the property x ≠ S(x), no matter which natural number x you are considering.

Edits: Parentheses, grammar.
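The argument above is exactly a proof by induction that no number is its own successor. For the curious, here is a sketch of how it goes through in Lean 4 using only the core `Nat` type (lemma names from Lean core; the theorem name is mine):

```lean
theorem not_succ_self (n : Nat) : Nat.succ n ≠ n := by
  induction n with
  | zero => exact Nat.succ_ne_zero 0        -- 0 is not the successor of anything
  | succ k ih =>
      intro h                               -- assume S (S k) = S k
      exact ih (Nat.succ.inj h)             -- injectivity of S gives S k = k,
                                            -- contradicting the hypothesis
```

The two branches correspond precisely to the base case and inductive step in the comment: the zero case uses “0 is not the successor of any number”, and the successor case uses “if two numbers have the same successor, they are the same number”.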

• Beat me to the punch! And with clearer formatting, too.

• That breaks the single chain; what prohibits the finite loop, or the infinite chain

Consider the infinite set of properties of the form x ≠ Sn(x)… and every finite chain is also broken, through a similar method.

However, I don’t see what property is true of zero and all of its successors that cannot be true of an infinite chain, every member of which is a successor to a different number.

I would also ask if it is any different to make the change “IFF two numbers have the same successor, they are the same number.”

• Let the property P be, “is a standard number”.

Then P would be true of 0, true along the successor-chain of 0, and false at a separated infinite successor-chain.

Thus P would be a property that was true of 0 and true of the successor of every object of which it was true, yet not true of all numbers.

This would contradict the second-order axiom, so this must not be a model of second-order PA.

• And why aren’t −1, every predecessor of −1, and every successor of −1 standard numbers? Or are we simply using second-order logic to declare in a roundabout way that there is exactly one series of numbers?

Let us postulate a predecessor function: if a number x is a successor of another number, then the predecessor of x is the number which has x as a successor. The predecessor function retains every property that is defined to be retained by the successor function. The alternate chain C’ has o: o has EVERY property of 0 (including null properties) except that it is the successor to -a, and the successor to o is a. Those two properties are not true of S(x) given that they are true of x. Every successor of o in the chain meets the definition of number; I can’t find a property that is not true of -a and the predecessors of a but that is true for every natural number.

• `P(k) := R(k, 0)`

`R(k, n) := (k = n) ∨ R(k, Sn)`

P(0) and P(k) ⇒ P(Sk) can be easily proved, so

`P(k) for all k`

Or something like that.
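Read operationally, `R(k, 0)` is an upward search from 0. A Python sketch (the function name and the explicit step bound are my additions — the bound stands in for “some finite number of steps”, which is exactly the notion first-order logic cannot express internally):

```python
def reachable_from_zero(k, steps=10**6):
    # R(k, n) := (k = n) ∨ R(k, S n), unfolded as an upward search from 0.
    # Returns True exactly when k is reached as some finite n-th successor
    # of 0 within `steps`; for anything off the chain it gives up.
    n = 0
    for _ in range(steps):
        if k == n:
            return True
        n = n + 1  # S(n)
    return False

assert reachable_from_zero(42)
assert not reachable_from_zero(-1)    # a stand-in for an element off 0's chain
assert not reachable_from_zero(3.14)  # "pi" is never reached by counting up
```

For a genuinely nonstandard element the search would simply never terminate, which is why “is in the successor chain of 0” is computably checkable only in the positive direction, and not definable as a first-order property.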

• In that case, P(o) is true, and P(k)->P(Pk) is equally provable.

• No...?

The above basically says that `P(k)` is “is within the successor chain of 0”. Note that the base case is `k = 0`, not `k = o`. Anyway, the point is, since such a property is possible (including only the objects that are some n-successor of 0), the axiom of induction implies that numbers that follow 0 are the only numbers.

ETA: Reading your parent post again, the problem is it’s impossible to have an `o` that has every property `0` does. As a demonstration, `Z(k) := (k = 0)` is a valid property. It’s true only of `0`. `R(k, 0)` is similarly a property that is only true of `0`, or `SS..0`.

• I’m having a hard time parsing what you are trying to say here.

That breaks the single chain; what prohibits the finite loop, or the infinite chain?

A finite loop of size one is prohibited by ∀x (x ≠ S(x)); this is provable from the Peano axioms (and thus holds true in all models of the Peano axioms), a finite loop of size two is prohibited by ∀x (x ≠ S(S(x))); this is provable from the Peano axioms, a finite loop of size three is prohibited by ∀x (x ≠ S(S(S(x)))); this is provable from the Peano axioms, and so on.

I don’t see what property is true of zero and all of its successors that cannot be true of an infinite chain, every member of which is a successor to a different number.

Consider the chains 0, 1, 2, 3, … and 0|, 1|, 2|, 3|, …. Let P(n) be “n is a number without a ‘|’.” This is true for every number in the first chain (“zero and all of its successors”), and not true in the other chain. Thus the set of numbers in either of those two chains is not a model of the Peano axioms. This is exactly what Eliezer said, but I thought it might be helpful for you to see it visually.

Something you should note is that properties are fundamentally extensionally defined in mathematics. So Q(n) := “n is a natural number less than 3” is actually, fundamentally, the collection of numbers {0,1,2}. It does not need a description. Math is structurally based. The names we give numbers and properties don’t matter; it is the relationships between them that we care about. So if we define the same rules about the “magically change” operation and the starting number “fairy” that we do about the “successor” operation and the starting number “zero” in the Peano axioms, you are referring to the same structure. Read the beginning of Truly Part of You.

As for the proof of this fact, it’s unfortunately not trivial. A mathematical logic professor at your local college could probably explain it to you. Also, see this page. The proof is incomplete, but you could check the reference there, if you’re curious.

I would also ask if it is any different to make the change “IFF two numbers have the same successor, they are the same number.”

“Only if two numbers have the same successor are they the same number.” This is a tautology in logic. If two numbers are really the same, everything about them must be the same, including their successors (and it is kind of wrong to say “two” in the first place). The other direction of implication is non-trivial, and thus it must be included in the Peano axioms.

• That proof only applies to well ordered sets. The set [… -a, o, a, …] has no least element, and is therefore not well ordered.

• The integers with that ordering are not a model of the Peano axioms! (By the way, the integers can be well-ordered: 0, −1, 1, −2, 2, …)

Read the Wikipedia article on the Peano axioms up through the first paragraph in the models section. Do you disagree with the article’s statement that “Any two models of the Peano axioms are isomorphic.”? If so, why? It has been proven by at least Richard Dedekind and Seth Warner; it also seems intuitively true to me. Given that, I’d need strong evidence to change my position.

If you don’t disagree with that, what is it, specifically, that Eliezer said in this post that you disagree with? I had a lot of trouble understanding your original top-level comment, but it seemed like you had a contrarian tone. If I misinterpreted your original comment, forgive me; I have no criticism of it except for that it was not clearly worded. But I think Eliezer’s post was very well-written and useful.

• Specifically: I don’t think that the listed constraints are sufficient to uniquely describe exactly the set of natural numbers. If the property ‘Is either zero or in the succession chain of zero’ is allowed as a property of a number, then the roundabout description “every property true at 0, and for which P(Sx) is true whenever P(x) is true, is true of all numbers” is logically identical with “There are no chains that do not start with zero.”

Why not simply state that directly as part of your definition, since it is the intent? It can even be done in first-order logic with a little bit of work somewhat above my level.

EDIT: I hadn’t considered that every finite and countably infinite set could be well-ordered by fiat. It seems counterintuitive that any number could be defined to be the least element, but there’s nothing in definition of well-ordered that prevents selecting an arbitrary least element.
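The “well-ordered by fiat” point can be made concrete: order the integers by their position in the sequence 0, −1, 1, −2, 2, … from the parent comment. A Python sketch (the function names are mine):

```python
def zigzag(n):
    # The integer at position n of the well-order 0, -1, 1, -2, 2, -3, 3, ...
    return n // 2 if n % 2 == 0 else -(n + 1) // 2

def position(z):
    # Inverse map: where integer z sits in that order.
    return 2 * z if z >= 0 else -2 * z - 1

assert [zigzag(n) for n in range(7)] == [0, -1, 1, -2, 2, -3, 3]
assert all(position(zigzag(n)) == n for n in range(100))
```

Declare “x comes before y iff position(x) < position(y)”: since positions are natural numbers, every nonempty set of integers has a least element under this order, so it is a well-order even though the usual ‘<’ on the integers is not.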

• I think we’re in agreement with each other. The integers are not well-ordered by ‘<’ as ‘<’ is traditionally interpreted; they are well-ordered by other, different relations (that can be formalized in logic); see the Wikipedia article on well-ordering.

The reason that we don’t start out with “There are no chains that do not start with zero.” is, I speculate, at least two-fold:

1. In Peano’s original axioms, he used the other formulation. So there is a precedent.

2. Peano’s formulation can be expressed easily in first-order Peano arithmetic. Peano’s formulation describes what is going on within the system; whereas “there are no chains that do not start with zero” is discussing the structure of the system from the outside. They do come out equivalent (I think) in second-order logic, but Peano’s formulation is the one that is easily expressed in first-order logic.

• “All natural numbers can be generated by iterating the successor function on zero.” “The smallest set k which includes zero and the successor of every member of itself is the set of natural numbers.”

I think that both of those formulations can be phrased in first-order logic...

• Properties are sets of numbers, so without getting into technicalities, you need second-order logic to talk about the smallest set such that whatever (since you need to quantify over all candidate sets).

Similarly, to say that you can get x by iterating the successor function on zero requires second-order logic. First-order logic isn’t even sufficient to define addition without adding axioms for what addition does.

• If you use set theory, then yes. Usually, however, mathematicians don’t want to have to worry about things like the axiom of regularity when all they wanted to talk about in the first place was the natural numbers!

• You can’t talk about what the natural numbers are and are not without some form of set theory.

“0 is the only number which is not the successor of any number” requires set theory to be meaningful.

• You can’t talk about what the natural numbers are and are not without some form of set theory.

But you can talk about some of the properties they have, and quite often that is all we care about.

Also, the stronger your system is, the more likely it is that your formulation is inconsistent (and if the system is inconsistent, you’re definitely not describing anything meaningful). I’m much more confident that first-order Peano arithmetic is consistent than I am that first-order ZFC set theory is consistent.

• No. You can rephrase that as: “Every natural number is either 0 or the successor of some number”.

• What does “Every x” mean in the absence of set theory?

• Enjoy A Problem Course in Mathematical Logic. Read Definition 6.4, Definition 6.5, and Definition 6.6 (Edit: They are on PDF pages 47-50, book pages 35-38.). It means that, within each model of the axioms, it is the case that every object in the model has the specified property. The natural numbers happen to be a model of first-order Peano arithmetic.

Let me ask you what “every x” means in first-order ZFC set theory. Answer carefully—it has a countable model.

• I think this is the sort of case in which it is useful to do some hand-waving to indicate that you’ve realized your reasoning was wrong but have additional reasoning to back up your conclusion. Otherwise it can appear that you’ve realized your reasoning was wrong but want to stick to the conclusion anyway, and therefore need to come up with new reasoning to support it.

Anyway, consider the following: let Pn(x) mean “x is the nth successor of 0” (where the 0th successor of a number is itself). Then, by induction, every number x has the property “there exists some n such that Pn(x)”.

I don’t think that change has an effect; you’re just adding “if two numbers are the same number, they have the same successor”, right? Which is already true.

• Is zero the zeroth successor of zero, by that property? Is that compatible with zero not being a successor of any number?

• As I said, the “zeroth successor” of a number is itself. That is, zero is the result of applying the successor function to it zero times. You have to apply a function at least once in order to have applied the function (and thus obtained a result of applying it, e.g., calculated a successor).

If you don’t like the term, you can think of it this way:

• P0(x): x = 0

• P1(x): x = S0

• P2(x): x = SS0

and so forth.
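Put another way, each Pn is an ordinary first-order formula, definable by recursion in the metalanguage; a sketch:

```latex
P_0(x) \;:\equiv\; x = 0
\qquad
P_{n+1}(x) \;:\equiv\; \exists y\,\bigl(P_n(y) \land x = S y\bigr)
```

Note that the “there exists some n” ranges over these formulas from outside the object language, so it is not itself a single first-order statement.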

• S0 != 0

Suppose Sx != x. If SSx = Sx, then Sx is the successor of both x and Sx, so by injectivity of the successor function x = Sx. This contradicts our assumption, so SSx != Sx.

Thus, by “And every property true at 0, and for which P(Sx) is true whenever P(x) is true, is true of all numbers.”, for no number x is Sx=x.
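For reference, this is an instance of the quoted induction principle with the property:

```latex
\varphi(x) \;:\equiv\; S x \neq x,
\qquad
\bigl(\varphi(0) \land \forall x\,\bigl(\varphi(x) \to \varphi(S x)\bigr)\bigr) \to \forall x\,\varphi(x)
```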

Is there something wrong with this reasoning?

• EY is talking from a position of faith that infinite model theory and second-order logic are good and reasonable things.

It is possible to instead start from a position of doubt that infinite model theory and second-order logic are good and reasonable things (based on my memory of having studied in college whether model theory and second-order logic can be formalized within Zermelo-Fraenkel set theory, and what the first-order-ness of Zermelo-Fraenkel has to do with it).

We might be fine with a proof-theoretic approach, which starts with the same ideas “zero is a number”, “the successor of a number is a different number”, but then moves to a proof-theoretic rule of induction, something like: “I’d be happy to say ‘All numbers have such-and-such property’ if there were a proof that zero has that property and also a proof that if a number has that property, then its successor has it too.”
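Spelled out as an inference rule (schematically, one instance for each formula φ in the language), that rule of induction might be sketched as:

```latex
\frac{\;\vdash \varphi(0) \qquad\quad \vdash \forall x\,\bigl(\varphi(x) \to \varphi(S x)\bigr)\;}{\vdash \forall x\,\varphi(x)}
```

This stays entirely at the level of proofs, with no appeal to models, finite or infinite.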

We don’t need to talk about models at all—in particular we don’t need to talk about infinite models.

Second-order arithmetic is sufficient to get what EY wants (a nice pretty model universe), but I have two objections. First, it is too strong: the first sufficient hammer you find in mathematics is rarely the one you should end up using. Second, the goal of a nice pretty model universe presumes a stance of faith in (infinite) model theory, but the infinite model theory is not formalized. If you do formalize it, then your formalization will have alternative “undesired” interpretations (by Löwenheim-Skolem).

• EY is talking from a position of faith that infinite model theory and second-order logic are good and reasonable things.

I think this is a fallacy of gray. Mathematicians have been using infinite model theory and second-order logic for a while, now; this is strong evidence that they are good and reasonable.

Edit: Link formatting, sorry. I wish there was a way to preview comments before submitting....

• Second-order logic is not part of standard, mainstream mathematics. It is part of a field that you might call mathematical logic or “foundations of mathematics”. Foundations of a building are relevant to the strength of a building, so the name implies that foundations of mathematics are relevant to the strength of mainstream mathematics. A more accurate analogy would be the relationship between physics and philosophy of physics—discoveries in epistemology and philosophy of science are more often driven by physics than the other way around, and the field “philosophy of physics” is a backwater by comparison.

As is probably evident, I think the good, solid mathematical logic is intuitionist and constructive and higher-order and based on proof theory first and model theory only second. It is easy to analogize from their names to a straight line between first-order, second-order, and higher-order logic, but in fact they’re not in a straight line at all. First-order logic is mainstream mathematics, second-order logic is mathematical logic flavored with faith in the reality of infinite models and set theory, and higher-order logic is mathematical logic that is (usually) constructive and proof-theoretic and built with an awareness of computer science.

• Your view is not mainstream.

• To an extent, but I think it’s obvious that most mathematicians couldn’t care less whether or not their theorems are expressible in second-order logic.

• Yes, because most mathematicians just take SOL at face value. If you believe in SOL and use the corresponding English language in your proofs—i.e., you assume there’s only one field of real numbers and you can talk about it—then of course it doesn’t matter to you whether or not your theorem happens to require SOL taken at face value, just like it doesn’t matter to you whether your proof uses ~~P->P as a logical axiom. Only those who distrust SOL would try to avoid proofs that use it. That most mathematicians don’t care is precisely how we know that disbelief in SOL is not a mainstream value. :)

• The standard story is that everything mathematicians prove is to be interpreted as a statement in the language of ZFC, with ZFC itself being interpreted in first-order logic. (With a side-order of angsting about how to talk about e.g. “all” vector spaces, since there isn’t a set containing all of them—IMO there are various good ways of resolving this, but the standard story considers it a problem; certainly in so far as SOL provides an answer to these concerns at all, it’s not “the one” answer that everybody is obviously implicitly using.)

So when they say that there’s only one field of real numbers, this is supposed to mean that you can formalize the field axioms as a ZFC predicate about sets, and then prove in ZFC that between any two sets satisfying this predicate, there is an isomorphism. The fact that the semantics of first-order logic don’t pin down a unique model of ZFC isn’t seen as conflicting with this; the mathematician’s statement that there is only one complete ordered field (up to isomorphism) is supposed to desugar to a formal statement of ZFC, or more precisely to the meta-assertion that this formal statement can be proven from the ZFC axioms.

Mathematical practice seems to me more in line with this story than with yours, e.g. mathematicians find nothing strange about introducing the reals through axioms and then talk about a “neighbourhood basis” as something that assigns to each real number a set of sets of real numbers—you’d need fourth-order logic if you wanted to talk about neighbourhood bases as objects without having some kind of set theory in the background. And people who don’t seem to care a fig about logic will use Zorn’s lemma when they want to prove something that uses choice, which seems quite rooted in set theory.

Now I do think that mathematicians think of the objects they’re discussing as more “real” than the standard story wants them to, and just using SOL instead of FOL as the semantics in which we interpret the ZFC axioms would be a good way to, um, tell a better story—I really like your post and it has convinced me of the usefulness of SOL—but I think if we’re simply trying to describe how mathematicians really think about what they’re doing, it’s fairer to say that they’re just taking set theory at face value—not thinking of set theory as something that has axioms that you formalize in some logic, but seeing it as as fundamental as logic itself, more or less.

• Um, I think when an ordinary mathematician says that there’s only one complete ordered field up to isomorphism, they do not mean, “In any given model of ZFC, of which there are many, there’s only one ordered field complete with respect to the predicates for which sets exist in that model.” We could ask some normal mathematicians what they mean to test this. We could also prove the isomorphism using logic that talked about all predicates, and ask them if they thought that was a fair proof (without calling attention to the quantification over predicates).

Taking set theory at face value is taking SOL at face value—SOL is often seen as importing set theory into logic, which is why mathematicians who care about it all are sometimes suspicious of it.

• Um, I think when an ordinary mathematician says that there’s only one complete ordered field up to isomorphism, they do not mean, “In any given model of ZFC, of which there are many, there’s only one ordered field complete with respect to the predicates for which sets exist in that model.” We could ask some normal mathematicians what they mean to test this.

The standard story, as I understand it, is claiming that models don’t even enter into it; the ordinary mathematician is supposed to be saying only that the corresponding statement can be proven in ZFC. Of course, that story is actually told by logicians, not by people who learned about models in their one logic course and then promptly forgot about them after the exam. As I said, I don’t agree with the standard story as a fair characterization of what mathematicians are doing who don’t care about logic. (Though I do think it’s a coherent story about what the informal mathematical English is supposed to mean.)

Taking set theory at face value is taking SOL at face value—SOL is often seen as importing set theory into logic, which is why mathematicians who care about it all are sometimes suspicious of it.

Is it a fair rephrasing of your point that what normal mathematicians do requires the same order of ontological commitment as the standard (non-Henkin) semantics of SOL, since if you take SOL as primitive and interpret the ZFC axioms in it, that will give you the correct powerset of the reals, and if you take set theory as primitive and formalize the semantics of SOL in it, you will get the correct collection of standard models? ’Cause I agree with that (and I see the value of SOL as a particularly simple way of making that ontological commitment, compared to say ZFC). My point was that mathematical English maps much more directly to ZFC than it does to SOL (there’s still coding to be done, but much less of it when you start from ZFC than when you start from SOL). For example, you earlier said that “[o]nly those who distrust SOL would try to avoid proofs that use it”, but you can’t really use ontological commitments in proofs; what you can actually use is notions like “for all properties of real numbers”, and many notions people actually use are ones more directly present in ZFC than SOL, like my example of quantifying over the neighbourhood bases (mappings from reals to sets of sets of reals).

• So to take your example of real numbers—if someone didn’t want to use SOL, they would still prove the same theorems, they would just end up proving that they are true for any Archimedean complete totally ordered field. In general, I think most mathematics (i.e. mathematics outside set theory and logic) is robust with respect to foundations: rarely is it the case that a change in axioms makes a proof invalid, it just means you’re talking about something slightly different. The idea of the proof is still preserved.

• I agree with this statement—and yet you did not contradict my statement that second-order logic is also not part of mainstream mathematics.

A topologist might care about manifolds or homeomorphisms—they do not care about foundations of mathematics—and it is not the case that only one foundation of mathematics can support topology. The weaker foundation is preferable.

• The last sentence is not obvious at all. The goal of mathematics is not to be correct a lot. The goal of mathematics is to promote human understanding. Strong axioms help with that by simplifying reasoning.

• If you assume A and derive B, you have not proven B but rather that A implies B. If you can instead assume a weaker axiom A' and still derive B, then you have proven that A' implies B, which is stronger because it will be applicable in more circumstances.

• In what “circumstances” are manifolds and homeomorphisms useful?

• If you were writing software for something intended to traverse the Interplanetary Transport Network, then you would probably use charts and atlases and transition functions, and you would study (symplectic) manifolds and homeomorphisms in order to understand those more-applied concepts.

If an otherwise useful theorem assumes that the manifold is orientable, then you need to show that your practical manifold is orientable before you can use it—and if it turns out not to be orientable, then you can’t use it at all. If instead you had an analogous theorem that applied to all manifolds, then you could use it immediately.

• There’s a difference between assuming that a manifold is orientable and assuming something about set theory. The phase space is, of course, only approximately a manifold. On a very small level it’s—well, something we’re not very sure of. But all the math you’ll be doing is an approximation of reality.

So some big macroscopic feature like orientability would be a problem to assume. Orientability corresponds to something in physical reality, and something that clearly matters for your calculation.

The axiom of choice or whatever set-theoretic assumption corresponds to nothing in physical reality. It doesn’t matter if the theorems you are using are right for the situation, because they are obviously all wrong, because they are about symplectic dynamics on a manifold, and physics isn’t actually symplectic dynamics on a manifold! The only thing that matters is how easily you can find a good-enough approximation to reality. More foundational assumptions make this easier, and do not impede one’s approximation of reality.

Note that physicists frequently make arguments that are just plain unambiguously wrong from a mathematical perspective.

• I understand your point—it’s akin to the Box quote “all models are wrong but some are useful”—when choosing among (false) models, choose the most useful one. However, it is not the case that stronger assumptions are more useful—of course stronger assumptions make the task of proving easier, but the task as a whole includes both proving and also building a system based on the theorems proven.

My primary point is that EY is implying that second-order logic is necessary to work with the integers. People work with the integers without using second-order logic all the time. If he said that he is only introducing second-order logic for convenience in proving and there are certainly other ways of doing it, and that some people (intuitionists and finitists) think that introducing second-order logic is a dubious move, I’d be happy.

My other claim that second-order logic is unphysical and that its unphysicality probably does ripple out to make practical tasks more difficult, is a secondary one. I’m happy to agree that this secondary claim is not mainstream.

• My primary point is actually that I don’t care if math is useful. Math is awesome. This is obviously an extremely rare viewpoint, but very common among mathematicians.

But I do agree with that quote, more or less. I think that potentially some models are true, but those models are almost certainly less useful for most purposes than the crude and easy to work with approximations.

I agree that second-order logic is not necessary to work with the integers. Second-order logic is necessary to pin down the integers and only the integers, however. Somewhat problematically, it’s not actually possible to fully work with second-order logic: its standard semantics admits no complete proof procedure.

What sort of practical tasks are you thinking of?

• Well, it’s strong evidence that mathematicians find these things useful for publishing papers.