# 0.999...=1: Another Rationality Litmus Test

People seemed to like my post from yesterday about infinite summations and how to rationally react to a mathematical argument you’re not equipped to validate, so here’s another in the same vein that highlights a different way your reasoning can go.

(It’s probably not quite as juicy of an example as yesterday’s, but it is one that I’m equipped to write about today so I figure it’s worth it.)

This example is somewhat more widely known and a bit more elementary. I won’t be surprised if most people already know the ‘solution’. But the point of writing about it is not to explain the math—it’s to talk about “how you should feel” about the problem, and how to rationally approach rectifying it with your existing mental model. If you already know the solution, try to pretend or think back to when you didn’t. I think it’s initially surprising to most people, whenever they learn it.

The claim: that 1 = 0.999… repeating (infinite 9s). (I haven’t found an easy way to put a bar over the last 9, so I’m using ellipses throughout.)

The questionable proof:

x = 0.9999...

10x = 9.9999… (everyone knows multiplying by ten moves the decimal over one place)

10x − x = 9.9999… − 0.9999…

9x = 9

x = 1
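One low-tech way to poke at this before trusting it: run the same algebra on finite truncations with exact rational arithmetic. This is a sketch of mine, not part of the original argument; it uses Python's `Fraction` to avoid floating-point noise.

```python
from fractions import Fraction

# x_n = 0.99...9 with n nines is exactly 1 - 10**-n, so the 10x trick can
# be checked on truncations, where the error term is visible.
shortfalls = []
for n in range(1, 10):
    x = 1 - Fraction(1, 10**n)
    assert 10 * x - x == 9 - Fraction(9, 10**n)  # 9x misses 9 by 9/10**n
    shortfalls.append(9 - (10 * x - x))
# the shortfall 9/10**n shrinks toward 0; "infinitely many 9s" is exactly
# the claim that it vanishes entirely
```

The finite version never quite gets 9x = 9; the whole question is what happens to that ever-shrinking shortfall in the limit.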

People’s response when they first see this is usually: wait, what? an infinite series of 9s equals 1? no way, they’re obviously different.

The litmus test is this: what do you think a rational person should do when confronted with this argument? How do you approach it? Should you accept the seemingly plausible argument, or reject it (as with yesterday’s example) as “no way, it’s more likely that we’re somehow talking about different objects and it’s hidden inside the notation”?

Or are there other ways you can proceed to get more information on your own?

One of the things I want to highlight here is related to the nature of mathematics.

I think people have a tendency to think that, if they are not well-trained students of mathematics (at least at the collegiate level), then *rigor* or *precision* involving numbers is out of their reach. I think this is *definitely not the case*: you should not be afraid to attempt to be precise with numbers even if you only know high school algebra, and you should especially not be afraid to demand precision, even if you don’t know the correct way to implement it.

Particularly, I’d like to emphasize that mathematics as a *mental discipline* (as opposed to an academic field) basically consists of “the art of making correct statements about patterns in the world” (where numbers are one of the patterns that appears everywhere you have things you can count, but there are others). This sounds suspiciously similar to rationality—which, as a practice, might be “about winning”, but as a mental art is “about being right, and not being wrong, to the best of your ability”. More or less. So mathematical thinking and rational thinking are very similar, except that we categorize rationality as being primarily about decisions and real-world things, and mathematics as being primarily about abstract structures and numbers.

In many cases in math, you start with a structure that you don’t understand, or even know how to understand, precisely, and start trying to ‘tease’ precise results out of it. As a layperson you might have the same approach to arguments and statements about elementary numbers and algebraic manipulations, like in the proof above, and you’re just as in the right to *attempt to find precision* in them as a professional mathematician is when they perform the same process on their highly esoteric specialty. You also have the bonus that you can go look for the right answer to see how you did, afterwards.

All this to say, I think any rational person should be willing to ‘go under the hood’ one or two levels when they see a proof like this. It doesn’t have to be rigorous. You just need to do some poking around if you see something surprising to your intuition. Insights are readily available if you look, and you’ll be a stronger rational thinker if you do.

There are a few angles that I think a rational but untrained-in-math person can think to take straightaway.

You’re shown that 0.999... = 1. If this is a surprise, that means your model of what these terms mean doesn’t jibe with how they behave in relation to each other, or that the proof was fallacious. **You can immediately conclude that it’s either:**

a) **true without qualification**, in which case your mental model of what the symbols “0.999...”, “=”, or “1” mean is suspect

b) **true *in a sense***, but it’s hidden behind a deceptive argument (like in yesterday’s post), and even if the sense is more technical and possibly beyond your intuition, it should be possible to verify if it exists—either through careful inspection, or turning to a more expert source, or just verifying that options (a) and (c) don’t work

c) **false**, in which case there should be a logical inconsistency in the proof, though it’s not necessarily true that you’re equipped to find it

**Moreover, (a) is probably the default, by Occam’s Razor**. It’s more likely that a seemingly correct argument is correct than that there is a more complicated explanation, such as (b), “there are mysterious forces at work here”, or (c), “this correct-seeming argument is actually wrong”, without other reasons to disbelieve it. The only evidence against it is basically that it’s surprising. But how do you test (a)?

Note there are plenty of other ‘math paradoxes’ that fall under (c) instead: for example, those ones that secretly divide by 0 and derive nonsense afterwards. (a=b ; a^2=ab ; a^2-b^2=ab-b^2 ; (a+b)(a-b)=b(a-b) ; a+b=b ; 2a = a ; 2=1). But the difference is that their conclusions are *obviously* false, whereas this one is only *surprising* and counterintuitive. 1=2 involves two concepts we know very well. 0.999...=1 involves one concept we know well and one that likely has a feeling of sketchiness about it; we’re not used to having to think carefully about what a construction like 0.999… means, and we should immediately realize that when doubting the conclusion.
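For contrast, the divide-by-zero paradox can be run with concrete numbers, which makes the bad step visible. This is my own illustration, not part of the original post:

```python
# The 1 = 2 "proof" with a = b = 1: every displayed equation is true,
# because both sides are 0, right up until (a - b) is cancelled.
a = b = 1
steps_hold = [
    a**2 == a * b,
    a**2 - b**2 == a * b - b**2,
    (a + b) * (a - b) == b * (a - b),  # 0 == 0: still true
]
# cancelling (a - b) from both sides means dividing by 0; the conclusion fails
conclusion_holds = (a + b == b)
```

Every line up to the cancellation checks out numerically, which is exactly why the fallacy is sneaky: the one unlicensed step hides among true statements.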

Here are a few angles you can take to testing (a):

**1. The “make it more precise” approach**: Drill down into what you mean by each symbol. In particular, it seems very likely that the mystery is hiding inside what “0.999...” means, because that’s the one that seems complicated and liable to be misunderstood.

What does 0.999… infinitely repeating actually mean? It seems like it’s “the limit of the series of finite numbers of 9s”, if you know what a limit is. It seems like it might be “the number larger than every number of the form 0.abcd..., consisting of infinitely many digits (optionally, all 0 after a point)”. That’s awfully similar to 1, also, though.

A very good question is “what kinds of objects are these, anyway?” The rules of arithmetic generally assume we’re working with real numbers, and the proof seems to hold for those in our customary ruleset. So what’s the ‘true’ definition of a real number?

Well, we can look it up, and find that it’s fairly complicated and involves identifying reals with sets of rationals in one or another specific way. If you can parse the definitions, you’ll find that one definition is “a real number is a Dedekind cut of the rational numbers”, that is, “a partition of the rational numbers into two sets A and B such that A is nonempty and closed downwards, B is nonempty and closed upwards, and A contains no greatest element”, and from that it Can Be Seen (tm) that the two symbols “1” and “0.999...” both refer to the *same* partition of Q, and therefore are equivalent as real numbers.

**2. The “functional” approach**: if 0.999...=1, then it should behave the same as 1 in all circumstances. Is that something we can verify? Does it survive obvious tests, like other arguments of the same form?

Does 0.999… always act the same way that 1 does? It appears to act the same in the algebraic manipulations that we saw, of course. What are some other things to try?

We might think to try: 1 − 0.9999… = 1 − 1 = 0, but also seems to equal 0.000...0001, if that’s valid: an ‘infinite decimal that ends in a 1’. So those must be equivalent also, if that’s a valid concept. We can’t find anything to multiply 0.000...0001 by to ‘move the decimal’ all the way into the finite decimal positions, seemingly, because we would have to multiply by infinity and that wouldn’t prove anything because we already know such operations are suspect.

I, at least, cannot see any reason when doing math that the two *shouldn’t* be the same. It’s not proof, but it’s evidence that the conclusion is probably OK.

**3. The “argument from contradiction” approach:** what would be true if the claim were false?

If 0.999… isn’t equal to 1, what does that entail? Well, let a=0.999… and b=1. We can, according to our familiar rules of algebra, construct the number halfway between them: (a+b)/2, alternatively written as a+(b-a)/2. But our intuition for decimals doesn’t seem to let there be a number between the two. What would it be -- 0.999...9995? “capping” the decimal with a 5? (yes, we capped a decimal with a 1 earlier, but we didn’t know if that was valid either). What does that imply 0.999… − 0.999...9995 should be? 0.000...0004? Does that equal 4*0.000...0001? None of this math seems to be working either.
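The midpoint idea can at least be played with on finite truncations, where everything is well-defined. A sketch of mine, under that framing, not a rigorous argument:

```python
from fractions import Fraction

# If 0.999... and 1 were distinct reals a < b, the midpoint (a + b) / 2
# would sit strictly between them. For truncations a_n = 1 - 10**-n the
# midpoint exists, but its distance to 1 is half the gap, which shrinks to 0.
gaps = []
for n in range(1, 8):
    a = 1 - Fraction(1, 10**n)
    mid = (a + 1) / 2
    assert a < mid < 1                     # a genuine in-between number...
    gaps.append(1 - mid)                   # ...whose distance to 1 halves the gap
```

For the truncations there is always room in between; the claim 0.999… = 1 amounts to saying that in the limit the room vanishes.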

As long as we’re not being rigorous, this isn’t “proof”, but it is a compelling reason to think the conclusion might be right after all. If it’s not, we get into things that seem considerably more insane.

**4. The “reexamine your surprise” approach**: how bad is it if this is true? Does that cause me to doubt other beliefs? Or is it actually just as easy to believe it’s true as not? Perhaps I am just biased against the conclusion for aesthetic reasons?

How bad is it if 0.999...=1? Well, it’s not like yesterday’s example with 1+2+3+4+5… = −1/12. It doesn’t *utterly defy* our intuition for what arithmetic is. It says that one object we never use is equivalent to another object we’re familiar with. I think that, since we probably have no reason to strongly believe anything about what an infinite sum of 9/10 + 9/100 + 9/1000 + … should equal, it’s perfectly palatable that it might equal 1, despite our initial reservations.

(I’m sure there are other approaches too, but this got long with just four so I stopped looking. In real life, if you’re not interested in the details there’s always the very legitimate fifth approach of “see what the experts say and don’t worry about it”, also. I can’t fault you for just not caring.)

By the way, the conclusion that 0.999...=1 is completely, unequivocally true in the real numbers, basically for the Dedekind cut reason given above, which is the commonly accepted structure we are using when we write out mathematics if none is indicated. It is possible to find structures where it’s not true, but you probably wouldn’t write 0.999… in those structures anyway. It’s not like 1+2+3+4+5...=-1/12, for which claiming truth is wildly inaccurate and outright deceptive.

But note that **none of these approaches are out of reach to a careful thinker, even if they’re not a mathematician**. Or even mathematically inclined.

So it’s not required that you have the finesse to work out detailed mathematical arguments—certainly the definitions of real numbers are too precise and technical for the average layperson to deal with. The question here is whether you take math statements at face value, or disbelieve them automatically (you would have done fine yesterday!), or pick the more rational choice—breaking them down and looking for low-hanging ways to convince yourself one way or the other.

When you read a surprising argument like the 0.999...=1 one, does it occur to you to break down ways of inspecting it further? To look for contradictions, functional equivalences, second-guess your surprise as being a run-of-the-mill cognitive bias, or seek out precision to realign your intuition with the apparent surprise in ‘reality’?

I think it should. Though I am pretty biased because I enjoy math and study it for fun. But—if you subconsciously treat math as something that other people do and you just believe what they say at the end of the day, why? Does this cause you to neglect to rationally analyze mathematical conclusions, at whatever level you might be comfortable with? If so, I’ll bet this isn’t optimal and it’s worth isolating in your mind and looking more closely at. Precise mathematical argument is essentially just rationalism applied to numbers, after all. Well—plus a lot of jargon.

(Do you think I represented the math or the rational arguments correctly? is my philosophy legitimate? Feedback much appreciated!)


I agree that a careful thinker confronted with this puzzle for the first time should eventually conclude that the crux is what exactly the expression “0.999...” actually means. At this point, if you don’t know enough math to give a rigorous definition, I think a reasonable response is “I thought I knew what it meant to have an infinite number of 9s after the decimal point, but maybe I don’t, and absent me actually learning the requisite math to make sense of that expression I’m just going to be agnostic about its value.”

Here’s an argument in favor of doing that. Consider the following proof, nearly identical to the one you present. Let’s consider the number x = …999; in other words, now we have infinitely many 9s to the *left* of the decimal point. What is this number? Well,

10x = …9990

x − 10x = 9

-9x = 9

x = −1.
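One hedged way to make this concrete is to work modulo 10^n, which keeps only the last n digits (a notion of "closeness" for left-infinite decimals that I'm assuming here; it's the p-adic story mentioned further down the thread):

```python
# Check the ...999 = -1 claim digit-by-digit: modulo 10**n we see only the
# last n digits, and ...999 behaves exactly like -1 there.
checks = []
for n in range(1, 10):
    nines = 10**n - 1                                # last n digits of ...999
    checks.append((nines + 1) % 10**n == 0)          # ...999 + 1 ends in n zeros
    checks.append((nines - 10 * nines) % 10**n == 9) # x - 10x = 9, as in the proof
```

Every step of the proof holds on the last n digits for every n, which is the sense in which the manipulation is consistent, whatever one thinks of the notation.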

There are a couple of reasonable responses you could have to this argument. Two of them require knowing some math: one is enough math to explain why the expression …999 describes the limit of a sequence of numbers that has no limit, and one is knowing even more math than that, so you can explain in what sense it *does* have a limit (the details here resemble the details of 1 + 2 + 3 + … but are technically easier). I think in the absence of the requisite math knowledge, seeing this argument side by side with the original one makes a pretty strong case for “stay agnostic about whether this notation is meaningful.”

And on the third hand, I can’t resist saying one more thing about infinite sequences of decimals to the left. Consider the following sequence of computations:

5^2 = 25

25^2 = 625

625^2 = 390625

0625^2 = 390625

90625^2 = 8212890625

890625^2 = 793212890625

It sure looks like there is an infinite decimal going to the left, x, with the property that x^2 = x, and which ends …890625. Do you agree? Can you find, say, 6 more of its digits, assuming it exists? What’s up with that? Is there another x with this property? (Please don’t spoil the answer if you know what’s going on here without some kind of spoiler warning or e.g. rot13.)

My first guess is that this was caused by five being half of ten, and so if we wanted to have the same property in hexadecimal we would instead be looking at the progression based on eight. But that didn’t work, and so now I’m suspecting that it’s also important that five is odd. (It works if you start with three in base six, which makes me guess those are the primary requirements, but it might also be important to be prime.) (Looking at the progression starting with x=nine in base eighteen, that looks right, and base fourteen provides more confirmation.)

Actually, the most important number-theoretic property that drives this phenomenon is the fact that ten has more than one prime factor. It can’t happen in a prime or prime power base (and these are basically the same thing for these purposes anyway). The rule describing which initial digits make things work is more complicated; for starters, try six in base ten, then try base fifteen.

If you want to look up more, the keyword is p-adic numbers. Here we’re working in a number system called the ten-adic numbers. The ten-adic numbers form a commutative ring, which basically means you can add and multiply them and the usual laws of algebra you’re familiar with will apply. (This is something you can verify for yourself: that it always makes sense to add and multiply infinite decimals to the left. You just keep carrying further and further to the left.) But unlike the real numbers, they don’t form a field, which means you can’t always divide by a nonzero ten-adic number.
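A quick way to see the ring-but-not-field behavior for yourself, assuming Python 3.8+ for modular inverses via three-argument `pow`:

```python
# Ten-adically, arithmetic on the last n digits is arithmetic modulo 10**n.
# You can always add and multiply, but you can only divide by numbers
# sharing no prime factor with 10.
mod = 10**6
inv3 = pow(3, -1, mod)            # "1/3" exists: an integer whose triple ends ...000001
assert inv3 * 3 % mod == 1
try:
    pow(2, -1, mod)               # 2 divides 10, so no ten-adic 1/2 exists
    has_half = True
except ValueError:
    has_half = False
```

The inverse of 3 here, 666667, is just the tail of the ten-adic expansion of 1/3 (compare the …333 discussion below); for 2 no such tail exists, which is the failure of "field".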

My gut response (I can’t reasonably claim to know math above basic algebra) is:

Infinite sequences of numbers to the right of the decimal point are in some circumstances an artifact of the base. In base 3, 1/3 is 0.1 and 1/10 is 0.00220022..., but 1/10 “isn’t” an infinitely repeating decimal and 1/3 “is”—in base 10, which is what we’re used to. So, heuristically, we should expect that some infinitely repeating representations of numbers are equal to some representations that aren’t infinitely repeating.

If 0.999… and 1 are different numbers, there’s nothing between 0.999… and 1, which doesn’t jibe with my intuitive understanding of what numbers are.

The integers don’t run on a computer processor. Positive integers can’t wrap around to negative integers. Adding a positive integer to a positive integer will always give a positive integer.

0.999… is 0.9 + 0.09 + 0.009 etc, whereas …999.0 is 9 + 90 + 900 etc. They must both be positive ~~integers~~.

There is no finite number larger than …999.0. A finite number must have a finite number of digits, so you can compute …999.0 to that many digits and one more. So there’s nothing ‘between’ …999.0 and infinity.

Infinity is not the same thing as negative one.

All I have to do to accept that 0.999… is the same thing as 1 is to accept that some numbers can be represented in multiple ways. If I don’t accept this, I have to reject the premise that two numbers with nothing ‘between’ them are equal—that is, if 0.999… != 1, it’s not the case that for any x and y where x != y, x is either greater than or less than y.

But if I accept that …999.0 is equal to −1, I have to accept that adding together some positive numbers can give a negative number, and if I reject it, I just have to say that multiplying an infinite number by ten doesn’t make sense. (This feels like it’s wrong but I don’t know why.)

I think you mean “they must both be positive” here, but 0.999… isn’t guaranteed to be an integer a priori.

Aside from that, everything you’ve said is basically correct.

But… well, there’s something pretty interesting going on with infinite decimals to the left. For numbers that don’t exist they sure do have a lot of interesting properties. This might be worth a top-level post.

Interesting, I’ve never looked closely at these infinitely-long numbers before.

In the first example, It looks like you’ve described the infinite series 9(1+10+10^2+10^3...), which if you ignore radii of convergence is 9*1/(1-x) evaluated at x=10, giving 9/-9=-1. I assume without checking that this is what Cesaro or Abel summation of that series would give (which is the technical way to get to 1+2+3+4..=-1/12 though I still reject that that’s a fair use of the symbols ‘+’ and ‘=’ without qualification).

Re the second part: interesting. Nothing is immediately coming to mind.

Yes, this is one way of justifying the claim that −1 is the “right” answer, via analytic continuation of the function 9/(1 - x). But there’s another arguably more fun way involving making rigorous sense of infinite decimals going to the left in general.

Cesaro and Abel summation don’t assign a value to either of these series.

(answer: http://www.numericana.com/answer/p-adic.htm#integers)

This makes no sense, really.

I think a reasonable position is “I personally do not know how to make sense of this notation,” but are you claiming that “nobody knows how to make sense of this notation”? Would you be willing to make a bet to that effect, and at what odds, for how much money?

I am saying you cannot write …9990 - the decimal point, then an infinite number of 9s, and then the last zero!

Okay, perhaps you can in some other axiomatic system. But not for the ordinary real numbers.

Sure. What is different about the situation with 0.999...? How do you know that that is a sensible name for a real number?

0.999… is the limit of 9/10+9/100+9/1000+...

...9990 is what?

Thomas, I think you may be misunderstanding what Qiaochu_Yuan is trying to do here, which is *not* to argue that 0.999… actually might (for all *he* knows, or for all *you* know) be something other than 1, nor to argue that any particular other non-standard[1] construction actually might (for all he or you know) have a coherent meaning.

Rather, he is saying: someone who hasn’t come across this stuff before might reasonably not see any important difference between these constructions (if the difference seems obvious to you, it’s only because you *have* seen it before; it took mathematicians a long time to figure out how to think correctly about these things) and adopt parallel attitudes to them. This would be reasonable, and rational in any sense that doesn’t require something like logical omniscience.

It seems as if you are arguing against the first sort of claim (which I believe QY is not making) rather than the second (which I believe he is making).

[1] In the sense of “not usually used in mathematics”, not that of “model of real analysis with infinities, infinitesimals and a transfer principle”.

I see your angle now. Perhaps his angle, too.

What I am trying to achieve here is to present the current official math position. Not that I agree with it—it’s too generous to infinities for my taste, but who am I to judge—but I still want to explain this official math position.

It is possible that I am somehow wrong doing that, but still, I do try.

What I am basically saying is that because there is no finite positive real epsilon, such that 1-0.9999… would be equal to that epsilon, therefore those two should be equal.

If they weren’t equal, there would be such a positive epsilon, which would be equal to their difference. But there isn’t. If you postulate one such epsilon, a FINITE number of 9s already yields a smaller difference—therefore contradicting your assumption.

This is the official math position as I understand it. I might be wrong about that, but I don’t think I am.
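The epsilon argument can be sketched concretely (my illustration, not from the comment): for any proposed gap of 10^-k, a finite run of 9s is already closer to 1 than that.

```python
from fractions import Fraction

# For any proposed positive gap epsilon = 10**-k, the truncation with k+1
# nines is already closer to 1 than epsilon, so no such gap can equal
# 1 - 0.999...
def beats_epsilon(k):
    epsilon = Fraction(1, 10**k)
    a = 1 - Fraction(1, 10**(k + 1))   # 0.99...9 with k+1 nines
    return (1 - a) < epsilon

results = [beats_epsilon(k) for k in range(1, 8)]
```

Since 0.999… is at least as close to 1 as every truncation, the difference is smaller than every positive epsilon, hence zero.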

Qiaochu_Yuan is a mathematician and I’m quite sure he’s familiar with the “current official math position”. I don’t think your presentation of it is wrong, but I think it’s *unnecessary* in this particular discussion :-). When you say there are no real numbers between 0.999… and 1 and therefore the two are equal, you are not disagreeing with QY but with a hypothetical person he’s postulated, whose knowledge of mathematics is much less than either yours or QY’s.

I was able to make sense of this argument through the (rather unsophisticated) reasoning that 0.333… = 1/3, and multiplying both sides by three gives 0.999… = 1. (I’m not sure if this actually adds anything, but it was how I made myself believe the validity of the argument.)

But how do you know that 0.333… = 1/3? (And that multiplying an infinite decimal by 3 corresponds to multiplying each of its digits by 3?)

In the spirit of my comment, consider the analogous argument for infinite decimals to the left. Let x = …333. Then

3x = …999 = −1 (we established this earlier)

so x = −1/3. Are you satisfied with that?

Thank you for the reply! I get that 1/3= 0.333… from just dividing 1 by 3, but I do have a lack of understanding for what it means to multiply an infinite decimal by some integer. I appreciate the explanation with the decimals to the left!

This looks like a candidate for the not-yet-existing book “The Simple Math of Everything”. But the explanation would have to be the real explanation, involving how we construct the “real numbers” and why we construct them that way.

Cool, another one! I’m supposed to be sleeping now rather than working, so I can engage with this.

Infinity is weird, and it makes math weird. I think a fuzzy version of this belief is pretty widespread—look what you get when you do an image search for “divide by zero”, for example. For me, and I suspect for a lot of people with *very little* general math knowledge, “infinity” is a stop sign. Inquiry ends, shoulders are shrugged, hands are thrown up. “Of course it doesn’t appear to make sense—it’s got infinity in it!”

I don’t remember where I got this notion but it must have been early, because I remember seeing a version of the “disguise a division by zero > 1=2” trick in a book (Fermat’s Last Theorem by Simon Singh, if anyone’s interested) when I was about 14 and being baffled by it, and going over and over it trying to find the mistake. When I gave up and read on, and saw the explanation of how one of the canceled terms in the equation was zero, I was instantly satisfied. “Oh, *of course*. It divides by zero, which is a sneaky way of introducing infinity to the mix—so naturally the result makes no sense.”

This is one of those situations where a little incomplete knowledge is actually worse than none—a person who hadn’t ever heard about the infinity-makes-everything-weird “rule” could see something like 0.999… = 1 and keep digging, instead of saying “yeah, that’s infinity for you, what can you do”.

The idea that infinity is some sort of magical spell that you can cast upon “real” math and turn it into a frog (using real in the everyday sense, not the math-sense) is obviously an irrational thought-stopper. It means you could present a *false statement* to me and I *wouldn’t question it* so long as infinity was there to point to as the culprit.

(If you’re able to quickly formulate an example of a superficially math-y looking proposition involving infinity that’s actually total BS, that would be awesome—I could use it in future conversations about the topic.)

By the way, I’m not talking about some version of me in the distant past—I realized that I use “infinity makes everything weird” as a thought-terminating cliché *five minutes ago*. I didn’t realize I was exempting mathematics from the same sort of bias-questioning rationality I try to apply to everything else until you pointed it out.

So, thanks for that—I still may not understand why 0.999… = 1, or how dividing by zero leads to results like 1=2, but at least from now on I won’t let a non-answer like “infinity did it!” kill my curiosity.

Infinities are okay if they come with a definition of convergence. For example, we can say that an infinite sequence of real numbers x1, x2, x3… “converges” to a real number y if every interval of the real line centered around y, no matter how small, contains all but finitely many elements of the sequence. For example, the sequence 1, 1/2, 1/3, 1/4… converges to 0, because every interval centered around 0 contains all but finitely many of 1, 1/2, 1/3, 1/4… Some sequences don’t converge to anything, like 0, 1, 0, 1..., but it’s an easy exercise to prove that no sequence can converge to two different values at once.

Now the only sensible way to understand 0.999… is to define it as whatever value 0.9, 0.99, 0.999… converges to. But that’s obviously 1 and that’s the end of the story for people who understand math.

You can use the same procedure for infinite sums. x1+x2+x3+… can be defined as whatever value x1, x1+x2, x1+x2+x3… converges to. For example, 1+1/2+1/4+1/8+… = 2, because the sequence of partial sums is 2-1, 2-1/2, 2-1/4, 2-1/8, … and converges to 2.
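Both examples can be verified mechanically with exact arithmetic; here is a sketch of mine, not part of the comment:

```python
from fractions import Fraction

# Partial sums of 9/10 + 9/100 + ... : the distance to 1 shrinks by a
# factor of 10 at each step, so every interval around 1 eventually
# contains all remaining partial sums.
s = Fraction(0)
for n in range(1, 8):
    s += Fraction(9, 10**n)
    assert 1 - s == Fraction(1, 10**n)

# Same story for 1 + 1/2 + 1/4 + ... converging to 2: the distance halves.
t = sum(Fraction(1, 2**n) for n in range(8))
assert 2 - t == Fraction(1, 2**7)
```

This is exactly the comment's definition of convergence in action: the tail distance to the limit is explicit and shrinks geometrically.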

By now it should be clear that 1+2+3+4+… doesn’t converge to anything under our definition. But our definition isn’t the only one possible. You can make another self-consistent definition of convergence, where 1+2+3+4+… will indeed converge to −1/12. But that definition is complex, esoteric and much less useful than the regular one, which is why that viral video really shouldn’t have used it without remark.

Most paradoxes involving infinity are just pulling a fast one on you by not specifying what they mean by convergence. If you try to use the common sense definition above, or really any self-consistent way to assign values to infinite expressions, the paradoxes usually go away.

Here’s how dividing by zero leads to results like 1=2:

You may have heard that functions must be well-defined, which means x=y ⇒ f(x)=f(y). This property of functions is what allows you to apply any function to both sides of an equation and preserve truth doing it. If the function is one-to-one (ie x=y ⇔ f(x)=f(y)), truth is preserved both ways and you can un-apply a function from both sides of an equation as well. Multiplication by a factor c is one-to-one iff c isn’t 0. Therefore, un-applying multiplication by 0 is not in general truth-preserving.
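A minimal illustration of that failure (my sketch): multiplication by 0 is well-defined but not one-to-one, so it cannot be un-applied.

```python
# f(x) = c * x is one-to-one iff c != 0. With c = 0, distinct inputs
# collide, so equality of outputs proves nothing about the inputs.
def f(x, c):
    return c * x

collision = f(1, 0) == f(2, 0)   # True: both sides are 0
inputs_equal = (1 == 2)          # False: "un-multiplying" by 0 is invalid
```

This is the whole mechanism behind the 1 = 2 paradoxes: one step quietly un-applies multiplication by a factor that turns out to be zero.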

Slightly off topic to the main point of the article, which is how to deal with not understanding something.

For anyone who wants to fully understand this, you need to read “Baby Rudin”: Rudin’s *Principles of Mathematical Analysis*.

Real numbers are defined as equivalence classes of Cauchy sequences of rationals. The sequences 1, 1, 1, 1, 1, … and 0.9, 0.99, 0.999, … are in the same equivalence class and so are the same real number. People often get caught up with the assumption that two different sequences (or two different decimal representations) must be different numbers.

If you study a bit more, though, it stops being necessarily true; https://arxiv.org/abs/1307.7392

Yes; Ignorance is followed by enlightenment which is followed by the fog of war.

The OP states:

This is just wrong. A rational number is a number that can be written as a fraction of two integers. Lots of infinite decimals are rational numbers: 1/3 = 0.3333333..., 1/9 = 0.1111111…, 1/7 = 0.142857142857142857… etc.

Ah, of course, my mistake. I was trying to hand-wave an argument that we should be looking at reals instead of rationals (which isn’t inherently true once you already know that 0.999...=1, but seems like it should be before you’ve determined that). I foolishly didn’t think twice about what I had written to see if it made sense.

I still think it’s true that “0.999...” compels you to look at the definition of real numbers, not rationals. Just need to figure out a plausible sounding justification for that.

I think the point is that you’re writing down “0.999...” and assuming that that must define a number at all. If you’re assuming that every decimal expression gives a number then you must be working with the reals.

I suppose you might be right for some people. For me, the fact that repeating infinite decimal expansions are rational is deeply deeply ingrained. Since your post is essentially how to square your feelings with what turns out to be mathematically true, you have a lot of room for disagreement as there is no contradiction in different people feeling different ways about the same facts.

For me the most fun thing about 0.9999… is that 1/9 = 0.11111…, and therefore 9 × 1/9 = 9 × 0.11111…, and this last expression obviously = 0.99999...

You should also do a search on “right” in your post and edit it; you use “right” one time where you really need “write”. I think it is “right down” instead of “write down” but I’ll let you do the looking.

Fixed the typo. Also changed the argument there entirely: I think that the easy reason to assume we’re talking about real numbers instead of rationals is just that that’s the default when doing math, not because 0.999… looks like a real number due to the decimal representation. Skips the problem entirely.

Part 3, “the argument from contradiction” approach, did historically activate for me. Except I found a way where the operations make sense: I appreciate that it needs to make sense with your current understanding level. But argument from lack of imagination is a pretty lousy one. One could say that “x^2 = −1” is absurd, but considering what the world would look like if it could be made true can be interesting and useful. By similar logic one could argue that negative numbers are “unreal”. I ended up recognising how the standard formulation is transfinite-hostile. Instead of asking whether a result is possible or not, you end up asking whether the rules are inevitable or not.

When I encountered this result in school for the first time, in the context of learning the algorithm for converting a repeating decimal into a fraction, I eventually reasoned “If 1 and 0.999… are different numbers, there ought to be a number between them, but there isn’t. So it must really be true that they’re the same.”

Of all the different explanations and interpretations people have been giving in this thread, this is the most satisfying to my mathematically illiterate brain. It’s troublesome for me to grasp how 0.999… isn’t always just a bit smaller than 1, because my brain wants to think that even an infinitely tiny difference is still a difference. But when you put it like that (there’s nowhere between the two where you can draw a line between them) it seems to click in. 0.999… hugs 1 so tight that you can’t meaningfully separate them.

It’s instructive to set out the proof you give for 0.999... = 1 in number bases other than ten. For example base eleven, in which the maximum-value single digit is conventionally represented as A and amounts to 10 (base ten), while 10 (base eleven) amounts to 11 (base ten). So

Let x = 0.AAA...

10x = A.AAA...

10x − x = A

Ax = A

x = 1

0.AAA… = 1
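(As a side check of the arithmetic, not part of the original argument: in any base b, the partial sums of 0.(b−1)(b−1)… fall short of 1 by exactly 1/b^n after n digits, which exact-fraction arithmetic confirms. A quick sketch of my own:)

```python
from fractions import Fraction

def repdigit_partial_sum(base, n_digits):
    """Partial sum of 0.ddd... where d = base - 1
    (0.999... in base ten, 0.AAA... in base eleven)."""
    digit = base - 1
    return sum(Fraction(digit, base**k) for k in range(1, n_digits + 1))

# In every base the gap to 1 is exactly 1/base**n, shrinking towards 0.
for base in (2, 10, 11):
    assert 1 - repdigit_partial_sum(base, 50) == Fraction(1, base**50)
```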

But 0.A (base eleven) = 10/11 (base ten), which is bigger than 0.9 (base ten) = 9/10 (base ten). So shouldn’t that inequality apply to 0.AAA… (base eleven) and 0.999… (base ten) as well? (A debatable point maybe.) If so, then they can’t both equal 1, unless we say something like 0.999... = 1 and 0.AAA... = 1 are both valid but base-dependent equations, as indeed any such equation would be when using the top-valued single digit of its base. This would mean 0.111... = 1 in binary.

f(x)=2/x

g(x)=1/x

f(x) > g(x) for all x, but lim f(x) = lim g(x) = 0 as x → ∞. Just because f gets there “later” does not mean it gets any less deep.
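This point is easy to check numerically (a throwaway sketch of my own, not from the parent comment): f stays strictly above g everywhere, yet both eventually drop below any epsilon you care to name.

```python
# f(x) = 2/x > g(x) = 1/x for every x > 0, yet both limits at infinity are 0:
# pointwise dominance says nothing about the limits being different.
def f(x): return 2.0 / x
def g(x): return 1.0 / x

assert all(f(x) > g(x) for x in range(1, 1000))

eps = 1e-9
x = 1.0
while f(x) >= eps:  # march right until even the larger function is below eps
    x *= 2.0
assert f(x) < eps and g(x) < eps  # g(x) < f(x) < eps at that point
```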

Repeating decimals are far enough removed from finite decimals that it’s like mixing rationals and integers.

I think I see your first point.

0.A (base 11) = 10/11

0.9 = 9/10

0.A − 0.9 = 0.00909...

0.AA (base 11) = 10/11 + 10/121

0.99 = 9/10 + 9/100

0.AA − 0.99 = 0.001735537190082644628099...

Does this mean that, because the difference or “lateness” gets smaller, tending to zero, each time a single identical digit is added to 0.A and 0.9 respectively, then 0.A… = 0.9...?

(Whereas the difference we get when we do this to say 0.8 and 0.9 gets larger each time so we can’t say 0.8… = 0.9...)

No, I believe you are reaching a different concept. It is true that the difference squashes towards 0, but that would be a different line of thinking. In a context where infinitesimals are allowed (i.e. non-real numbers), we might associate the series with different amounts and indeed find that they differ by a “minuscule amount”. But as we normally operate on reals, we only get a “real precision” result. For example, if you had to say which integers

3/4, 1 and 5/4 name, probably your best bet would be that all of them name the same integer, 1, if you are restricted to integer precision. In the same way, you might have 1 and 1 − epsilon be different numbers when infinitesimal accuracy is allowed, but a real plus anything infinitesimal is going to be the same real regardless of the infinitesimal (1 and 1 − epsilon are the same real at real precision).

What I was actually going for is that, for any r < 1, you can ask how many terms you need to get up to that level, and both series will give a finite answer. I.e. to get to the same “depth” as 0.999999… gets with 6 digits, you might need a bit less with 0.AAAAA…. It’s a “horizontal” difference instead of a “vertical” one. However, there is no number that one of the series reaches but the other does not (and the number that both series fail to reach is 1; it might be helpful to remember that a supremum is the smallest upper bound). If one series reaches a sum in 10 terms and the other reaches the same sum in 10000 terms, it’s equally good; we are only interested in what happens “eventually”, after all terms have been accounted for. The way we have come up with what the repeating-digit sign means refers to limits, and it’s pretty much guaranteed to produce reals.
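That “horizontal” difference can be made concrete. A sketch of my own, using exact fractions (the partial sum after n top digits in base b is 1 − b^(−n)): count how many digits each base needs to reach a given threshold r < 1. The answer is always finite, with base eleven needing slightly fewer digits.

```python
from fractions import Fraction

def digits_needed(base, r):
    """Fewest repeated top digits so that 0.(b-1)(b-1)... in the given base
    reaches at least r; the partial sum after n digits is 1 - base**-n."""
    n = 1
    while 1 - Fraction(1, base**n) < r:
        n += 1
    return n

# The depth reached by a hundred 9s in base ten:
r = 1 - Fraction(1, 10**100)
assert digits_needed(10, r) == 100
assert digits_needed(11, r) == 97  # base eleven gets there a few digits sooner
```

Both loops terminate for every r < 1, which is the point: neither series reaches a depth the other cannot.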

Not debatable, just false. Formally, the fact that x_k < y_k for all k does *not* imply that lim_{k→∞} x_k < lim_{k→∞} y_k.

If I were to poke a hole in the (proposed) argument that 0.[k 9s]{base 10} < 0.[k As]{base 11} (0.9 < 0.A; 0.99 < 0.AA; ...), I’d point out that 0.[2k 9s]{base 10} > 0.[k As]{base 11} (0.99 > 0.A; 0.9999 > 0.AA; ...), and that this gives the opposite result when you take k→∞ (in the standard sense of those terms). I won’t demonstrate it rigorously here, but the faulty link here (under the standard meanings of real numbers and infinities) is that carrying the inequality through the limit just doesn’t create a necessarily-true statement.
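Both inequality families can be checked with exact arithmetic (a sketch of my own; `partial(b, k)` = 1 − b^(−k) is the value of k repeated top digits):

```python
from fractions import Fraction

def partial(base, k):
    """Value of k repeated top digits after the point: partial(10, 2) = 0.99."""
    return 1 - Fraction(1, base**k)

for k in range(1, 30):
    # Digit for digit, base eleven is ahead...
    assert partial(11, k) > partial(10, k)      # 0.A > 0.9, 0.AA > 0.99, ...
    # ...but doubling the count of 9s flips the inequality:
    assert partial(10, 2 * k) > partial(11, k)  # 0.99 > 0.A, 0.9999 > 0.AA, ...
```

Both chains hold for every finite k, yet they would force opposite conclusions “in the limit”, which is exactly why carrying an inequality through the limit needs its own justification.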

0.111...{binary} is 1, basically for the Dedekind cut reason in the OP, which is not base-dependent (or representation-dependent at all) -- you can define and identify real numbers without using Arabic numerals or place value at all, and if you do that, then 0.999...=1 is as clear as not(not(true))=true.

0.9{base10}<0.99{base10} but 0.9...{base10}=0.99...{base10}

0.9{base10}<0.A{base11} but 0.9...{base10}=0.A...{base11}

0.8{base10}<0.9{base10} and 0.8...{base10}<0.9...{base10}

0.9{base10}<0.A{base11} and 0.9...{base10}<0.A...{base11}

I’m not trying to prove “0.999...{base10}=1” is false, nor “0.111...(base2)=1” either—in fact the latter is an even more fascinating result.

Also “not(not(true))=true” is good enough for me as well.

You are assuming that there is a link between the per-term values and the whole-series value. The connection just isn’t there, and if you think it is, it would be important to show why.

I could have two small finite series, A = 10 and B = 2 + 3 + 5, and compare that 2 < 10, 3 < 10 and 5 < 10, and then be surprised that A = B. When the number of terms is not finite, it’s harder to verify that you haven’t made this kind of error.

So would you say that 0.999...(base10) = 0.AAA...(base11) = 0.111...(base2)= 1?

Yes, it happens to be that way.

Still not entirely convinced. If 0.A > 0.9 then surely 0.A… > 0.9...?

Or does the fact this is true only when we halt at an equal number of digits after the point make a difference? 0.A = 10/11 and 0.9 = 9/10, so 0.A > 0.9, but 0.A < 0.99.

I think you are still treating infinite decimals with some approximation, when the question you are pursuing relies on the finer details.

**Appeal to graphical asymptotes**

Make a plot of the value of the series after x terms, so that one plot F is 0.9, 0.99, 0.999, … and another G is 0.A, 0.AA, 0.AAA, …. Now it is true that each point of G has a point of F below it, and that F never crosses “over” above G. Now consider the asymptotes of F and G (i.e. draw the line that F and G approach). My claim is that the asymptotes of F and G are the same line. It is not the case that G has a higher line than F. They are of exactly the same height, which happens to be 1. The meaning of infinite decimals is more closely connected to the asymptote than to what happens “to the right” in the graph. There is a possibly surprising “taking of the limit” which might not be totally natural.

**Construction of wedges that don’t break the limit**

It might be illuminating to take the reverse approach. Have an asymptote of 1 and ask which series have it as their asymptote. Note that among the candidates some might be strictly greater than others. If per-term domination forced a different limit, such “wedgings” would be pushed to different limits. But given some series that has 1 as its limit, it’s always possible to have another series that fits between 1 and the original series, and the new series’ limit will also be 1. Thus there are series that are per-term dominating but end up summing to the same thing.

**Rate mismatch between accuracy and digits**

If you have 0.9 and 0.99, the latter is more precise. This is also true of 0.A and 0.AA. However, between 0.9 and 0.A, 0.A is a bit more precise. In general, if the bases are not nice multiples of each other, the levels of accuracy won’t be the same. However, there are critical numbers of digits where the accuracy ends up being essentially the same. If you write out the sums as fractions and want a common denominator, one lazy way to guarantee one is to multiply all the different denominators together. This means that a decimal fraction multiplied by 11 and an undecimal fraction multiplied by 10 will have the same denominator. At that scale, 0.99999999999 and 0.AAAAAAAAAA are of comparable precision, but one has 11 digits and the other has 10. If we go by a pure digit-to-digit comparison, we end up comparing two 11-digit numbers when comparable values are expressed by a 10-digit and an 11-digit number. At this level of accuracy it’s fair to give decimals 11 digits and undecimals 10 digits. If we go blindly by digit counts, we are unfair about the number of digits available for the level of accuracy demanded. Sure, for most levels of accuracy there is no nice natural number of digits that would be fair to both at the same time.
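For one digit of each base, the lazy common denominator is 10 × 11 = 110, which is enough to line the two fractions up exactly (a trivial check of my own, included only to make the bookkeeping visible):

```python
from fractions import Fraction

# One digit in each base, put over the shared denominator 10 * 11 = 110:
assert Fraction(9, 10) == Fraction(99, 110)     # 0.9 (base ten)
assert Fraction(10, 11) == Fraction(100, 110)   # 0.A (base eleven)

# Their gaps below 1 at this shared precision: 11/110 vs the smaller 10/110.
assert 1 - Fraction(9, 10) == Fraction(11, 110)
assert 1 - Fraction(10, 11) == Fraction(10, 110)
```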

**Graphical rate mismatch**

One can highlight the rate mismatch in graphical terms too. Take a nice x = y graph and put both a decimal scale and an undecimal scale on the x axis. Mark every point of x = y that corresponds to a scale mark on either scale. Comparing digit to digit corresponds to first going to the 9/10 marker on the decimal scale and the 10/11 mark on the undecimal scale, then to the 9th subdivision on the decimal scale and the 10th subdivision on the undecimal scale, and so on. If we step like this, it’s true that on each step the undecimal “resting place” is to the right of and above the decimal resting place. But it should also be clear that each time we take a step we keep within the original compartment, ending up in the high part of it, and that the right side of the compartment will always be limited by (x=1, y=1). Every 11 decimal steps we land in a location the undecimal series has landed in, and every 10 undecimal steps we land in a location the decimal steps will visit. This gives a nice interpretation for having a finite number of digits. What do you do when you want to take infinite steps? One way is to say you can’t take infinite steps, but you can talk about the limit of the finite steps. For every real number less than 1, both steppings will at some finite step cross over that number. 1 is the first real number for which this doesn’t happen. Thus 1 is the “destination of infinite steps”.

0.999… doesn’t map to 1 minus epsilon, where epsilon = 1 divided by {0,1,2,3,4,5,...|}

However, those that disagree are evidently thinking of something akin to 1 − epsilon, which is not equal to 1. However, you can’t refer to it with a decimal system (at least the standard one); arguments that refer to (specific) decimal places are therefore inapplicable. The reals are Archimedean but the surreals are not: among the surreals there are elements a, b with a > b such that there is no N with a < b·N (arbitrarily large finite multiples of b are not guaranteed to surpass a).

For every epsilon greater than zero, the difference 1-0.99999999… is even smaller. Smaller than any positive number.

Then, if it’s not negative, then it’s zero. This difference is zero.

This is the most correct way to put it, I believe.

Yes. Still this is the concept of limits and it is a significant step for most people. I think the most common first reaction is “Huh?”.

But people will make the effort if you explain this is a solution to the mysteriousness of “infinitesimals”.

This argument more or less assumes its conclusion; after all, if it weren’t the case that 1 − 0.999… were zero, then it would be some positive number x, so you could pick epsilon = x.

And in certain constructions, epsilon is a distinct number—so it’s actually fallacious without going back to the definitions!

No, it does not!

Whatever epsilon you might choose, you can easily take enough 9s (nines) after the 0 to have the difference smaller than this epsilon of yours.

Again, that’s assuming the conclusion; what if 1 − 0.999… weren’t zero, and you picked that as epsilon? You’re skipping steps. It’s worth writing down exactly what you think is happening more carefully.

(To be clear, I’m not claiming that you’ve asserted any false statements, but I think there’s an important sense in which you aren’t taking seriously the hypothetical world in which 1 − 0.999… isn’t zero, and what that world might look like. There’s something to learn from doing this, I think.)

If I may, let me agree with you in dialogue form:

Alice: 1 = 0.999...

Bob: No, they’re different.

Alice: Okay, if they’re different then why do you get zero if you subtract one from the other?

Bob: You don’t, you get 0.000...0001.

Alice: How many zeros are there?

Bob: An infinite number of them. Then after the last zero, there’s a one.

Alice is right (as far as real numbers go), but at this point in the discussion she has not yet proved her case; she needs to argue to Bob that he shouldn’t use the concept “the last thing in an infinite sequence” (or that if he does use it he needs to define it more rigorously).

There is no “after the last” zero.

In this (math) world it is zero only because for every nonzero positive epsilon, you can pick a FINITE number of 9s, such that 1-0.999999...99999 (a FINITE number of 9s) is already SMALLER than that epsilon.

For EVERY real number greater than zero, you have a FINITE number of 9s, such that this difference is smaller.

Therefore the difference cannot be a number greater than 0.
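The FINITE-number-of-9s step can be spelled out mechanically (a sketch of my own; `nines_needed` finds the smallest n with 10^(−n) < epsilon):

```python
from fractions import Fraction

def nines_needed(epsilon):
    """Smallest finite n such that 1 - 0.99...9 (n nines) = 10**-n
    is strictly smaller than the given positive epsilon."""
    n = 1
    while Fraction(1, 10**n) >= epsilon:
        n += 1
    return n

assert nines_needed(Fraction(1, 2)) == 1       # 1 - 0.9 = 0.1 < 1/2
assert nines_needed(Fraction(1, 10**6)) == 7   # seven 9s beat epsilon = 10**-6
```

Since this search terminates for every positive epsilon, the difference 1 − 0.999… is smaller than every positive number.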

[Note, after rereading your post my comment is tangential]

I have always been empathetic to the argument, from people first presented with this, that they are different. Understanding how math deals with infinity basically requires having the mathematical structure supporting it already known. I’m not particularly gifted at math, but the first 4 weeks of real analysis really changed the way I think, because it was basically a condensed rapid upload of centuries of collaborative work from some of the smartest men to ever exist right into my brain.

Otherwise, at least in my experience, we operate in a discrete world that moves through time. So, what I predict is happening, is that when you ask that question to people their best approximation is a discrete world ticking through time.

Is 0.999...=1? Well, each tick of time another set of [0.0...9]’s is added, when the question is finally answered the time stops. You’re then left with some finite number [0.0..01]. In their mind it’s a discrete algo running through time.

The reality that it’s a limit that operates absent of time, instantaneously, is hard to grasp, because it took brilliant men centuries to figure out this profoundly unintuitive result. We understand it because we learned it.

It’s a simple argument that tries to be rigorous. If I don’t agree with it, I must disagree with some part of it. When I go step by step over it, there is a suspicious step.

The proof assumes/states that 9.9999… − 0.9999… = 9. I am unconfident enough with operations on decimals with infinitely many places that I am not sure I agree; for all I know, 9.9999… − 0.9999… could also be 8.00...009. In particular, I don’t know whether you get the same object if you multiply 0.9999… by ten as if you simply set the leading zero equal to 9.

To agree with how the proof handles these operations, you need to be proficient in what the reals are and in their technicalities, and to understand that reals are what is meant.

Having a standard that things are not real if they can’t be realised in the reals would make i and the complex numbers “unintelligible”.

You left out a possibility; true dependent on something outside your realm of knowledge. In this case, it’s true for real numbers, but false for surreal numbers.

No, because it’s not a possibility that when you thought you were doing math in the reals this whole time, you were actually doing math in the surreals. Using a system other than the normal one would need to be stated explicitly.