The subtle trap: those decimal approximations, 0.23932 and 0.23607, are just that: approximations. We computed them to five decimal places, but what if they agree at the sixth?
They disagree at the third place; why exactly would you care about the sixth?
(Also this feels like an LLM-written post. Sorry if not.)
And if you want a “certified lower bound on their difference,” you can use the Lagrange error bound for the Taylor series. The naive reasoning is that the error of a truncated Taylor series is about the size of the first term you leave out; the Lagrange error bound gives you something like that rigorously. With well-behaved functions like sqrt and sin there’s no obstacle to proving that we’ve gotten the third digit correct (that is, that our error is < 0.00001 and so can’t change the third digit). So if the two values differ in the third digit of our bounded numerical computation, they’re different numbers.
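A minimal sketch of that reasoning for sin in Python (the evaluation point 0.5 and the term count are hypothetical choices of mine; floating-point rounding error is ignored, which is safe here since it is many orders of magnitude below the 0.00001 target):

```python
from math import factorial, sin

def sin_enclosure(x, n_terms):
    """Certified enclosure of sin(x) from its Taylor series at 0.

    Every derivative of sin is bounded by 1 in absolute value, so the
    Lagrange error bound for the truncated series is
    |R| <= |x|**(2n+1) / (2n+1)! -- the size of the first omitted
    term, made rigorous.
    """
    s = sum((-1) ** k * x ** (2 * k + 1) / factorial(2 * k + 1)
            for k in range(n_terms))
    r = abs(x) ** (2 * n_terms + 1) / factorial(2 * n_terms + 1)
    return s - r, s + r

lo, hi = sin_enclosure(0.5, 5)
assert lo <= sin(0.5) <= hi
assert hi - lo < 1e-5  # radius well under 0.00001, so the third digit is certified
```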
I haven’t actually done that carefully in this case, but the bound depends on the maximum of a higher derivative of the function. For sin, that should have absolute value at most 1. For sqrt… well, we don’t want to expand around x=0, but if we expand around, say, x=4, these derivatives are, I think, not just bounded, but go to zero.
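Sketching that for sqrt expanded around x=4 (x=5 is a hypothetical evaluation point of mine; the key fact is that |f^(k)(t)| is decreasing in t, so on the interval between x and 4 it is maximized at the left endpoint, which gives a valid Lagrange constant):

```python
from math import sqrt

def sqrt_enclosure(x, a=4.0, n_terms=10):
    """Certified enclosure of sqrt(x) from its Taylor series at a=4.

    For f(t) = sqrt(t), the k-th derivative is
    (1/2)(1/2 - 1)...(1/2 - k + 1) * t**(0.5 - k), whose magnitude
    decreases in t, so its max on [min(x, a), max(x, a)] sits at the
    left endpoint.  Requires |x - a| < a for convergence.
    """
    h = x - a
    s = 0.0
    deriv_coeff = 1.0  # running product (1/2)(1/2 - 1)...(1/2 - k + 1)
    fact = 1.0         # running k!
    for k in range(n_terms):
        s += deriv_coeff * a ** (0.5 - k) * h ** k / fact
        deriv_coeff *= 0.5 - k
        fact *= k + 1
    # Lagrange bound: n_terms-th derivative maximized at the left endpoint.
    t0 = min(x, a)
    r = abs(deriv_coeff) * t0 ** (0.5 - n_terms) * abs(h) ** n_terms / fact
    return s - r, s + r

lo, hi = sqrt_enclosure(5.0)
assert lo <= sqrt(5.0) <= hi
assert hi - lo < 1e-5  # again comfortably below the 0.00001 target
```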
Note: it’s probably best in general to avoid phrasing things in terms of digits, due to the possibility of a cascade of 9s. Here, since we’re not getting 9s, I guess it’s not an issue. But yes, as you say, you can bound the errors and see that the ranges don’t overlap!
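The digit-free version of the check is then just interval disjointness, using the two approximations from the post and the error radius of 0.00001 claimed above:

```python
def certified_distinct(a, b, err_a, err_b):
    """True if [a - err_a, a + err_a] and [b - err_b, b + err_b] are
    disjoint, i.e. the two true values provably differ -- no talk of
    digits, so no cascade-of-9s worries."""
    return a + err_a < b - err_b or b + err_b < a - err_a

# The post's two approximations, each certified to error < 0.00001:
assert certified_distinct(0.23932, 0.23607, 1e-5, 1e-5)
```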
Okay, I’ll even take on the role of defender of my little paradox. A “cascade of 9s” isn’t a real thing: a number doesn’t suddenly become larger just because it reaches 100 instead of 99. In hexadecimal you get cascades of F’s, and in binary you get cascades of 1’s.
But at least that’s just a cognitive bias, not a pathological case of “Calculus I.”
What’s more concerning is how quickly people downvote things they don’t understand.
Dismissing something because it doesn’t immediately make sense isn’t critical thinking; it’s avoidance disguised as skepticism. Sad af.
Dear Jan Betley, your proposed methodology appears to rely on an empirical approach rather than a formal, constructive argument. (It’s alright, it was completely LLM written. I just said, come up with something Jan wouldn’t understand.)
Come on, this is like… :D Please.