The subtle trap: those decimal approximations (0.23932 and 0.23607) are just that: approximations. We computed them to five decimal places, but what if they agree at the sixth?
They disagree at the third place; why exactly would you care about the sixth?
(Also, this feels like an LLM-written post. Sorry if not.)
And if you want a “certified lower bound on their difference,” you can use the Lagrange error bound for the Taylor series. The naive reasoning is that the error of the Taylor series is about the size of the first term you leave out; the Lagrange error bound gives you something like that rigorously. With well-behaved functions like sqrt and sin, there’s no obstacle to proving that we’ve gotten the third digit correct (that is, that our error is &lt;0.00001 and so can’t change the third digit). So if they differ in the third digit of our bounded numerical computation, they’re different numbers.
I haven’t actually done that carefully in this case, but the bound depends on the maximum of a higher derivative of the function. For sin, that has absolute value at most 1. For sqrt… well, we don’t want to expand around x=0 (the derivatives blow up there), but if we expand around, say, x=4, these derivatives are, I think, not just bounded but go to zero.
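To make the idea concrete: here is a small sketch of the certification in exact rational arithmetic. The post doesn’t say which expressions produced 0.23932 and 0.23607, so as stand-ins I use sin(1/4) and sqrt(5) − 2 (the latter matches 0.23607, but that identification is my guess). Each value is enclosed in an interval whose width is the Lagrange remainder: for sin, every derivative is bounded by 1; for sqrt expanded at x=4, the (n+1)-th derivative is decreasing on [4, 5], so the bound is attained at the expansion point and works out to exactly the absolute value of the first omitted term, as described above.

```python
from fractions import Fraction
from math import factorial

def sin_interval(x, n):
    """Enclosure of sin(x) from its degree-(2n-1) Taylor polynomial at 0.
    Every derivative of sin is bounded by 1, so the Lagrange remainder is
    at most |x|**(2n+1) / (2n+1)!, the size of the first omitted term."""
    x = Fraction(x)
    s = sum(Fraction((-1) ** k) * x ** (2 * k + 1) / factorial(2 * k + 1)
            for k in range(n))
    err = abs(x) ** (2 * n + 1) / Fraction(factorial(2 * n + 1))
    return s - err, s + err

def sqrt_interval(x, n):
    """Enclosure of sqrt(x) from its degree-n Taylor polynomial at 4,
    for 4 <= x < 8.  |f^(n+1)| decreases on [4, x], so the Lagrange
    bound is attained at xi = 4 and equals the first omitted term of
    the binomial series 2 * sum C(1/2, k) * (h/4)**k."""
    h = Fraction(x) - 4
    s, coef = Fraction(0), Fraction(1)   # coef = C(1/2, k), starts at k = 0
    for k in range(n + 1):
        s += 2 * coef * (h / 4) ** k     # sqrt(4) = 2 exactly
        coef *= (Fraction(1, 2) - k) / (k + 1)
    err = 2 * abs(coef) * abs(h / 4) ** (n + 1)
    return s - err, s + err

# Stand-ins for the two quantities (hypothetical): sin(1/4) and sqrt(5) - 2.
slo, shi = sin_interval(Fraction(1, 4), 4)
qlo, qhi = sqrt_interval(5, 10)
qlo, qhi = qlo - 2, qhi - 2
assert qhi < slo   # the enclosures are disjoint: certifiably different numbers
print(float(slo), float(shi))   # tight enclosure of sin(1/4)
print(float(qlo), float(qhi))   # tight enclosure of sqrt(5) - 2
```

Since the intervals are exact rationals, the final comparison is a rigorous proof (modulo the correctness of the remainder bounds), not a floating-point heuristic.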
Note: it’s probably best in general to avoid phrasing things in terms of digits, because of the possibility of a cascade of 9s. Here, since we’re not getting 9s, it’s not an issue. But yes, as you say, you can bound the errors and see that the ranges don’t overlap!
Come on, this is like… :D Please.