A lot of math and physics definitions feel like they have weird dross. Examples:
The Gamma function has this −1 I don’t understand
The Riemann Zeta function ζ(s) negates s for reasons beyond me
cosine seems more primitive than sine
The gravitational constant looks like off by a factor of 4π
π seems like half the size it should be
After years of confusion, I was finally vindicated about π. That π is not 6.2831853071… is mostly a historical accident. Am I “right” about these other definitions being “wrong”? What are other mathematical entities are defined in ugly ways for historical reasons?
The current definition of the gravitational constant maximizes the simplicity of Newton’s law F = Gmm’/r^2. Adding a 4π to its definition would maximize the simplicity of the Poisson equation that Metus wrote. Adding instead 8π, on the other hand, would maximize the simplicity of Einstein’s field equations. No matter what you do, some equation will look a bit more complicated.
Absolutely, and Planck’s constant maximizes the simplicity of finding the energy of a photon from its wavelength, and π maximally simplifies finding the circumference of a circle from its diameter. But in all those cases, it feels to me like we’re simplifying the wrong equation.
ETA: To be explicit, it feels like there should be a 4π in Newton’s law. The formula is calculating the gravitational flux on the surface of a 3-dimensional sphere, and 3-dimensional spheres have a surface area 4π times their radius squared.
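For what it’s worth, the flux reading of the law is Gauss’s law for gravity, where the 4π is explicit (a standard identity, not something the thread wrote out):

```latex
\oint_S \mathbf{g}\cdot d\mathbf{A} = -4\pi G\, M_{\text{enc}}
% For a sphere of radius r around a point mass M, symmetry gives
% |g| \cdot 4\pi r^2 = 4\pi G M, i.e. |g| = GM/r^2: the 4\pi from the
% sphere's surface area cancels the 4\pi in the field equation, leaving
% Newton's clean inverse-square law.
```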
More importantly, π is the area of the unit circle. If you’re talking about angles you want τ (tau), if you’re talking about area you want π. And you always want pie, ha ha.
Saying that this is what the formula intrinsically does, amounts to saying that field lines are more fundamental/”real” than action-at-distance forces on point particles. But in the context of purely Newtonian gravity, both formulations are in fact completely equivalent. (And if you appeal to relativity to justify considering fields more fundamental, then why not better go for simplifying Einstein’s equation and including 8π in G?)
Yep :-). I don’t know enough of the physics to back that up, but that’s what my gut tells me. A more educated version of me might be able to say something like “the vocabulary of forces is ‘shallow’; the vocabulary of fields is deeper; the vocabulary of group symmetries is deeper still.” I certainly do not have the depth of understanding to make that sort of statement with any authority. If you know enough physics to correct me or clarify, please please do.
If somebody who groks relativity told me that this is the right thing to do, I would believe them (ETA: mentioned on Wikipedia). I’d be curious where the factor of 2 comes from in the Newtonian approximation.
I can take a stab at explaining this. Both the Poisson equation and the Einstein equation have the general form
(2nd-order differential operator acting on some quantity F) = constant × (matter source)
In the Newtonian case, F is the gravitational potential. In the Einstein case, it is the spacetime metric. This is a quantity with a simple, natural, purely “mathematical” definition that you cannot play with and change by redefining constants; it measures the distance between events on a four-dimensional curved spacetime. “Matter source” in the Poisson equation stands for mass density, and in the Einstein equation it stands for a more complicated entity that reduces to exactly mass density in the limit where Newtonian physics holds. So the ratio of the constants in each equation is determined by how “spacetime metric” and “gravitational potential” are related in the Newtonian limit of GR.
In Newtonian physics, the gravitational potential is the quantity whose first derivatives give the acceleration of a test particle:
gradient of potential = acceleration of particles
This is considered a physics law, combining Newton’s law of gravity and Newton’s second law of motion. In GR, the spacetime metric also has the (purely mathematical) property that (in the limit where velocities are much smaller than the speed of light and departures from flat space are small) its gradient is proportional, with a factor of 2, to the acceleration of geodesic (minimum-length) trajectories in spacetime:
gradient of metric = 2 acceleration of geodesics
So if we make the physical assumption that test particles in a gravitational field follow geodesics, then we can recover Newtonian gravity from GR. (The whole reason why this is possible is the equivalence principle, the observation that all forms of matter respond to gravity in the same way.) Since small perturbations to a flat metric have to be identified with twice the Newtonian potential, this is where the extra 2 in the Einstein equation comes from.
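In symbols, the identification described above is the standard weak-field statement (textbook GR, not spelled out in the comment; sign conventions vary):

```latex
g_{00} \approx -\left(1 + \frac{2\Phi}{c^2}\right)
% The metric perturbation is h_{00} = -2\Phi/c^2: the metric carries
% twice the Newtonian potential. Matching the Poisson equation
% \nabla^2 \Phi = 4\pi G \rho against Einstein's equation is what
% produces the 8\pi G rather than 4\pi G.
```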
Wow, you have a good point. I always use the concept of surface area (and considering spheres of equal total force) to remember why the r on the bottom is squared. Putting the surface area into the formula is like replacing a factor that raises questions with the answer to those questions.
I’d like to put in a word for sine:
These are highly useful properties in some contexts. That said, cos(0) being the unit prior to any rotation is also nice. (But the definition of a rotation in Cartesian coordinates contains exactly as many sines as cosines; and that generalizes to 3 dimensions where complex numbers do not.)
What is your complaint about Zeta? That it is the sum of n^-s, rather than the sum of n^s? It’s the one that converges. Or are you bothered that zeta(-3) is rational, while zeta(3) is irrational?
Maybe useful for some purposes. Maybe that would be a good function to have when defining the Beta distribution, though there are other reasons for the normalization there.
But in the context of the Riemann Zeta function (which is the context you have suggested), that is not at all the purpose of the Gamma function. Its role is as the Mellin transform of the exponential function. The Zeta function itself is a Mellin transform, and the two interact well because of their common origin. Of course, that pushes back the question to why the Mellin transform has a −1. What it really has is a dx/x. This measure is invariant under scaling, just as dx is invariant under translation. Indeed, the measures correspond under the exponential change of variables.
(In fact, that is closely related to a justification for the normalization of the Beta distribution. B(0,0) is the measure invariant under logistic transformations; B(p,q) is the posterior after seeing p,q observations.)
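Spelled out (standard definitions, not in the comment itself), the dx/x point reads:

```latex
\mathcal{M}\{f\}(s) = \int_0^\infty f(x)\, x^s \,\frac{dx}{x}
  = \int_0^\infty f(x)\, x^{s-1}\, dx
% With f(x) = e^{-x} this is \Gamma(s): the "-1" is just the dx/x.
% The common Mellin origin of Gamma and zeta shows up in the classical
% identity \Gamma(s)\,\zeta(s) = \int_0^\infty \frac{x^{s-1}}{e^{x}-1}\,dx
% (valid for \mathrm{Re}\, s > 1).
```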
Great answer, thanks.
Yes, my shallow complaint about the zeta function, uninformed by higher maths, is that it sums n^-s instead of the simpler n^s.
As I said, the Riemann zeta function has its definition because it makes sense and the other doesn’t. Once you have a solid definition of ζ(-1), you could declare that 1+2+3+...=ζ(-1) and then you might be tempted to reverse the sign. But the zeta function was around for a century before Riemann encouraged people to emphasize the values that don’t make immediate sense.
You can do an awful lot just having it defined for real s>1. Euler used it to prove the infinitude of primes: ζ(1) is the harmonic series, thus infinite (or more precisely, an infinite limit as s approaches 1), but prime factorization expresses it as a product over primes, so there must be infinitely many primes to make it blow up. Moreover, this gives a better estimate of the density of the primes than Euclid’s proof. Then Dirichlet used it and related functions to prove that there are infinitely many primes satisfying reasonable congruences. (Exercise: use Euler’s technique to prove that there are infinitely many primes congruent to 1 mod 4 and infinitely many congruent to 3 mod 4.)
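Euler’s product formula is easy to sanity-check numerically. A quick sketch (my own illustration, not from the thread), at s = 2 where ζ(2) = π²/6:

```python
import math

s = 2.0
# A few hand-listed primes; the product converges to zeta(s) as more are added.
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]

# Partial sum of the Dirichlet series: sum over n of n^(-s).
zeta_sum = sum(n ** -s for n in range(1, 100_001))

# Partial Euler product: product over primes p of 1 / (1 - p^(-s)).
euler_product = 1.0
for p in primes:
    euler_product *= 1.0 / (1.0 - p ** -s)

print(zeta_sum, euler_product, math.pi ** 2 / 6)
```

Both the partial sum and the partial product creep up on π²/6 ≈ 1.6449; if the product over primes stayed bounded with finitely many primes, ζ(s) could not blow up as s → 1, which is Euler’s point.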
I have been pissed off for years at the existence of h-bar and h as separate constants, where almost everywhere h-bar should be the basic constant. IIRC, this is just because the first time either was derived, it happened to be h, so that got called the quantum mechanical constant.
Yeah, this is completely historical. Edwards in his book on the Riemann zeta function tries to go back to using Gamma normalized in the obvious way so it agrees with factorial but that’s never caught on.
In the case of the Riemann zeta function the key issue is that it is seen as more natural to have the half-plane of convergence at positive values of s. Moreover, written that way, the values at the positive integers are given by natural and easy series.
Can you expand on this one?
It’s a minor quibble. I think of cosine as the real part of e^ix, which is a very simple concept in my head. sine is the imaginary part of e^ix divided by i, which is slightly more complicated. If you had to relegate one to co- status, I’d choose sine.
Describing sine and cosine this way, instead of in terms of triangles, suddenly makes their behavior feel much more intuitive to me; on par with the way complex numbers in general suddenly made sense when someone here described multiplication by i as a rotation.
Thanks.
More elementarily: cos x and sin x are the x and y coordinates of the point on the unit circle at an angle x anticlockwise from the positive x-axis. (I think this is the correct version of the “triangles” definitions.)
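The two descriptions, the e^ix one and the unit-circle one, agree, and that’s easy to check numerically; a quick sketch in Python (my own illustration, not from the thread):

```python
import cmath
import math

x = 0.7  # an arbitrary angle in radians
z = cmath.exp(1j * x)  # the point on the unit circle at angle x

# Real part is cos, imaginary part is sin: the x and y coordinates.
assert math.isclose(z.real, math.cos(x))
assert math.isclose(z.imag, math.sin(x))
assert math.isclose(abs(z), 1.0)  # it really does lie on the unit circle
```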
If you don’t rotate, the cosine is still there; only the sine is zero. So, in some sense, cosine is more fundamental; it was there before the rotation.
In my opinion the gamma function is by far the stupidest. IME, the off-by-one literally never makes equations clearer; it only obfuscates the relationship between continuous and discrete things (etc.) by adding in an annoying extra step that trips up your intuition. Seems like simple coordination failure.
I used to think that the way mathematicians did things was forced to be the best way we could do because of the requirements to do things properly in order to advance in maths. But then the Tau Manifesto showed me I was wrong.
I think you’re right about cosine. I think sine seemed simpler when it was named back in classical times, but when complex numbers came along and their relationship to the trigonometric functions was discovered, cosine turned out to be simpler.
Here’s one I come across as a programmer: which number is better for starting indexing and counting things with, zero or one? Zero is so much better for calculating with relative indexes; you get fewer off-by-one errors. In maths, the default convention is to number things starting at one. But when working with series (arithmetic series, discrete Fourier transforms, Maclaurin series, e.g. the series that sums to e) the convention is to start at zero.
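The relative-index point can be made concrete (my own sketch, not from the comment): with 0-based indexing, the arithmetic for chunking and striding needs no correction terms.

```python
# With 0-based indexing, element i of chunk c (chunk size k) lives at
# index c * k + i, and the chunk containing index j is simply j // k,
# with no +1/-1 corrections anywhere.
data = list(range(10))  # indices 0..9
k = 5  # chunk size

def chunk_of(j: int) -> int:
    return j // k

def index_in_chunk(c: int, i: int) -> int:
    return c * k + i

assert chunk_of(7) == 1            # index 7 falls in the second chunk
assert index_in_chunk(1, 2) == 7   # element 2 of chunk 1 is index 7
assert data[index_in_chunk(1, 2)] == 7

# The 1-based versions of these formulas each pick up a correction term,
# e.g. index = (c - 1) * k + i with 1 <= i <= k.
```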
Can you elaborate on the Gamma function thing?
(I am currently conceptually wrestling with the Gamma function myself, and your question either has an obvious superficial answer, or I’ve misunderstood it. Probably the latter.)
A function that fills in the gaps between factorials seems useful. There is a simple formula which does just that.
That’s almost what the Gamma function does, but not quite. The Gamma function calculates this:

Γ(n) = (n − 1)!

Here’s its common integral form:

Γ(z) = ∫₀^∞ t^(z−1) e^(−t) dt
It’s like you’re asking for the factorial of n, and the Gamma function says “Hey, I’ll do you a favor and calculate the factorial of n − 1”, and you’re like “No, that’s okay Gamma function, I really just want the factorial of n”, and the Gamma function is like “No, I want to do this for you”, and then it takes your number, subtracts one, computes the factorial, and hands you back the factorial of n − 1, all beaming and proud of itself for putting in the extra effort, and you have to smile and say thank you.
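The off-by-one is easy to see numerically; Python’s standard library implements Γ directly (a small sketch of my own):

```python
import math

# math.gamma(n) returns (n - 1)!, so getting n! means shifting by one:
for n in range(1, 6):
    assert math.isclose(math.gamma(n + 1), math.factorial(n))

# Asking Gamma for "the factorial of 5" the naive way hands back 4!:
print(math.gamma(5), math.factorial(4))  # Gamma(5) is 4! = 24, not 5!
```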
OK, yeah, it was the latter. I see where you’re coming from now.
That one you found out already; it would make it much more consistent with how similar constants are used.
Not sure what you mean. Do you mean when comparing the equation for gravitational force to the electric force? Or do you mean when looking at the ‘intuitive’ way of writing the differential equation

∇g = ρ?
In either case it seems that the choice of 4π is arbitrary on one equation or the other. For example choosing Gaussian units introduces a 4π in the electrical equation and makes it look more like the gravitational equation.
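To make the unit-system point concrete (standard conventions, my addition, not from the comment): in SI units Coulomb’s law carries the 4π that Newton’s law lacks, while in Gaussian units it doesn’t:

```latex
F_{\text{SI}} = \frac{1}{4\pi\varepsilon_0}\,\frac{q_1 q_2}{r^2},
\qquad
F_{\text{Gaussian}} = \frac{q_1 q_2}{r^2},
\qquad
F_{\text{Newton}} = \frac{G m_1 m_2}{r^2}
% The 4\pi can be pushed into the force law or into the field equation
% (\nabla\cdot\mathbf{E} = \rho/\varepsilon_0 in SI vs.
%  \nabla\cdot\mathbf{E} = 4\pi\rho in Gaussian units),
% but it cannot be eliminated from both at once.
```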
They seem equally primitive by
sin^2(x) + cos^2(x) = 1
and
sin(x) = cos(x − π/2)

It doesn’t, according to Wikipedia.
I haven’t read up on that so I don’t really know. Seems arbitrary to me too.
I agree about gamma, cosine, and pi. I’m not troubled by the minus sign in the zeta function but suspect we should really be working with the related “xi function” whose symmetries are simpler. I’m not a very expert physicist but my guess is that the 4pi there is going to pop up in one place or another and it doesn’t matter very much which you choose.
The only one of these that I actually get cross about is the gamma function. With all the others, there are tradeoffs—e.g., if you work with tau = 2pi instead of with pi, some things become simpler, some become more complicated, and on balance it’s probably a slight improvement. If you work with the factorial function instead of the gamma function, I think pretty much every formula I’ve ever seen that uses it becomes simpler (usually by the omission of an annoying “-1” term).
(But I’m not an analytic number theorist or a complex analyst—though I was kinda-sorta a bit of a complex analyst once—and it’s possible that the cognoscenti know of good reasons why gamma should stay the way it is.)
I would say cos is simpler than sin because its Taylor series has a factor of x knocked off.

In practice they tend to show up together, though. Often you can replace the pair with something like e^(i x), so maybe that should be considered the simplest.