I might sound a bit daft here, but do theoretical physicists actually understand what they’re talking about? My main concern when trying to learn is that it feels like every term is defined with ten other terms, and when you finally get to the root of it the foundations seem pretty shaky. For example, the spin-statistics theorem says particles with half-integer spin are fermions and those with integer spin are bosons, and it is proved starting from a few key postulates:
Positive energy
Unique vacuum state
Lorentz invariance
“Locality/Causality” (all fields commute or anti-commute).
The fractional quantum Hall effect breaks Lorentz invariance (the electrons are effectively confined to 2+1 dimensions rather than the full 3+1), which is why we see anyons, so obviously the spin-statistics theorem doesn’t always hold. However, the fourth postulate shows up everywhere in theoretical physics, and the only justification really given is that “all the particles we see seem to commute or anti-commute”… which is the entire point the spin-statistics theorem is trying to prove.
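To make the dimensional dependence concrete, here is a toy sketch (my own illustration, not from the thread): in 3+1D, exchanging two identical particles twice is a contractible path, so the exchange phase e^{iθ} must square to 1, leaving only θ = 0 (bosons) or θ = π (fermions). In 2+1D the double exchange is a genuine braid that cannot be undone, so any θ survives and anyons become possible.

```python
import cmath

def double_exchange_phase(theta):
    """Phase picked up by the two-particle state after exchanging
    two identical particles twice, given a single-exchange phase
    of exp(i * theta)."""
    return cmath.exp(2j * theta)

# In 3+1D a double exchange is topologically trivial, so this phase
# must equal 1, which forces theta = 0 or theta = pi:
print(double_exchange_phase(0.0))       # boson:   1
print(double_exchange_phase(cmath.pi))  # fermion: 1 (single exchange gives -1)

# In 2+1D the double exchange is a nontrivial braid, so any phase
# is allowed -- an anyon with, say, theta = pi/3 keeps a nontrivial phase:
print(double_exchange_phase(cmath.pi / 3))
```

The point is purely topological: the constraint on θ comes from whether the double-exchange loop can be deformed away, which is exactly what changes between 3+1 and 2+1 dimensions.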
Physics is incredibly precise (only pure mathematics is more rigorous), and everything does indeed follow from a small number of basic principles. These principles are not sacrosanct, though; they are more like useful assumptions, and in quantum gravity we don’t expect locality to hold strictly. The resolution of the black hole information paradox mentioned above consists precisely in showing that the late Hawking radiation (which is very far from the black hole) is the same quantum system as the black hole interior. This is a manifestation of quantum entanglement, realized in gravity through a connected geometry (a wormhole). So the resolution of the information paradox is that the information thrown into a black hole is not lost after evaporation; it escapes into the Hawking radiation through tiny quantum wormholes. These non-local effects only arise at the Planck scale or in complicated systems such as old black holes.
About spin-statistics: the modern understanding is that it simply follows from the representation theory of the Lorentz group. Fermions exist because SO(3,1) is not simply connected, so what actually gets represented is its universal cover (a double cover), and that cover admits fermionic representations with anti-commuting properties. Their properties are fixed by the requirement that they form consistent unitary representations (so that the quantum theory yields positive probabilities). As you mention, anyons exist in lower dimensions, and this is because the lower-dimensional Lorentz groups behave differently: their universal cover can be an infinite cover, so anyonic representations exist as well.
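The double-cover statement can be seen in a one-line computation (again my own illustration): under a rotation by θ about the z-axis, an angular-momentum eigenstate |s, m⟩ picks up the phase e^{-imθ}. For integer m a 2π rotation is the identity, but for half-integer m it flips the sign, and only a 4π rotation returns to the identity; this is exactly the sense in which spinors represent the double cover rather than the rotation group itself.

```python
import cmath

def rotation_phase(theta, m):
    """Phase acquired by an |s, m> eigenstate under a rotation
    by angle theta about the z-axis: exp(-i * m * theta)."""
    return cmath.exp(-1j * m * theta)

two_pi = 2 * cmath.pi

# Integer spin component (boson): a full 2pi rotation acts as the identity.
print(rotation_phase(two_pi, m=1))       # ~ 1

# Half-integer spin component (fermion): a 2pi rotation flips the sign...
print(rotation_phase(two_pi, m=0.5))     # ~ -1

# ...and only a 4pi rotation brings the state back to itself.
print(rotation_phase(2 * two_pi, m=0.5)) # ~ 1
```

The sign flip at 2π is invisible to any classical rotation (phases are unobservable on a single state) but shows up in interference, which is why the distinction lives in the covering group.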
I think you unfortunately can’t really verify the recent epistemic health of theoretical physics, without knowing much theoretical physics, by tracing theorems back to axioms. This is impossible to do even in math (could I, as a relative layperson, formalize and check the recent Langlands Program breakthrough in Lean?), and physics is even less axiom-based than math is.
(“Even less” because even math is not really based on mutually-agreed-upon axioms in any naive sense; cf. Proofs and Refutations, or the endless squabbling over foundations.)
Possibly you can’t externally verify the epistemic health of theoretical physics at all post-70s, given the “out of low-hanging empirical fruit” problem and the level of prerequisites needed to even begin to learn anything beyond QFT.
Speaking as a (former) theoretical physicist: trust us. We know what we’re talking about ;)