How seriously should I take the supposed problems with Cox’s theorem?

I had been under the impression that Cox's theorem says something quite strong about the consistent ways to represent uncertainty, starting from only very plausible assumptions. However, I recently found this 1999 paper, which argues that Cox's result actually requires additional, stronger assumptions than are usually acknowledged. I'm curious what people here think of this. Has there been subsequent work that relaxes the stronger assumptions?
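
For context, here is a minimal sketch of the kind of assumptions usually attributed to Cox. This is my paraphrase, not the exact formulation from Cox or from the 1999 paper; the notation $\mathrm{pl}$, $F$, and $S$ is mine:

```latex
% A plausibility is a real number attached to a proposition A given evidence B:
%   \mathrm{pl}(A \mid B) \in \mathbb{R}
% Conjunction: the plausibility of a conjunction decomposes through some
% function F of the component plausibilities:
\mathrm{pl}(A \wedge B \mid C) \;=\; F\bigl(\mathrm{pl}(A \mid B \wedge C),\; \mathrm{pl}(B \mid C)\bigr)
% Negation: the plausibility of \neg A is a fixed function S of that of A:
\mathrm{pl}(\neg A \mid B) \;=\; S\bigl(\mathrm{pl}(A \mid B)\bigr)
% Cox-style conclusion: any \mathrm{pl} satisfying these (plus regularity
% conditions) is a monotone rescaling of a probability measure.
```

As I understand it, the dispute is over those "regularity conditions": whether the functional-equation argument goes through on, say, finite domains without an extra assumption that the set of plausibility values is suitably rich (e.g. dense), which is a stronger requirement than the informal statements of the theorem suggest. I may be misreading the paper, though, which is why I'm asking.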