Someone on Reddit linked to this preprint paper arguing that the higher moments of the secondary-infection distribution (variance, skewness, kurtosis) can overwhelm the mean (i.e., the R0) in predicting the number of people ultimately infected. With a high-variance, right-skewed, high-kurtosis distribution (loosely, one where relatively few “super-infectors” bring up the average), there are more chances for the outbreak to stochastically die out before those super-infectors get their chance to keep things going. The authors conclude that “higher moments of the distribution of secondary cases can lead a disease with a lower R0 to more easily invade a population and to reach a larger final outbreak size than a disease with a higher R0.” I’m not positioned to evaluate all of their arguments, but their reasoning, based on the models they provided, made sense as far as I could tell, and the assumptions struck this layperson as fairly reasonable.
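You can see the intuition with a toy branching-process simulation (my own sketch, not the paper’s actual model): compare two offspring distributions with the same mean R0, one Poisson (everyone roughly equally infectious) and one a gamma-Poisson mixture (a negative binomial with small dispersion parameter k, i.e., a heavy right tail of super-infectors), and count how often an outbreak seeded by a single case fizzles out.

```python
import numpy as np

rng = np.random.default_rng(0)

def extinction_fraction(offspring, n_outbreaks=2_000, max_gens=50, cap=1_000):
    """Fraction of single-seed outbreaks that die out before growing large."""
    died = 0
    for _ in range(n_outbreaks):
        cases = 1
        for _ in range(max_gens):
            cases = offspring(cases).sum()
            if cases == 0:    # chain of transmission broken
                died += 1
                break
            if cases > cap:   # call the outbreak established
                break
    return died / n_outbreaks

R0 = 2.0
k = 0.2  # dispersion: small k means a few super-infectors do most transmission

def poisson_offspring(n):
    # Homogeneous spread: variance equals the mean (R0).
    return rng.poisson(R0, size=n)

def overdispersed_offspring(n):
    # Gamma-Poisson mixture (negative binomial): same mean R0,
    # but individual infectiousness varies wildly.
    return rng.poisson(rng.gamma(k, R0 / k, size=n))

print("P(die out), Poisson      :", extinction_fraction(poisson_offspring))
print("P(die out), overdispersed:", extinction_fraction(overdispersed_offspring))
```

With R0 = 2, the Poisson outbreaks should die out roughly 20% of the time, while the overdispersed ones die out well over 80% of the time: same mean, very different odds of invading the population.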
The practical consequence of this is that effective contact tracing in the early stages of an outbreak (before too many so-called “community spread” cases accumulate) would provide invaluable epidemiological data: it is essentially the only way to observe the full distribution of secondary cases per infected person, rather than just its mean.
Specifically, this is known as the hubness effect: the distribution of the number of times an item appears among the k nearest neighbors of other items becomes increasingly right-skewed as the number of dimensions increases. Under certain assumptions, this should be related to the phenomenon of the hubs lying closer to the data centroid.
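It’s easy to check empirically (a sketch using i.i.d. Gaussian data as a stand-in; the effect isn’t specific to that choice): sample points in increasing dimensions, count for each point how many times it appears in other points’ k-nearest-neighbor lists (its N_k), and measure both the skewness of those counts and their correlation with distance to the centroid.

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)

def hubness(dim, n=2000, k=10):
    """Skewness of the k-occurrence counts N_k, plus the correlation
    between N_k and each point's distance to the data centroid."""
    X = rng.standard_normal((n, dim))
    # Pairwise squared Euclidean distances via
    # ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    sq = (X ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    np.fill_diagonal(d2, np.inf)  # a point is not its own neighbor
    knn = np.argpartition(d2, k, axis=1)[:, :k]
    counts = np.bincount(knn.ravel(), minlength=n)  # N_k for each point
    dist_to_centroid = np.linalg.norm(X - X.mean(axis=0), axis=1)
    return skew(counts), np.corrcoef(counts, dist_to_centroid)[0, 1]

for dim in (3, 10, 30, 100):
    s, r = hubness(dim)
    print(f"dim={dim:4d}  skew(N_k)={s:5.2f}  corr(N_k, dist to centroid)={r:5.2f}")
```

The skewness should climb with dimension, and the correlation should grow more negative: the points that rack up the most k-occurrences are the ones nearest the centroid, which is the hubness/centroid connection mentioned above.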