Confounded No Longer: Insights from ‘All of Statistics’

Using fancy tools like neural nets, boosting and support vector machines without understanding basic statistics is like doing brain surgery before knowing how to use a bandaid.
Larry Wasserman

Foreword

For some reason, statistics always seemed somewhat disjoint from the rest of math, more akin to a bunch of tools than a rigorous, carefully-constructed framework. I am here to atone for my foolishness.

This academic term started with a jolt—I quickly realized that I was missing quite a few prerequisites for the Bayesian Statistics course in which I had enrolled, and that good ol’ AP Stats wasn’t gonna cut it. I threw myself at All of Statistics, doing a good number of exercises, dissolving confusion wherever I could find it, and making sure I could turn each concept around and make sense of it from multiple perspectives.

I then went even further, challenging myself during the bits of downtime throughout my day to do things like explain variance from first principles, starting from the sample space, walking through random variables and expectation—without help.

All of Statistics

1: Introduction

2: Probability

In which sample spaces are formalized.

3: Random Variables

In which random variables are detailed and a multitude of distributions are introduced.

Conjugate Variables

Consider that a random variable is a function $X : \Omega \to \mathbb{R}$ from the sample space to the real numbers. For random variables $X$ and $Y$, we can then produce conjugate random variables.

4: Expectation

Evidence Preservation

$$\mathbb{E}\big[\mathbb{E}[Y \mid X]\big] = \mathbb{E}[Y]$$

is conservation of expected evidence (thanks to Alex Mennen for making this connection explicit).

Marginal Variance

Why does marginal variance have two terms,

$$\operatorname{Var}(Y) = \mathbb{E}\big[\operatorname{Var}(Y \mid X)\big] + \operatorname{Var}\big(\mathbb{E}[Y \mid X]\big)?$$

Shouldn’t the expected conditional variance be sufficient?

This literally plagued my dreams.

Proof (of the variance; I cannot prove it plagued my dreams):

$$\begin{aligned} \operatorname{Var}(Y) &= \mathbb{E}\big[(Y - \mathbb{E}[Y])^2\big] \\ &= \mathbb{E}\big[(Y - \mathbb{E}[Y \mid X] + \mathbb{E}[Y \mid X] - \mathbb{E}[Y])^2\big] \\ &= \mathbb{E}\big[(Y - \mathbb{E}[Y \mid X])^2\big] + 2\,\mathbb{E}\big[(Y - \mathbb{E}[Y \mid X])(\mathbb{E}[Y \mid X] - \mathbb{E}[Y])\big] + \mathbb{E}\big[(\mathbb{E}[Y \mid X] - \mathbb{E}[Y])^2\big] \\ &= \mathbb{E}\big[\operatorname{Var}(Y \mid X)\big] + \operatorname{Var}\big(\mathbb{E}[Y \mid X]\big). \end{aligned}$$

The middle term is eliminated as the expectations cancel out after repeated applications of conservation of expected evidence. Another way to look at the last two terms is as the sum of the expected conditional variance and the variance of the conditional expectation.
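To make the two terms concrete, here is a quick numerical check (a Python/numpy sketch of my own, not from the book; the particular conditional distributions are arbitrary choices): draw $X$, then $Y \mid X$, and compare $\operatorname{Var}(Y)$ against $\mathbb{E}[\operatorname{Var}(Y \mid X)] + \operatorname{Var}(\mathbb{E}[Y \mid X])$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# X is a fair coin; Y | X=0 ~ N(0, 1) and Y | X=1 ~ N(3, 2^2).
x = rng.integers(0, 2, size=n)
y = np.where(x == 0, rng.normal(0.0, 1.0, n), rng.normal(3.0, 2.0, n))

marginal_var = y.var()  # the left-hand side, Var(Y)

# The right-hand side: E[Var(Y|X)] + Var(E[Y|X]), estimated group-wise.
weights = np.array([(x == 0).mean(), (x == 1).mean()])
cond_vars = np.array([y[x == 0].var(), y[x == 1].var()])
cond_means = np.array([y[x == 0].mean(), y[x == 1].mean()])
overall_mean = weights @ cond_means
rhs = weights @ cond_vars + weights @ (cond_means - overall_mean) ** 2

print(marginal_var, rhs)  # both estimate the exact value 2.5 + 2.25 = 4.75
```

The group-wise decomposition holds exactly for the empirical distribution, so the two printed numbers agree to floating-point precision; both hover near the exact $4.75$.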

Bessel’s Correction

When calculating variance from observations $X_1, \dots, X_n$, you might think to write

$$S^2 = \frac{1}{n} \sum_{i=1}^n (X_i - \bar{X})^2,$$

where $\bar{X}$ is the sample mean. However, this systematically underestimates the population variance: the deviations are measured from $\bar{X}$, which is itself fit to the data and minimizes the sum of squared deviations over all possible centers. The corrected sample variance is thus

$$S^2 = \frac{1}{n-1} \sum_{i=1}^n (X_i - \bar{X})^2.$$

See Wikipedia.
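A simulation makes the bias visible (a Python/numpy sketch of my own; the $N(0, 2^2)$ population and the sample size are arbitrary choices). With samples of size $n = 5$, dividing by $n$ lands around $\frac{n-1}{n}\sigma^2 = 3.2$ on average rather than the true $\sigma^2 = 4$:

```python
import numpy as np

rng = np.random.default_rng(1)
true_var = 4.0           # variance of the N(0, 2^2) population
n, trials = 5, 100_000   # small samples make the bias obvious

samples = rng.normal(0.0, 2.0, size=(trials, n))
biased = samples.var(axis=1, ddof=0).mean()     # divide by n
corrected = samples.var(axis=1, ddof=1).mean()  # divide by n - 1

# Dividing by n is off by a factor of (n-1)/n on average;
# Bessel's correction removes that factor.
print(biased, corrected, true_var)
```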

5: Inequalities

6: Convergence

In which the author provides instrumentally-useful convergence results; namely, the law of large numbers and the central limit theorem.

Equality of Continuous Variables

For continuous random variables $X$ and $Y$, we have $P(X = Y) = 0$, which is surprising. In fact, for any $x \in \mathbb{R}$, $P(X = x) = 0$ as well!

The continuity is the culprit. Since the cumulative distribution functions are continuous, the limit of the probability allotted to any given point is 0. Read more here.

Types of Convergence

Let $X_1, X_2, \dots$ be a sequence of random variables, and let $X$ be another random variable. Let $F_n$ denote the CDF of $X_n$, and let $F$ denote the CDF of $X$.

In Probability

$X_n$ converges to $X$ in probability, written $X_n \xrightarrow{P} X$, if, for every $\epsilon > 0$, $P(|X_n - X| > \epsilon) \to 0$ as $n \to \infty$.

Random variables are functions $X : \Omega \to \mathbb{R}$, assigning a number to each possible outcome in the sample space $\Omega$. Considering this fact, two random variables converge in probability when their assigned values are “far apart” (greater than $\epsilon$) with probability 0 in the limit.

See here.

In Distribution

$X_n$ converges to $X$ in distribution, written $X_n \rightsquigarrow X$, if $\lim_{n \to \infty} F_n(t) = F(t)$ at all $t$ for which $F$ is continuous.

Fairly straightforward.


Note: the continuity requirement is important. Imagine we distribute points uniformly on $(0, \frac{1}{n})$; we see that $X_n \rightsquigarrow X$, where $X$ is a point mass at 0. However, $F_n(0)$ is 0 for every $n$, but $F(0) = 1$. Thus CDF convergence does not occur at $t = 0$.

In Quadratic Mean

$X_n$ converges to $X$ in quadratic mean, written $X_n \xrightarrow{\text{qm}} X$, if $\mathbb{E}\big[(X_n - X)^2\big] \to 0$ as $n \to \infty$.

The expected squared distance between $X_n$ and $X$ approaches 0; in contrast to convergence in probability, dealing with expectation means that values of $X_n$ highly deviant with respect to $X$ come into play. For example, if $X_n \xrightarrow{P} X$ but the extremal values of $X_n$ increase in squared distance more quickly than they decrease in probability, $X_n$ will not converge to $X$ in quadratic mean.
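A standard example of this gap (my own illustration, not from the book): let $X_n = n$ with probability $\frac{1}{n}$ and $0$ otherwise. Then $X_n \xrightarrow{P} 0$, but the second moment blows up:

```python
# X_n equals n with probability 1/n and 0 otherwise.

def prob_far_from_zero(n):
    """P(|X_n - 0| > eps) for any 0 < eps < n; goes to 0, so X_n -> 0 in probability."""
    return 1 / n

def second_moment(n):
    """E[(X_n - 0)^2] = n^2 * (1/n) = n; diverges, so no quadratic-mean convergence."""
    return n**2 * (1 / n)

for n in (10, 100, 1000):
    print(n, prob_far_from_zero(n), second_moment(n))
```

The deviant value $n$ becomes ever rarer, but its squared distance grows faster than its probability shrinks.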

7: Models, Statistical Inference and Learning

In which the attentive reader notices the chapter’s tautological title—“statistical inference” and “learning” are taken to mean the same thing. Estimators are introduced, along with the definition of bias, consistency, and mean squared error.

8: Estimating the CDF and Statistical Functionals

In which the empirical distribution function and plug-in estimators set the stage for...

9: The Bootstrap

In which we learn to better approximate statistics via simulation.

10: Parametric Inference

In which we explore those models residing in finite-dimensional parameter space.

Fisher Information

The score function captures how the log-likelihood changes with respect to $\theta$:

$$s(X; \theta) = \frac{\partial \log f(X; \theta)}{\partial \theta}$$

Informally, this is the sensitivity of the likelihood to the parameter $\theta$. The second derivative of the log-likelihood (the derivative of the score) captures its curvature with respect to $\theta$; essentially, this represents how much information $X$ provides about $\theta$. The Fisher information is then the expected knowledge gain:

$$I(\theta) = \mathbb{E}_\theta\big[s(X; \theta)^2\big] = -\mathbb{E}_\theta\left[\frac{\partial^2 \log f(X; \theta)}{\partial \theta^2}\right]$$

Further reading.
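As a concrete check (a sketch of my own, not from the book): for a single Bernoulli($p$) observation, $\log f(x; p) = x \log p + (1-x)\log(1-p)$, so the score is $s(x; p) = \frac{x}{p} - \frac{1-x}{1-p}$, and the expected squared score recovers the closed form $I(p) = \frac{1}{p(1-p)}$:

```python
# For a Bernoulli(p) observation, log f(x; p) = x log p + (1 - x) log(1 - p),
# so the score is s(x; p) = x/p - (1 - x)/(1 - p).

def score(x, p):
    return x / p - (1 - x) / (1 - p)

def fisher_information(p):
    # I(p) = E[s(X; p)^2], averaging over the two outcomes x in {0, 1}.
    return (1 - p) * score(0, p) ** 2 + p * score(1, p) ** 2

p = 0.3
print(fisher_information(p), 1 / (p * (1 - p)))  # these agree
```

The expected score is zero, as it must be, and the expected squared score matches the closed form.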

Factorization Theorem

A statistic $T(X^n)$ is sufficient if and only if there are functions $g(t, \theta)$ and $h(x)$ such that $f(x^n; \theta) = g(T(x^n), \theta)\, h(x^n)$.

A statistic is sufficient if and only if the density’s dependence on $\theta$ can be reexpressed using just that statistic.

11: Hypothesis Testing and p-values

In which we make testable predictions and step towards traditional rationality. Trigger warning: frequentism.

Frequently Confused

Brian Fantana: They’ve done studies, you know. 60% of the time it works, every time.
Ron Burgundy: That doesn’t make sense.
Anchorman

Confidence intervals (“in 60% of experiments just like this, the interval we construct will contain the true value”) and credible intervals (“we believe that the true value lies within this interval with 60% probability”) are different things.

Frequentists define “confidence interval” to mean “theoretically, if we ran this experiment Lots of times, the intervals we construct would contain the true value 60% of the time”. Without understanding this nuance, some results seem counterintuitive:

In the example [Jaynes] gives, there is enough information in the sample to be certain that the true value of the parameter lies nowhere in a properly constructed 90% confidence interval!

[Size Joke Here]

In hypothesis testing, we’re trying to discriminate between two sets of possible worlds—formally, we’re partitioning our hypothesis space $\Theta$ into $\Theta_0$ (the null hypothesis) and $\Theta_1$ (the alternative hypothesis). Let’s consider all of the things which can happen, all of the outcomes we can observe—this is the sample space $\Omega$.

A test might take a sample $X$ and say “you’re in $\Theta_0$” (for example). We can divvy up $\Omega$ into the acceptance region $A$ (in which we accept the null hypothesis) and rejection region $R$.

The power of a test is the function that tells us the probability of rejecting the null hypothesis given some parameter: $\beta(\theta) = P_\theta(X \in R)$. Basically, we have probability $\beta(\theta)$ of rejecting the null hypothesis given that reality is actually parametrized by $\theta$.

We want to avoid rejecting the null hypothesis when $\theta \in \Theta_0$; therefore, we define some level of significance $\alpha$ for which $\beta(\theta) \leq \alpha$ whenever $\theta \in \Theta_0$. This means we’re avoiding Type I errors at least $1 - \alpha$ of the time. The maximum probability that we commit a Type I error is the size of the test: $\sup_{\theta \in \Theta_0} \beta(\theta)$.

The p-value Alignment Problem

Getting your understanding of p-values to align with how p-values actually work (whatever that means) can require an impressive amount of mental gymnastics. Let’s see if we can do better.

You’re running an experiment in which you hypothesize that all dogs spontaneously combust when you whistle just so. You divide the hypothesis space into $\Theta_0$ and $\Theta_1$ ($H_0$ and $H_1$ for short); that is, sets of worlds in which your conjecture is false (null) and true (alternative). Each $\theta$ is a way-the-world-could-be. By the definition of p-values, you may only reject the null hypothesis if all worlds in $H_0$ agree that the observation is unlikely.

The p-value is the probability (under the null hypothesis) of observing a value of the test statistic as or more extreme than what was actually observed:

$$\text{p-value} = \sup_{\theta \in \Theta_0} P_\theta\big(T(X^n) \geq T(x^n)\big)$$

Imagine if you could only Bayes update towards a set of worlds when all the other world models agree that the observation is unlikely under their models.
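For instance (my own toy example, not from the book): testing a coin with the simple null $H_0\colon p = \frac{1}{2}$ (a single world, so the “all null worlds agree” condition is trivial), the one-sided p-value for 60 heads in 100 flips is the null probability of 60 or more heads:

```python
from math import comb

def binomial_p_value(heads, flips, p_null=0.5):
    """One-sided p-value: the null probability of a result at least this extreme."""
    return sum(
        comb(flips, k) * p_null**k * (1 - p_null) ** (flips - k)
        for k in range(heads, flips + 1)
    )

p = binomial_p_value(60, 100)
print(p)  # around 0.028, so this outcome is unlikely if the coin is fair
```

At level $\alpha = 0.05$ we would reject the null; at $\alpha = 0.01$ we would not.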

12: Bayesian Inference

In which we return to the familiar.

Jeffreys’ Prior

We often desire that our priors be noninformative, since finding a reasonable subjective prior isn’t always feasible. One might think to use a uniform prior $f(\theta) \propto 1$; however, this doesn’t quite hold up.

Say I have a uniform prior $f(\theta)$ for the money in your bank account (each $\theta$ being a dollar amount). What if I want to know my prior for the square of the amount of money in your bank account ($\phi = \theta^2$)? Then by the change-of-variable equation for PDFs, we have $f_\Phi(\phi) = f_\Theta(\sqrt{\phi}) \cdot \frac{1}{2\sqrt{\phi}} \propto \phi^{-1/2}$, which is not uniform. We then desire that our prior be transformation invariant—under a noninformative prior, I should be ignorant about both the value of your balance and the squared value of your balance.
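A quick simulation of this change-of-variables effect (a Python/numpy sketch of my own; the unit interval stands in for the range of balances). A uniform prior on $\theta$ induces a decidedly non-uniform prior on $\theta^2$:

```python
import numpy as np

rng = np.random.default_rng(2)
theta = rng.uniform(0.0, 1.0, 100_000)  # "uniform prior" over the balance
phi = theta**2                          # induced prior over the squared balance

# If phi were also uniform, P(phi < 0.25) would be 0.25. Instead the
# induced density is f(phi) = 1 / (2 sqrt(phi)), which piles mass near 0:
share_below_025 = (phi < 0.25).mean()
print(share_below_025)  # near 0.5, since P(phi < 0.25) = P(theta < 0.5)
```

So being "ignorant" about $\theta$ forces a definite opinion about $\theta^2$—uniformity is not preserved.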

Jeffreys’ prior satisfies this desideratum—define

$$f(\theta) \propto \sqrt{I(\theta)},$$

where $I(\theta)$ is the Fisher information (discussed in the Ch. 10 summary):

$$I(\theta) = -\mathbb{E}_\theta\left[\frac{\partial^2 \log f(X; \theta)}{\partial \theta^2}\right]$$

Jeffreys’ prior isn’t totally noninformative—it encodes the information that we expect the prior to be transformation invariant, but that is rather weak information.

13: Statistical Decision Theory

In which decision theory is defined as the theory of comparing statistical procedures.

14: Linear Regression

In which the pieces start to line up.

The Bias-Variance Tradeoff

Image credit: Scott Fortmann-Roe

As more covariates are added to a model, the bias decreases while the variance increases. Let’s say you call 30 friends and ask them whether they agree with the Copenhagen interpretation of quantum mechanics, or with many-worlds. Say that you build a model with 5 covariates (such as age, sex, race, political leaning, and education level). This has decreased bias compared to a model which uses only education level, since descriptive power increases with the number of covariates. However, you increase variance in the sense that any given friend’s predicted answer is more likely to change each time you refit the model on a slightly different data set.

If you’re familiar with brain surgery (machine learning), we can use it to learn how to apply bandaids. Think of adding more covariates as sliding towards overfitting.

Read more.
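Here is the tradeoff in miniature (a numpy sketch of my own; the target function, noise level, and polynomial degrees are arbitrary choices): refitting polynomials on fresh noisy draws of $\sin(2\pi x)$, the low-degree model is stably wrong at a query point (high bias, low variance), while the high-degree model is unstably right (low bias, high variance):

```python
import numpy as np

rng = np.random.default_rng(3)
x_train = np.linspace(0, 1, 20)
x_test = 0.25  # a fixed query point, where the true value is sin(pi/2) = 1

def f(x):
    return np.sin(2 * np.pi * x)

def predictions_at_test(degree, trials=500):
    """Refit on fresh noisy data each trial; return the predictions at x_test."""
    preds = []
    for _ in range(trials):
        y = f(x_train) + rng.normal(0, 0.3, x_train.size)
        coeffs = np.polyfit(x_train, y, degree)
        preds.append(np.polyval(coeffs, x_test))
    return np.array(preds)

results = {}
for degree in (1, 6):
    p = predictions_at_test(degree)
    bias_sq = (p.mean() - f(x_test)) ** 2
    variance = p.var()
    results[degree] = (bias_sq, variance)
    print(degree, round(bias_sq, 4), round(variance, 4))
```

The degree-1 model (few covariates) shows large squared bias and small variance; the degree-6 model (many covariates) shows the reverse.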

Degrees of Confusion

There are numerous explanations for what degrees of freedom actually are. Some say it’s the number of independent parameters required by a model, and others explain it as the number of parameters which are free to vary. Is there a better framing?

Consider observations $X_1, \dots, X_n$, and let $\bar{X}$ be the sample mean. Then the residuals vector $(X_1 - \bar{X}, \dots, X_n - \bar{X})$ has $n - 1$ degrees of freedom. Why is this the case, and what does this mean?

Say we learn the values of the first $n - 1$ residuals $X_1 - \bar{X}, \dots, X_{n-1} - \bar{X}$. Then, conditional on our already knowing the sample mean, there is only one value that the final residual can take:

$$X_n - \bar{X} = -\sum_{i=1}^{n-1} (X_i - \bar{X}),$$

since the residuals sum to zero. $X_n - \bar{X}$ is totally determined by the first $n - 1$ values (this is related to Bessel’s correction).
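Numerically (a two-line numpy check of my own), the last residual really is pinned down by the rest:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=10)
residuals = x - x.mean()

# The residuals always sum to zero, so the last residual is
# determined by the first n - 1 of them:
last = -residuals[:-1].sum()
print(last, residuals[-1])  # equal, up to floating point
```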

Let’s ask a similar question—how many bits of information do we need to specify our model? Statistics isn’t acclimated to thinking in terms of bits, so “independent real-valued parameters” is the unit used instead. If you have more parameters, you need to gather more bits to have the same confidence that your explanation (model) fits the data you have observed. This is an implicit Occamian prior: amongst models which fit the data equally well, the one with the fewest degrees of freedom is preferred.

I’d like to thank TheMajor for letting me steal their wonderful explanation.

15: Multivariate Models

16: Inference about Independence

17: Undirected Graphs and Conditional Independence

In which (very) elementary graph theory and the pairwise and global Markov conditions are introduced.

18: Log-Linear Models

19: Causal Inference

Simpson’s Paradox

Sometimes you have two groups which individually exhibit a positive trend, but have a negative trend when combined.

Imagine it is 2019, and Shrek 5 has just come out. Being an internet phenomenon, the movie is initially extremely popular with younger demographics, but has middling performance with middle-aged people. Consider concessions sales at a single theater: the younger group buys, on average, 1.8 large popcorns per person, while the older group only averages .7 larges. If, say, 90% of the initial viewership at the theater is younger, then we have a weighted average of $0.9 \times 1.8 + 0.1 \times 0.7 = 1.69$ larges.

The older group actually likes the movie, and recommends it to their friends. The demographic decomposition is now fifty-fifty. During the second week, everyone is a bit hungrier and buys .1 more large popcorns per viewing on average. Then both groups are buying more popcorn, but the weighted average decreased: $0.5 \times 1.9 + 0.5 \times 0.8 = 1.35$ larges.

Obviously, the demographic split shifted the average. However, pretend you’re the manager for the concessions stand. You monitor average per-person purchases and erroneously conclude that something you did made people less likely to buy, even though both groups are buying more popcorn.

If you don’t control for confounders (in this case, demographics), the statistic of per-person purchases is not reliable for drawing conclusions.
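The bookkeeping, as a quick sketch (the 90/10 initial split is an illustrative assumption; the per-group rates come from the story above):

```python
def weighted_avg(rates, shares):
    return sum(r * s for r, s in zip(rates, shares))

# Week one: younger viewers average 1.8 larges, older viewers 0.7,
# with an assumed 90/10 demographic split.
week1 = weighted_avg([1.8, 0.7], [0.9, 0.1])

# Week two: each group buys 0.1 more, but the split is now 50/50.
week2 = weighted_avg([1.9, 0.8], [0.5, 0.5])

print(week1, week2)  # the average falls even though both groups buy more
```

Holding the week-one split fixed in week two would instead show the average rising by exactly 0.1, which is what conditioning on the confounder recovers.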

20: Directed Graphs

In which passive and active conditioning are built up to by exploring the capacities of directed acyclic graphs for representing independence relations.

21: Nonparametric Curve Estimation

22: Smoothing Using Orthogonal Functions

The top plot is the true density for the Bart Simpson distribution.

23: Classification

24: Stochastic Processes

In which we learn processes for dealing with sequences of dependent random variables.

25: Simulation Methods

Final Verdict

This text is very cleanly written and has reasonable exercises. Ideally, I would have gone through my calculus books first, but it wasn’t a big deal. The main downside is that I couldn’t find an answer key, but thanks to the generous help of my friends on Facebook and in the MIRIx Discord, it worked out.

I skimmed Ch. 21, as it seemed to be more about implementation than deep conceptual material. I intend to revisit Ch. 22 after reading Tao’s Analysis I, which is next on my list.

This book took me less than two weeks at a few hours of studying per day.

Forwards

Tips

I quickly realized that learning the basics of the R programming language is essential for getting a large portion of the value this text can offer.

Depth

Although I have fewer things to say on a meta level, I definitely got a lot out of this book. The most rewarding parts were when I noticed my confusion and really dove in to figure out what was going on—in particular, my forays into random variables, confidence intervals, p-values, and convergence types.

Red

I definitely haven’t arrived at full-fledged statistical sophistication, but I progressed so rapidly that I regularly thought “what caveman asked that lol” when encountering questions I had asked just days earlier.

This is another data point for a realization I’ve had over the last month: I’m so red, but I’ve been living like a white-blue. What does that even mean, and how is it relevant?

From Duncan’s excellent fake framework, How the “Magic: The Gathering” Color Wheel Explains Humanity:

The most salient dichotomy present here, in my opinion, is that of red and white:

Red and white disagree on questions of structure and commitment. Red is episodic, suspicious of rules and order because they constrain one’s ability to grow and change and freely choose. White is more diachronic, interested in finding the small compromises and sacrifices that will allow people to build trust and cooperate reliably.

White personalities often regard themselves as a continuous person, evolving in a somewhat orderly fashion. Red, on the other hand, feels disconnected from their past selves. After a certain amount of time, past-you feels like a different person who made choices that now seem ridiculous, if not alien. How old is your current iteration? Mine is three months, but what shocked me about this book was that I felt an intellectual disconnect with the me who existed four days prior.

Zooming out from All of Statistics, I think it’s telling that I achieved fairly tectonic change by learning to align my emotions with my reflectively-coherent desires, to clear away emotional debris, and to channel my passion into discrete tasks. I was living as if I were a white, but it’s now clear I’m a blue-red who exhibits white traits mostly in pursuit of peace of mind.

I no longer ask “how can I study most effectively?”, but rather, “what does it feel like to be me right now, and how can I bring that into alignment with what I want to do?”.

Red seeks freedom, and it tries to achieve that freedom through action… For a red agent, victory feels fiery, beautiful, magnificent, and fierce — it’s the climax of a dance or a brawl or a love affair, the feeling of cresting a summit or having successfully ridden a wave. It’s feeling alive.

If you are interested in working with me or others on the task of learning MIRI-relevant math, if you have a burning desire to knock the alignment problem down a peg—I would be more than happy to work with you. Messaging me may also have the pleasant side effect of your receiving an invitation to the MIRIx Discord server.

Although any shape in the sequence implied by the image does indeed have strictly different area than the circle it approximates, the analogy may still be helpful.

Please don’t wirehead thinking about this.

I’m aware that this section isn’t very implementable. I may write more on my post-CFAR experience in the near future.