I just finished the CMU OLI Probability & Statistics course, which I started… somewhere back in March or June. I think, overall, it’s a pretty good statistics course. What I like best about it is that it is heavy on quizzes and exercises with real-world datasets, so I learned a bit more about R as well as the statistical basics.
It covers, from a fairly practical standpoint: data graphing; descriptive statistics like means, medians, and distributions; the rules of probability; conditional probability; probability trees; Bayes’s theorem; the binomial and normal distributions in particular; confidence intervals; z-tests; t-tests; ANOVA F-tests; the chi-squared test; and linear models.
It has some drawbacks, of course: it’s largely NHST-based, as one would expect; the Java applets make copy-and-paste impossible on my Linux system, which made answering questions a bit annoying; the R code is not really explained, so you have to figure things out yourself; there’s a jump in difficulty between the units, and the one on the basic laws of probability seems weirdly long and interminable; and in general, parts of it can be very repetitious (if I never again have to specify what the null hypothesis is and what H_1 is, it will be too soon) and trivial, leading to occasional ‘-_- yeah, whatever’ reactions where I get sick of a pedantic question and just click through the possibilities.
But overall I’m pretty glad I did it. I understand much better the tools I was using to analyze my self-experiments and hopefully it’ll be a good base for tackling a Bayesian textbook like Kruschke’s 2010 Doing Bayesian Data Analysis.
(Google+ mirror)