I don’t know as much math as I should, but I often have occasion to wish that more programmers and software engineers knew about things like probability densities and calibration, because that would reduce some crucial inferential distances. I rarely wish more programmers knew more calculus.
I often have occasion to wish that more programmers and software engineers knew about things like probability densities...I rarely wish more programmers knew more calculus.
Does anyone else notice the contradiction here? This is a perfect illustration of my point: a “probability density” is a function whose integral gives you the probability. In fact, not only is the very definition of the object logically dependent on calculus, but understanding why the object exists requires knowledge of measure theory (specifically the Radon-Nikodym theorem).
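To make that concrete, here is a minimal Python sketch; the exponential shape and the 4-hour mean are assumptions of mine purely for illustration. The point is just that integrating a density from 0 to t yields a probability:

```python
import math

# Illustrative example: an exponential density for task-completion
# time with an assumed mean of 4 hours. Integrating it from 0 to t
# gives the probability of finishing within t hours.
def density(t, mean=4.0):
    lam = 1.0 / mean
    return lam * math.exp(-lam * t)

def prob_finish_by(t, steps=10_000):
    # Simple trapezoidal integration of the density from 0 to t.
    h = t / steps
    total = 0.5 * (density(0) + density(t))
    total += sum(density(i * h) for i in range(1, steps))
    return total * h

print(round(prob_finish_by(4.0), 3))  # ≈ 1 - e^{-1} ≈ 0.632
```

The density itself is not a probability (it can exceed 1); only its integral over an interval is.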
...which, come to think of it, is not surprising given that measure theory is required to define “probability” in the first place!
Yes, mathematical education is extremely screwed up. But the usual complaints and controversies don’t even begin to get to the real issue, which is that people can go through education in mathematics without appreciating the power of abstraction or understanding the need for the ideas in their head to form a coherent logical structure.
Ironically (in view of the parent comment), the solution is probably to teach computer programming! I don’t know anything about programming myself, but my impression is that this is exactly the kind of thing one needs to “get” in order to be a good programmer.
I meant that at, perhaps, a more basic level. Taboo “calculus”.
If you ask a programmer for an “estimate” (most often, of the time a given task will take), everyone thinks it natural to give you a single number.
That’s what I did along with everybody else. It came as a shock to me the first time I stumbled across the idea of expressing estimates with degrees of confidence: that is, my 50% confidence estimate should be such that half of the time I’d be early and half of the time I’d be late. (To many a programmer, the notion of finishing early compared to an estimate is counter-intuitive in its own right.)
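A calibration check on such estimates is easy to sketch; the task data below is made up for illustration. If my 50% confidence estimates are well calibrated, roughly half of the tasks should finish at or before the estimate:

```python
# Hypothetical record of (estimated_hours, actual_hours) pairs,
# where each estimate was given at 50% confidence.
tasks = [
    (8, 10), (4, 3), (16, 25), (2, 2), (6, 9),
    (12, 11), (3, 5), (8, 7), (5, 12), (10, 9),
]

# Well-calibrated 50% estimates should land early about half the time.
early_or_on_time = sum(1 for est, actual in tasks if actual <= est)
print(f"{early_or_on_time}/{len(tasks)} finished at or before the estimate")
```

In practice the interesting (and humbling) result is how far below half that fraction usually falls.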
Still later I hit upon (more accurately, got it hammered into my head) the idea that you could represent an estimate more faithfully as a distribution of the frequencies of finishing a similar task in time t. Taking t as a continuous parameter, you could represent the estimate as a curve.

And still later I figured out that you could interpret the same curve as a probability density for a single task.
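A sketch of that reinterpretation, with assumptions of my own: the historical times below are invented, and I fit a lognormal curve to them, a common choice for task durations though not one named above. The fitted curve is the frequency distribution read as a density, so its cumulative integral answers questions about a single future task:

```python
import math

# Invented completion times (hours) for similar past tasks.
times = [3.0, 4.5, 5.0, 6.0, 6.5, 8.0, 9.0, 12.0, 15.0, 22.0]

# Fit a lognormal by taking the mean and standard deviation of the logs.
logs = [math.log(t) for t in times]
mu = sum(logs) / len(logs)
sigma = math.sqrt(sum((x - mu) ** 2 for x in logs) / (len(logs) - 1))

def prob_done_by(deadline):
    # Lognormal CDF: P(T <= deadline) for a single future task.
    z = (math.log(deadline) - mu) / (sigma * math.sqrt(2))
    return 0.5 * (1 + math.erf(z))

print(round(prob_done_by(8.0), 2))   # chance of finishing within 8 hours
print(round(prob_done_by(20.0), 2))  # chance of finishing within 20 hours
```

The same curve thus does double duty: frequencies across many similar tasks, or a probability density for the next one.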
This series of insights unlocked for me a great deal of clarity about why project planning so often went wrong, and practical ideas for doing something about it.
In all this time I don’t think I learned anything about calculus that I didn’t know before. I had and still have no idea what a Radon-Nikodym theorem is (but I’ll check it out), and only the vaguest notion of what measure theory is.
I’ve rarely had occasion to wish that more programmers could recall offhand, say, what the integral of e^x is, or how to apply the product or chain rules, or how to get a Taylor series expansion.
the need for the ideas in their head to form a coherent logical structure
That’s actually what turned me off math years ago. As a computer programmer I’m most comfortable with the style where you first define something before you use it, and build gradually toward higher levels of abstraction. In school this was often seriously compromised in favor of “memorize this and never mind how to construct it in the first place”.