in general, when experts are dealing with some big unfathomable future, and it’s a complex system, I tend to discount that. The complexity makes it almost impossible to predict.
Also, if they are using a model, I pretty much discount everything I hear. But if they are just looking at data like a scientist and saying, “When this happens, that happens,” then I’m going to put more stock in it.
—Scott Adams, Interview with Julia Galef, February 10, 2014
But you can’t be an effective fox just by letting the data speak for itself — because it never does. You use data to inform your analysis, you let it tell you that your pet hypothesis is wrong, but data are never a substitute for hard thinking. If you think the data are speaking for themselves, what you’re really doing is implicit theorizing, which is a really bad idea (because you can’t test your assumptions if you don’t even know what you’re assuming).
— Paul Krugman, “Sergeant Friday Was Not A Fox”
“Just looking at the data like a scientist” does not give you magic scientist powers. Models of the world are what allow you to predict it, without need for magic scientist vision.
Adams doesn’t elaborate on this point, but I read him as saying that if you’ve actually measured things and collected data that bears on your point, then your model is more likely to be correct.
For example, suppose a model says that raising the minimum wage reduces employment. That’s a pretty common model in economics, and it can be backed up with a lot of math. However, I would not find that alone convincing. On the other hand, if an economist goes out into the world and looks at what actually happened when the minimum wage was raised, that would be more convincing. If they can figure out a way to do an experiment in which, for example, 5 nearby towns raise their minimum wage, 5 keep it the same, and another 5 lower it, that would be even more convincing.
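A toy simulation can make the shape of that 15-town experiment concrete (every number here is made up purely for illustration; the point is the design, not the effect sizes):

```python
import random

random.seed(0)

# Hypothetical version of the 15-town experiment: 5 towns raise the
# minimum wage, 5 keep it the same, 5 lower it. Each town's employment
# change is an assumed true effect plus idiosyncratic noise.
TRUE_EFFECTS = {"raise": -0.5, "same": 0.0, "lower": 0.5}  # made-up values

def run_experiment(towns_per_group=5, noise_sd=1.0):
    results = {}
    for group, effect in TRUE_EFFECTS.items():
        results[group] = [effect + random.gauss(0, noise_sd)
                          for _ in range(towns_per_group)]
    return results

results = run_experiment()
for group, changes in results.items():
    mean = sum(changes) / len(changes)
    print(f"{group}: mean employment change {mean:+.2f} percentage points")
```

Note that with only 5 towns per group, the noise can easily swamp the assumed effect — which is part of the point: even a well-designed experiment needs enough data before its answer beats the model.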
Another example: consider a model that says
heart disease kills people
heart disease is correlated with high cholesterol
eggs contain lots of cholesterol
Those three statements are reasonably well established and backed up by data. However, if you throw in a model that says dietary cholesterol causes in-body cholesterol, and in-body cholesterol causes heart disease, and therefore eating eggs reduces life expectancy, you’ve jumped way beyond what the data support. On the other hand, if you compare the levels of all-cause morbidity among people who eat eggs and people who don’t or, better yet, do a multiyear controlled experiment in which the only dietary variation between groups is that some people eat eggs and others don’t, the answers you get are far more likely to be correct.
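The jump from correlated claims to a causal chain can be made concrete with a small simulation. In this deliberately made-up model, a hypothetical lifestyle factor drives both egg consumption and in-body cholesterol, while eggs themselves do nothing — yet an observational comparison still “finds” a difference:

```python
import random

random.seed(1)

# Hypothetical confounder: some lifestyle factor raises both the chance
# of eating eggs and in-body cholesterol. Eggs themselves have no effect
# in this model.
def simulate_person():
    lifestyle = random.gauss(0, 1)
    eats_eggs = lifestyle + random.gauss(0, 1) > 0
    body_cholesterol = lifestyle + random.gauss(0, 1)  # eggs play no role
    return eats_eggs, body_cholesterol

people = [simulate_person() for _ in range(10_000)]
egg_eaters = [chol for eats, chol in people if eats]
abstainers = [chol for eats, chol in people if not eats]

print("egg eaters mean cholesterol:", sum(egg_eaters) / len(egg_eaters))
print("abstainers mean cholesterol:", sum(abstainers) / len(abstainers))
# Egg eaters show higher cholesterol even though eggs do nothing here;
# only randomizing egg consumption would expose that.
```

This is exactly the failure mode of chaining well-supported correlations into an unsupported causal conclusion.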
Here’s another one: you have lots of detailed calculations that say that if you smash two protons together at .999999c relative velocity, and you do it a few million times, you’ll see certain particles show up in the debris with very precise probabilities. But when you run the experiment, you discover that the fractions of different particles you see don’t quite match what you expected, because there’s an additional resonance you didn’t know about and didn’t include in the model.
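A toy version of that mismatch (the particle names and probabilities below are invented for illustration): the “true” process includes a resonance channel the model omits, so the observed fractions come out systematically below the predictions:

```python
import random

random.seed(2)

# Made-up numbers: the model predicts fractions over three known
# channels, but the real process also feeds an unmodeled resonance.
PREDICTED = {"pion": 0.60, "kaon": 0.30, "proton": 0.10}
TRUE_PROBS = {"pion": 0.55, "kaon": 0.28, "proton": 0.09, "resonance": 0.08}

N = 100_000
channels = list(TRUE_PROBS)
weights = [TRUE_PROBS[c] for c in channels]
counts = dict.fromkeys(channels, 0)
for outcome in random.choices(channels, weights=weights, k=N):
    counts[outcome] += 1

for channel in channels:
    observed = counts[channel] / N
    predicted = PREDICTED.get(channel, 0.0)
    print(f"{channel}: observed {observed:.3f} vs predicted {predicted:.2f}")
```

Only the experiment reveals the extra channel; no amount of internal consistency in the model would have.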
In other words, empirical data beats mere models. Models can be self-consistent and plausible yet still fail to reflect the real world. Models that go beyond what the data say run the risk of assuming causal connections that don’t exist (dietary cholesterol to in-body cholesterol) or of missing factors outside the model (maybe eggs do increase the risk of heart disease but reduce the risk of cancer) that are more important.
Of course, all these experiments are really hard to do and take years and millions, even billions, of dollars, so often we muddle along with seriously flawed models instead. However, we need to remember that models are just models, not data, and be reasonably skeptical of their recommendations. In particular, if we’re about to do something really expensive and difficult, like changing a nation’s dietary preferences, based on nothing more than a model, maybe we should step back and spend the money and the time needed to collect real data before we go full speed ahead.
Fair enough—political conditioning has caused me to assume that any non-specialist saying “don’t trust models, just ‘look at the data’,” is the victim of some sort of anti-epistemology.
In context, it’s less likely that that’s the case, but I still think this quote is painting with much too wide a brush.
political conditioning has caused me to assume that any non-specialist saying “don’t trust models, just ‘look at the data’,” is the victim of some sort of anti-epistemology.
I would argue that it is this political conditioning itself that is the anti-epistemology.
Please, please, kids, stop fighting! Maybe Eugine_Nier & elharo are right about the necessity of looking at the world to decide whether a model’s true, and maybe Manfred & fezziwig have a point about observations and their interpretation not being cleanly separable from the use of models.
Also, if they are using a model, I pretty much discount everything I hear. But if they are just looking at data like a scientist and saying, “When this happens, that happens,” then I’m going to put more stock in it.
What? Scientists do use models. Assuming charitably that he’s not mistaken or bullshitting about what scientists do, by “model” he must mean something different—what?
I’m fairly certain that’s actually horrible advice. It boils down to “substitute your judgement for that of professionals on precisely those problems that are hardest.”
I think this steelman is not quite true to the spirit of the original. The contrast he draws between “using a model” and “looking at data like a scientist” is especially strange. One wonders what he thinks of meteorology.
Well, given that meteorologists (or at least climate scientists) have for the last couple of decades been predicting warming and sea level rises that never seem to occur, I’d say it’s a good example.
I would argue that it is this political conditioning itself that is the anti-epistemology.
I don’t suppose you could contribute substance rather than just accusation?
Prediction is going beyond the data, so a model that never goes beyond the data isn’t going to be much use.
Climate change models incorporated data, so they are not purely theoretical like the economic model you mentioned.
I … think he’s talking about basic correlation, statistical analysis, that sort of thing?
(I enjoy Scott’s writing, but I didn’t upvote the grandparent.)
I’m fairly certain that’s actually horrible advice. It boils down to “substitute your judgement for that of professionals on precisely those problems that are hardest.”
More like “discount status on problems where expertise is a poor predictor of accuracy”.