Your examples of “highly politicised science” are very one-sided (consider autism-vaccines, GM crops, stem cell research, water fluoridation, evolution), which I suppose reinforces your point.
In your set-up, some reference classes correspond to systematic biases, and some to increased/decreased variance: they don’t all change your probability distribution in the same way.
For example: it takes extreme levels of arrogance to conclude, in ignorance, that most scientists are incorrect in the area of their speciality. By this argument, you should place your own estimate of the most probable future course of the climate close to the scientific consensus (and the scientific consensus on global warming is pretty consistent). However, the science has been heavily politicised, so alternative theories may have been undermined; therefore your estimate should allow a large variance. “I agree with the scientific consensus, but tentatively”, in other words.
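To make the distinction concrete, here is a toy sketch (my own illustrative numbers, not anyone’s actual climate estimates) of how a bias-type reference class and a variance-type reference class act differently on the same probability distribution:

```python
# Toy sketch: two kinds of reference-class adjustment to the same estimate.
# All numbers are made up for illustration.
from statistics import NormalDist

# Baseline: defer to the expert consensus (say, degrees of warming by 2100).
consensus = NormalDist(mu=3.0, sigma=0.5)

# A "systematic bias" reference class (e.g. "experts in this field tend to
# overstate by 20%") shifts the mean but leaves the spread alone.
biased = NormalDist(mu=consensus.mean * 1.2, sigma=consensus.stdev)

# A "politicised field" reference class keeps the same best guess but
# widens the variance: "I agree with the consensus, but tentatively."
tentative = NormalDist(mu=consensus.mean, sigma=consensus.stdev * 3)

# Both adjustments raise P(warming > 4), but through different mechanisms:
# a shifted centre versus fatter tails around the same centre.
print(1 - consensus.cdf(4.0))
print(1 - biased.cdf(4.0))
print(1 - tentative.cdf(4.0))
```

The point is that the two classes are not interchangeable: the bias class moves where your probability mass sits, while the variance class only changes how concentrated it is.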
Other example: doomsday-ish predictions. Overpopulation, peak oil, overfishing: in each case, the science behind the prediction was pretty much right. Population went up; there would have been peak oil with the extraction methods of the time; fish stocks were destroyed. What was wrong was underestimating future progress and people’s ability to adapt to new situations. So scientific doomsday-ish predictions are a reference class with a special status: correct on their merits, incorrect on their supposed consequences.
So you have not only to figure out which reference classes your issue belongs to, but also the type of each reference class, and its full effect on your estimates.
I’d say reference class reasoning can be saved, but it’s more of an art than an easily shared rational tool. If used honestly, you can get good information—but it’s easy to abuse, and very unlikely to be convincing to others.
Other example: doomsday-ish predictions. Overpopulation, peak oil, overfishing: in each case, the science behind the prediction was pretty much right.
This is not what they were about. What they predicted was massive suffering in each case. Overpopulation doomsdayers predicted food and resource shortages, wars over land and water, and such; peak oilers predicted total collapse of the economy, the death of over half of humanity, and such. Apart from its supposedly massive consequences, peak oil is about as interesting as peak typewriters, which is to say not at all unless you work in the oil (or typewriter) industry.
By the way, the predictions of the underlying process were false in all three cases you mention (population growth has been sublinear for quite some time, peak oil reliably fails to arrive on any of the predicted dates, and total fish production is increasing via aquaculture), or true in only the most restricted way, far more restricted than what was claimed (population did increase at all, old oil fields are depleting, wild fish production is not increasing). But this is irrelevant: the core of doomsdayer predictions is the doom part, which almost invariably doesn’t happen.
or true in only the most restricted way, far more restricted than what was claimed
That’s exactly my position. Doomsday predictions are combinations of reasonable science and unwarranted conclusions. They’re like the mirror image of homeopathy, which has wild craziness leading to a partially correct conclusion: “take this pill, and you’ll feel better”.