Liked the post. One of the two big questions it’s poking at is ‘how does one judge a hypothesis without researching it?’ To do that, one has to come up with heuristics for judging some hypothesis H* that correlate well enough with correctness to work as a substitute for actual research. The post already suggests a few:
Is evidence presented for H?
Do those supporting H share data for repeatability?
Is H internally inconsistent?
Does H depend on logical fallacies?
(Debatable) Is H mainstream?
I’ll add a few more:
If H is a physical or mathematical hypothesis, try to find a quantitative statement of it. If there isn’t one, watch out: crackpots are sometimes too busy trying to overthrow a consensus to make sure the math actually works.
Suppose some event is already expected to occur as an implication of a well-established theory. If H is meant to be a novel explanation for that event, H not only has to explain the event, it also has to explain why the well-established theory doesn’t actually entail the event.
Application to global warming. To establish that something other than anthropogenic CO2 is the main driver of current global warming, it is not enough to simply suggest an alternative cause; it’s also necessary to explain why the expected warming entailed by quantum theory and anthropogenic CO2 emissions would have failed to materialize.
Can H’s fans/haters discuss H without injecting their politics? It doesn’t really matter if they sometimes mention their politics around H, but if they can’t resist the temptation to growl about ‘fascists’ or ‘political correctness’ or ‘Marxists’ or whatever every time they discuss H, watch out. (Unless H is a hypothesis about fascism, political correctness or Marxism or whatever, obviously.)
If arguments about H consistently turn into arguments about who should bear the burden of proof, there’s probably too little evidence to prove H either way.
Hypotheses that implicitly assume current trends will continue or accelerate arbitrarily far into the future should be handled with care. (An exercise I like doing occasionally is taking some time series data that someone’s fitted an exponential for and fitting an S-curve instead.)
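That exercise can be sketched with synthetic data. Everything below (the series, the noise level, the starting guesses) is made up for illustration: generate a noisy S-curve, then fit both an exponential and a logistic to it and compare the residuals.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic "adoption" data drawn from a logistic (S-curve) with a little
# noise -- a stand-in for the kind of series people often fit exponentials to.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 30.0, 31)
y = 100.0 / (1.0 + np.exp(-0.4 * (t - 15.0))) + rng.normal(0.0, 1.0, t.size)

def exponential(t, a, b):
    return a * np.exp(b * t)

def logistic(t, L, k, t0):
    return L / (1.0 + np.exp(-k * (t - t0)))

p_exp, _ = curve_fit(exponential, t, y, p0=[1.0, 0.2],
                     bounds=([1e-6, 0.0], [1e3, 1.0]), maxfev=20000)
p_log, _ = curve_fit(logistic, t, y, p0=[80.0, 0.3, 10.0],
                     bounds=([1.0, 0.01, 0.0], [1e3, 2.0, 30.0]), maxfev=20000)

sse_exp = float(np.sum((y - exponential(t, *p_exp)) ** 2))
sse_log = float(np.sum((y - logistic(t, *p_log)) ** 2))
# Once the series starts to flatten, the S-curve fits far better than
# any exponential can.
```

The interesting failure mode runs the other way, of course: if you only observe the early, still-rising part of an S-curve, the exponential fits almost as well, which is exactly why extrapolating it is risky.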
If H is based on a small selection from many available data points, is there a rationale for that selection?
Application to a Ray Kurzweil slide. Low-hanging fruit, I admit. Anyway, look at this graph of how long it takes for inventions to enter mass use. Kurzweil plots points for only six inventions: the telephone, radio, TV, the PC, the cellphone and the Web. I would be interested to see how neat the graph would be if it included the photocopier, the MP3 player, the tape player, the CD player, the internet, the newspaper, the record player, the USB flash drive, the DVD player, the car, the laser, the LED, the VHS player, the camcorder and so on. The endnotes for Kurzweil’s book ‘The Singularity Is Near’ refer to a version of this chart and estimate ‘the current rate of reducing adoption time,’ but don’t seem to say why Kurzweil picked the technologies he did.
Looking at the credentials of people discussing H is a quick and dirty rule of thumb, but it’s better than nothing.
Does whoever’s talking about H get the right answer on questions with clearer answers? Someone who thinks vaccines, fluoride in the drinking water and FEMA are all part of the NWO conspiracy is probably a poor judge of whether 9/11 was an inside job.
How sloppily is the case for (or against) H made? (E.g. do a lot of the citations fail to match references? Are there citations or links to evidence in the first place? Is the author calling a trend on a log-linear graph ‘exponential growth’ when it’s clearly not a straight line? Do they misspell words like ‘exponential?’)
Are possible shortcomings in H and/or the evidence for H acknowledged? If someone thinks the case for/against H is open and shut, but I’m really not sure, something isn’t right.
And Daniel Davies helpfully points out that lying (whether in the form of consistent lies about H itself, or H’s supporters/skeptics simply being known liars) can be an informative warning sign.
* The second question being ‘do we have enough people researching obscure hypotheses and if not, how do we fix that?’ I don’t know how to start answering that one yet.
To establish that something other than anthropogenic CO2 is the main driver of current global warming, it is not enough to simply suggest an alternative cause; it’s also necessary to explain why the expected warming entailed by quantum theory and anthropogenic CO2 emissions would have failed to materialize.
This isn’t the actual epistemic situation. The usual measure of the magnitude of CO2-induced warming is “climate sensitivity”, the increase in temperature per doubling of CO2, and its consensus value is 3 degrees. But the physically calculable warming induced directly by CO2 is, in terms of this measure, only 1 degree. Another degree comes from the “water vapor feedback”, and the final degree from all the other feedbacks. But the feedback due to clouds, in particular, still has a lot of uncertainty; enough that, at the lower extreme, it would be a negative feedback that could cancel all the other positive feedbacks and leave the net sensitivity at 1 degree.
The best evidence that the net sensitivity is 3 degrees is the ice age record. The relationship between planetary temperature and CO2 levels there is consistent with that value (and that’s after you take into account the natural outgassing of CO2 from a warming ocean). People have tried to extract this value from the modern temperature record too, but it’s rendered difficult by uncertainties regarding the magnitude of cooling due to aerosols and the rate at which the ocean warms (this factor dominates how rapidly atmospheric temperature approaches the adjusted equilibrium implied by a changed CO2 level).
The important point to understand is that the full 3-degree sensitivity cannot presently be derived from physical first principles. It is implied by the ice-age paleo record, and is consistent with the contemporary record, with older and sparser paleo data, and with the independently derived range of possible values for the feedbacks. But the uncertainty regarding cloud feedback is still too great to say that we can retrodict this value, just from a knowledge of atmospheric physics.
The important point to understand is that the full 3-degree sensitivity cannot presently be derived from physical first principles.
Agreed. Nonetheless, as best I can calculate, Really Existing Global Warming (the warming that has occurred from the 19th century up to now, rather than that predicted in the medium-term future) is of similar order to what one would get from the raw, feedback-less effect of modern human CO2 emissions.
The additional radiative forcing due to increasing the atmospheric CO2 concentration from C0 to C1 is about 5.35 * ln(C1/C0) W/m^2. The preindustrial baseline atmospheric CO2 concentration was about 280 ppm, and now it’s more like 388 ppm; plugging in C0 = 280 and C1 = 388 gives a radiative forcing gain of around 1.8 W/m^2 due to more CO2.
Without feedback, climate sensitivity is λ = 0.3 K/(W/m^2); this is the expected temperature increase for an additional W/m^2 of radiative forcing. Multiplying the 1.8 W/m^2 by λ gives an expected temperature increase of 0.54K.
Eyeballing the HADCRUT3 global temperature time series, I estimate a rise in the temperature anomaly from about −0.4K to +0.4K, a gain of 0.8K since 1850. The temperature boost of 0.54K from current CO2 levels takes us most of the way towards that 0.8K increase. The remaining gap would narrow if we included methane and other greenhouse gases also. Admittedly, we won’t have the entire 0.54K temperature boost just yet, because of course it takes time for temperatures to approach equilibrium, but I wouldn’t expect that to take very long because the feedbackless boost is relatively small.
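The arithmetic above is easy to check in a few lines, using the standard 5.35 ln(C1/C0) forcing formula and the figures quoted in the thread:

```python
import math

C0, C1 = 280.0, 388.0                 # preindustrial and current CO2, ppm
forcing = 5.35 * math.log(C1 / C0)    # radiative forcing in W/m^2 (natural log)
lam = 0.3                             # no-feedback sensitivity, K/(W/m^2)
delta_T = lam * forcing               # expected no-feedback warming, K

print(round(forcing, 2), round(delta_T, 2))   # → 1.75 0.52
```

The small differences from the 1.8 W/m^2 and 0.54K in the text come from rounding the forcing to 1.8 before multiplying by λ.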
This might actually be a nice exercise in choosing between hypotheses. Suppose you had no paleo data or detailed atmospheric physics knowledge, but you just had to choose between 1 degree and 3 degrees as the value of climate sensitivity, i.e. between the hypothesis that all the feedbacks cancel, and the hypothesis that they triple the warming, solely on the basis of (i) that observed 0.8K increase (ii) the elementary model of thermal inertia here. You would have to bear in mind that most anthropogenic emissions occurred in recent decades, so we should still be in the “transient response” phase for the additional perturbation they impose…
Now you’ve handed me a quantitative model I’m going to indulge my curiosity :-)
You would have to bear in mind that most anthropogenic emissions occurred in recent decades, so we should still be in the “transient response” phase for the additional perturbation they impose...
I think we can account for this by tweaking equation 4.14 on your linked page. Whoever wrote that page solves it for a constant additional forcing, but there’s nothing stopping us rewriting it for a variable forcing:
$$\frac{dT(t)}{dt} = \frac{Q(t) - T(t)/\lambda}{C_s}$$
where T(t) is now the change in temperature from the starting temperature, Q(t) the additional forcing, and I’ve written the equation in terms of my λ (climate sensitivity) and not theirs (feedback parameter).
Solving for T(t),
$$T(t) = e^{-t/(\lambda C_s)} \left( \text{constant} + \int e^{t/(\lambda C_s)} \, \frac{Q(t)}{C_s} \, dt \right)$$
If we disregard pre-1850 CO2 forcing and take the year 1850 as t = 0, we can drop the free constant. Next we need to invent a Q(t) to represent CO2 forcing, based on CO2 concentration records. I spliced together two Antarctic records to get estimates of annual CO2 concentration from 1850 to 2007. A quartic is a good approximation for the concentration:
The zero year is 1850. Dividing the quartic by 280 gives the ratio of CO2 at time t to preindustrial CO2. Take the log of that and multiply by 5.35 to get the forcing due to CO2, giving Q(t):
Plug that into the T(t) formula and we can plot T(t) as a function of years after 1850:
The upper green line is a replication of the calculation I did in my last post—it’s the temperature rise needed to reach equilibrium for the CO2 level at time t, which doesn’t account for the time lag needed to reach equilibrium. For t = 160 (the year 2010), the green line suggests a temperature increase of 0.54K as before. The lower red line is T(t): the temperature rise due to the Q(t) forcing, according to the thermal inertia model. At t = 160, the red line has increased by only 0.46K; in this no-feedback model, holding CO2 emissions constant at today’s level would leave 0.08K of warming in the pipeline.
So in this model the time lag causes T(t) to be only 0.46K, instead of the 0.54K expected at equilibrium. Still, that’s 85% of the full equilibrium warming, and the better part of the 0.8K increase; this seems to be evidence for my guess that we wouldn’t have to wait very long to get close to the new equilibrium temperature.
Suppose you had no paleo data or detailed atmospheric physics knowledge, but you just had to choose between 1 degree and 3 degrees as the value of climate sensitivity, i.e. between the hypothesis that all the feedbacks cancel, and the hypothesis that they triple the warming, solely on the basis of (i) that observed 0.8K increase (ii) the elementary model of thermal inertia here.
If I knew that little, I guess I’d put roughly equal priors on each hypothesis, so the likelihoods would be the main driver of my decision. But to run this toy model, should I pretend the only variable forcing I know of is anthropogenic CO2? I’m going to here, because we’re assuming I don’t have ‘detailed atmospheric physics knowledge,’ and also because I haven’t run the numbers for other variable forcings.
To decide which sensitivity is more likely, I’ll calculate which value of λ produces a 0.8K increase from CO2 emissions by 2010 with this model and the above Q(t); then I’ll see whether that λ is closer to the ‘3 degrees’ sensitivity (λ between 0.8 and 0.9) or the ‘1 degree’ sensitivity (λ = 0.3). For a 0.8K increase, λ = 0.646, so I’d choose the higher sensitivity, whose λ is closer to 0.646.
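That inversion for λ can be done by bisection, since the warming reached by 2010 increases monotonically with λ in this model. Again, the CO2 curve and C_s below are hypothetical stand-ins rather than the thread’s actual fitted quartic, so the recovered λ will not equal 0.646; the sketch only illustrates the method.

```python
import math

# Hypothetical stand-in for the thread's quartic CO2 fit (the actual
# coefficients aren't given), so the recovered lambda will differ from
# the 0.646 quoted in the text.
def co2(t):
    return 280.0 + 108.0 * (t / 160.0) ** 4

def Q(t):
    return 5.35 * math.log(co2(t) / 280.0)

Cs = 50.0   # assumed heat capacity (same placeholder as before)

def warming_2010(lam, dt=0.05):
    """Integrate Cs * dT/dt = Q(t) - T/lam from 1850 (t=0) to 2010 (t=160)."""
    T = 0.0
    for i in range(int(160 / dt)):
        T += dt * (Q(i * dt) - T / lam) / Cs
    return T

# Bisect for the lambda that reproduces the observed ~0.8K rise;
# warming_2010 is monotone in lam, and 0.3 / 2.0 bracket the target.
lo, hi = 0.3, 2.0
for _ in range(40):
    mid = 0.5 * (lo + hi)
    if warming_2010(mid) < 0.8:
        lo = mid
    else:
        hi = mid
lam_star = 0.5 * (lo + hi)
```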