Your arguments about health care systems collapsing in the absence of lockdowns are highly emotive but not factual. The experience of Sweden, of Serbia, and of the US states which didn’t impose second lockdowns is that their health care systems didn’t collapse. You are arguing from the unexamined assumption that no lockdown means a failed health care system, and that assumption is empirically, provably false. Please update your views accordingly.
I see a lot of nit-picking of my evidence, but you have provided zero support for your own claim that lockdowns do more good than harm. I challenge you to come up with a published cost-benefit analysis that proves the same.
What would a good cost-benefit analysis include? There are a lot of harms caused by lockdowns. Some of them are difficult to quantify (eg my last point below), but I think it’s reasonable to demand that a cost-benefit analysis take into account at least three of the following six harms (which are far from an exhaustive list):
Increased poverty is directly correlated with lower life expectancy, so we should measure the lost years of life from increased poverty. This is a very long-term effect, which will be doing harm for years to come.
Unemployment and financial problems are both ‘scarring’ (it takes a long time to dig yourself out) and both cause mental health effects. We should include the long-term mental health cost of increased unemployment and individual financial problems.
Where lockdowns include school closures, the long-term effect on children’s development and socialisation is extreme. We should include estimates of the lifetime impact on children, which will include shorter life expectancy. (Yes, shorter education is correlated with shorter future life expectancy.)
Additional mental health effects directly attributable to the lockdown, including elevated rates of depression, stress and anxiety. Proper acknowledgement that some of these effects (eg increased alcoholism) have long-term consequences which should be accounted for.
Lockdown sucks for everyone, even those still employed, not in school, and not suffering from a formal mental health condition. We should acknowledge that this is a widespread disutility which deserves to be considered.
Lockdowns have set a deeply disturbing precedent that governments can remove almost all civil liberties when they declare there’s an emergency. This directly harms democracy and raises the risk of future loss of freedoms. (Yes this one is the hardest to quantify, but it’s important and I would like to see more people acknowledge it.)
An ideal cost-benefit analysis would acknowledge that the benefits of lockdowns in terms of lives saved are uncertain and would include a range of estimates, but if you find one that properly considers at least three of the above six points, I’ll accept it even if it uses a point estimate for benefits. (Estimates based on the initial Imperial College models should be treated as the top end of the range, because the Imperial College figures are too high for all the reasons I’ve already given.) However, since you’ve said that “It’s a strawman that policymakers compare lockdown to ‘do nothing’”, I do expect your superior cost-benefit analysis to compare lockdown to a more reasonable set of restrictions, rather than to a do-nothing option.
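To make the structure of such an analysis concrete, here is a minimal sketch in Python. Every number in it is a hypothetical placeholder, not an estimate; the point is only that a proper analysis sums harms across several of the categories above, uses a range for the uncertain benefit side, and compares lockdown against a less-restrictive baseline rather than against doing nothing.

```python
# Illustrative skeleton of a lockdown cost-benefit comparison.
# ALL numbers are hypothetical placeholders, chosen only to show the structure.

# Harms of lockdown relative to a 'reasonable restrictions' baseline,
# in years of life lost (directly or via QALY-style conversion).
harms = {
    "poverty_driven_life_expectancy_loss": 100_000,
    "long_term_mental_health_effects": 50_000,
    "childhood_development_and_education": 80_000,
}
total_harm = sum(harms.values())

# Benefit side: years of life saved by lockdown *over and above* the
# less-restrictive baseline, as a low/high range, because the
# epidemiological estimates are themselves uncertain.
benefit_low, benefit_high = 10_000, 60_000

# A harm/benefit ratio > 1 means the lockdown does net harm under
# these (hypothetical) inputs.
ratio_worst = total_harm / benefit_low
ratio_best = total_harm / benefit_high
print(f"harm/benefit ratio: {ratio_best:.1f}x to {ratio_worst:.1f}x")
# prints "harm/benefit ratio: 3.8x to 23.0x" with these placeholders
```

Swapping in defensible real-world inputs for each harm category, and for the benefit range, is exactly the work I’m asking a published analysis to do.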
So there it is. I challenge you: bring evidence or go home.
PS: If I seem to be beating the drum of long-term effects too hard, it’s because I’m still angry at the UK government’s (belated, poor-quality) excuse for a cost-benefit analysis which, among its many other failures, looked only at harm done over the next five years.
Lockdowns are more harmful than beneficial with the few exceptions of those countries like New Zealand that successfully kept the virus out. For any country where the virus is already endemic, the damage done by lockdowns was immense, and the benefits relatively limited. Remember that the counterfactual is not ‘do nothing’. It’s ‘enact some more reasonable set of restrictions’.
Prof Douglas Allen of SFU has just published a really good takedown of bad arguments in favour of lockdown. In his most unrealistic extreme scenario, intended to steelman the pro-lockdown case, he finds that lockdowns cost 3.6x more than their benefits; at the opposite end of the spectrum, they might have cost as much as 282x more than their benefits. (His figures are for Canada, but the argument should generalise to any developed country.)
This image is a good example of how distorted pro-lockdown arguments are. It’s taken from Neil Ferguson’s Imperial College model, which was used to argue for lockdown. Pro-lockdown cost-benefit analyses generally compare the blue line below (full lockdown) with the black line (do nothing), for an estimate of 120 lives saved per 100,000 people. It would be far more reasonable to compare that blue line to the brown line (which assumes case isolation and household quarantine, but no lockdown measures), which immediately halves the assumed benefit of lockdown measures. Then remember that the Imperial College model is grossly overstated and assumes no change in public behaviour in the absence of a lockdown. You get the picture.
I really do encourage you to read the whole study: http://www.sfu.ca/~allen/LockdownReport.pdf
To expand on your last sentence, anger can be a driver of positive change in the world. Greta Thunberg is angry that people are carelessly wrecking the only planet we have to live on. Racial justice protesters in the US are angry that black people keep getting killed by the police. Unless you’re a saint, being furious about some injustice is much more motivating than the dispassionate thought that ‘x would be a good deed’.
Having said that, I would agree with OP that most of the time, in most interpersonal situations, anger is damaging, and that for most people becoming less angry is a good thing. (Or at least, many people should become much more aware of why they are angry, and at whom, instead of letting themselves be generically angry and taking it out on the nearest available target.)
What is your definition of ‘contaminate’? If Devanney is correct that low doses of radiation are acceptable—and I believe he is—then much land which is described as ‘contaminated’ is in fact perfectly liveable. (Also see the people who illegally live in the Chernobyl exclusion zone.) For a reasonable definition of ‘contaminate’, then, it follows that a nuclear accident contaminates a much smaller area of land and is less expensive.
Your anti-nuclear argument also ignores the status quo of non nuclear energy. In America alone, fossil fuels (read coal) kill tens of thousands every year. So if you replaced all coal power with nuclear and had a Chernobyl every year (unrealistic extreme scenario), it would still save lives on net.
That said, I can see the argument that renewables are safer than both today, but OP is absolutely right to analyse the decades-long failure to replace coal with nuclear in the period before we had renewables.
I love the idea, but I’m sceptical based on genetics. Our civilisation has moved a lot of species around, from bringing placental mammals to Australia to exporting food crops around the world. Potatoes evolved in the Americas; now you can find them everywhere. Soybeans came from Japan / East Asia but are now heavily cultivated in Brazil.
I assume that any previous industrial civilisation, even if it were less adaptable than humans, would probably have spread outside its home continent, if only to look for oil and minerals. It would have ended up introducing species all over the place, as we have, and modern-day geneticists would be scratching their heads trying to figure out all sorts of mysteries about what evolved where. But so far as I know (I’m not an evolutionary biologist) we just don’t have those sorts of mysteries, where species suddenly jump continents.
So, sadly, I don’t think that the Earth has had a previous industrial civilisation at least since Australia separated from the other continents. I wouldn’t rule out previous pre-industrial civilisations, though. In fact, given the wide variety of species today which demonstrate at least some tool use—not just great apes but also capuchin monkeys, corvids, even octopuses—I’d be surprised if no previous species ever got to at least homo erectus level.
I would say the UFO thing is different, because the defence people are reporting physical phenomena which they can’t explain. So far as I know, the CIA didn’t have evidence that ESP worked and then decide to investigate it; rather, someone persuaded them to spend some money looking for evidence (which they didn’t find). The UFO reports give the impression that the DoD didn’t want to take them seriously, but they got smacked in the face by enough evidence that they didn’t have much choice.
Again, I’m not saying it’s definitely something weird. But if there’s a one-third chance the UFO reports are from something interesting, isn’t it worth investigating? Remember that aliens are only one of the interesting possibilities. The other ones are that China/Russia/someone has either made a big leap ahead in technology; or has figured out how to spoof multiple US military systems and is testing their abilities by generating UFO sightings. Or the third option, something we haven’t even thought of.
I think that we should be taking the possibility of UFOs more seriously. Over the last year, I’ve updated from thinking that UFOs are laughable to thinking there’s a 10-20% chance of actual alien visitation, and about another 10-20% of something else important going on. (Ie someone—presumably China—has either made a huge leap in drone technology or is getting good at spoofing multiple US military systems simultaneously.)
Why? Because a number of senior and generally sane people seem to be taking this seriously. The US military in particular is seeing a number of cases of unidentified phenomena—not just aerial, also submarine—where they observe what look like craft with capabilities not currently possible with modern technology. Some of these incidents, like the 2004 USS Nimitz encounter, were captured on multiple systems (the ship’s radar and aircraft cameras) and visually spotted by the pilots. The former Director of National Intelligence has said recently that there are many more sightings which haven’t been made public.
Yes, I know there are still other explanations, and the track record suggests sightings will turn out to be some kind of optical illusion or something, but I’m open to the possibility that not every incident is explicable in terrestrial terms.
The link below is a good long-form read which argues that the US Department of Defence is taking the possibility seriously.
Closure of schools. There’s a mountain of evidence that taking kids out of school is harmful. It’s not just the loss of education—although that doesn’t help—but also the loss of socialisation. Less education is directly correlated with shorter life expectancy—a US study found that just that effect was enough to mean that closing schools would cost more years of life than it saved, with 98% probability. That’s before adding in the burden from significantly higher rates of mental health problems in children who have been deprived of school.
Closure of schools is disproportionately harmful to those who already come from deprived backgrounds—think about the difference between a middle-class family where every child has their own iPad and the educated parents will help with homework, and a lower-class family which has one phone to share among everyone and the single parent doesn’t have time to both help the kids and work. Then consider that closure of schools means loss of free school meal schemes—this caused chaos and serious hardship even in the UK, and will have been worse in less developed countries.
Then there are the extreme cases: school is an escape for kids who live in homes with domestic violence. Teachers can also look for signs that kids need help or are suffering from abuse—if they’re physically present. Closure of schools means that kids in abusive situations are trapped 24/7 with their abusers—whose own behaviour may worsen under the stress of unemployment or isolation.
I think your Seeing the Smoke was interesting and the conclusions about human nature are right—particularly the point that most people will fail to do the obvious thing like leave the smoke-filled room out of fear of looking weird. That said, I really wish Cummings had drawn a different conclusion from your blogpost, because I strongly believe that lockdowns were the wrong response to Covid. Specifically, I would prefer Cummings had read Hans Rosling’s excellent book Factfulness, especially chapter 10 on the urgency instinct:
Rosling was investigating a disease in a remote area of Mozambique. The mayor of the nearest city asked Rosling if he should institute a roadblock to prevent sick people from coming to the city. Rosling wasn’t even sure the disease was contagious, but he thought better safe than sorry and said yes, put up the roadblock. When the village women, some of them carrying infants, found the road to market blocked, they asked local fishermen to carry them instead. The overloaded boats sank, causing the deaths of about 20 people, including the infants. Rosling later found that the disease was caused by eating improperly-prepared cassava and wasn’t contagious at all, meaning that the roadblock had killed those people for no reason.
Rosling blamed himself for those deaths for years afterwards. As he puts it: “Back in Nacala in 1981, I spent several days carefully investigating the disease but less than a minute of thinking about the consequences of closing the road. Urgency, fear, and a single-minded focus on the risks of the pandemic shut down my ability to think things through. In the rush to do something, I did something terrible.” [My bold.]
I believe that the decision to lock down followed exactly the same error mode as Rosling’s roadblock. Policy-makers got tunnel vision, focused on one thing—the pandemic—and ignored all the other consequences, including a mental health epidemic, mass unemployment/furlough, serious harm to children’s development, and many other issues. More generally, I suggest that the typical failure mode is to over-react to any threat which is reported in the media (a combination of the availability heuristic making that threat more salient, and asymmetric justice meaning that politicians are more scared of being accused of inaction than of wrong action). The failure mode can be summarised as: we must do something, this is something, therefore we must do it.
Pretty much everyone who has tried to calculate a cost-benefit analysis of lockdowns finds that they cost more years of life than they save. In developed countries, the typical estimate is 10x. (For example, Prof Philip Thomas of Bristol Uni estimating for the UK, or Dr Ari Joffe estimating for Canada.) For developing countries with poorer, younger populations, the ratio is worse. A multi-disciplinary South African team estimated that their lockdown would cost 29x more years of life than it saved.
Those estimates all assume that lockdowns are highly effective at preventing Covid deaths, which was intuitive as of March 2020. However, reality is not always intuitive and the experience of places like Sweden, and those US states which didn’t have winter lockdowns, suggests that the actual benefits of lockdowns are far smaller than thought. Studies like the Stanford University one find small-to-zero benefits of lockdowns vs less-restrictive measures.
For those of us who are more interested in physical truth than in the socially-constructed narrative, we need to start persuading the wider world that lockdowns do more harm than good. I hope you and the rest of our virtual monastery will join me in this effort.
I think the issue is that ‘raising awareness’ is used to mean three separate things. (I agree that the extra simulacra levels aren’t a helpful explanation.) I’ll use awareness of breast cancer as a reasonably non-controversial example:
1. Give people some useful knowledge or skill to help reduce the problem. Eg teach women how to examine themselves for lumps and when to seek medical advice.
2. Raise the profile of the issue (or in some cases inform people that the issue exists) such that more resources will be devoted to solving it. Eg publishing opinion pieces informing the public how many people are affected by cancer and calling for more facilities for treatment, or encouraging people to donate money to charities researching cures.
3. Signal that you are a virtuous person who is concerned about socially-approved causes. Eg wear a pink ribbon, or like Facebook pages from breast cancer charities.
I agree that there are a lot of people practicing virtue-signalling, while kidding themselves that they are doing level 2 profile-raising, and I also agree that a lot of the profile-raising is transparently ineffective. But I think that there are useful level-1 activities which also come under the banner of ‘raising awareness’ and I wouldn’t want to stigmatise those.
There are also some situations in which the level-2 activities are useful. I suspect you would disagree, but I think sexual assault is a fairly good example: a lot of people have gone to great efforts to explain to the general public that there is a widespread problem that needs action. The result has been an in-progress and partial change in social norms which may well succeed in reducing the levels of sexual assault.
I may be stretching the point about changing human psychology here, but:
Education is widely considered to include learning how to be an emotionally well-adjusted and responsible adult. Schools teach things like mindfulness and intra-personal conflict resolution. For example, kids learn how to recognise when they are reacting from anger, and therefore how to take a breath and try a more mature reaction. They learn about concepts like how stress can make you start catastrophising, and how to apply some cognitive behavioural therapy to themselves or a friend to head off mental health problems before they get started. All of this is considered as normal and basic as learning how to read: it is assumed that almost any functional adult will have these life skills.
The consequences for later life are immense and flow through every part of society. A more responsible and well-adjusted population is happier. Workers are more productive (although this may be expressed by the same amount of work being done faster so everyone has more time for the important things in life). Every kind of destructive behaviour is much less likely, whether in mild forms like chronic worrying through to extremes like domestic violence or addiction to hard drugs. People expect political leaders to be sane and reasonable, and will strongly reject those who pander to the worst human tendencies, meaning that most countries are better-run and have wiser policies.
I was thinking about this a little more, and I think that the difference in our perspectives is that you approached the topic from the point of view of individual psychology, while I (perhaps wrongly) interpreted Duncan’s original post as being about group decision-making. From an individual point of view, I get where you’re coming from, and I would agree that many people need to be more confident rather than less.
But applied to group decision-making, I think the situation is very different. I’ll admit I don’t have hard data on this, but from life experience and anecdotes of others, I would support the claim that most groups are too swayed by the apparent confidence of the person presenting a recommendation/pitch/whatever, and therefore that most groups make sub-optimal decisions because of it. (I think this is also why Duncan somewhat elides the difference between individuals who are genuinely over-confident about their beliefs, and individuals who are deliberately projecting overconfidence: from the point of view of the group listening to them, it looks the same.)
Since groups make a very large number of decisions (in business contexts, in NGOs, in academic research, in regulatory contexts...) I think this is a widespread problem and it’s useful to ask ourselves how to reduce the bias toward over-confidence in group decision-making.
“Does anyone have a clear example to give of a time/space where overconfidence seems to them to be doing a lot of harm?” I would say making investments in general (I am a professional investment analyst). This is an area where lots of people are making decisions under uncertainty, and overconfidence can cost everyone a lot of money.
One example would be bank risk modelling pre-2008: ‘our VaR model says that 99.9% of the time we won’t lose more than X, therefore this bank is well-capitalised’. Everyone was overconfident that the models were correct; they weren’t; chaos ensued. (I remember the risk manager of one bank—Goldman Sachs?—bewailing that they had just experienced a 26-standard-deviation event, which is basically impossible. No mate, your models were wrong, and you should have known better, because financial systems have crises every decade or two.)
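As an aside, the ‘26-standard-deviation event’ claim is easy to sanity-check: a loss denominated in standard deviations implicitly assumes a normal distribution, and under a normal distribution the one-sided tail probability of a 26-sigma move is so small that actually observing one is, for practical purposes, proof the distributional assumption was wrong. A quick check in Python (using only the standard library):

```python
import math

# One-sided tail probability of a z-score under the normal distribution:
# P(Z > z) = erfc(z / sqrt(2)) / 2
def normal_tail(z: float) -> float:
    return 0.5 * math.erfc(z / math.sqrt(2))

p = normal_tail(26)
print(p)  # on the order of 1e-149
```

A probability of roughly 10^-149 per observation means you would not expect to see such an event once in the lifetime of the universe, let alone in a single decade of trading; the sensible conclusion is that returns have fat tails the model ignored.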
Speaking from personal experience, I’d say a frequent failure-mode is excessive belief in modelling. Sometimes it comes from the model-builder: ‘this model is the best model it can be, I’ve spent lots of time and effort tinkering with it, therefore the model must be right’. Sometimes it’s because the model-builder understands that the model is flawed, but is willing to overstate their confidence in the results, and/or the person receiving the communication doesn’t want to listen to that uncertainty.
While my personal experience is mostly around people (including myself) building financial models, I suggest that people building any model of some dynamic system that is not fully understood are likely to suffer the same failure-mode: at some point down the line someone gets very over-confident and starts thinking that the model is right, or at least everyone forgets to explore the possibility that the model is wrong. When those models are used to make decisions with real-life consequences (think epidemiology models in 2020), there is a risk of getting things very wrong, when people start acting on the basis that the model is the reality.
Which brings me on to my second example, which will be more controversial than the first one, so sorry about that. In March 2020, Imperial College released a model predicting an extraordinary death toll if countries didn’t lock down to control Covid. I can’t speak to Imperial’s internal calibration, but the communication to politicians and the public definitely seems to have suffered from over-confidence. The forecasts of a very high death toll pushed governments around the world, including the UK (where I live) into strict lockdowns. Remember that lockdowns themselves are very damaging: mass deprivation of liberty, mass unemployment, stoking a mental health pandemic, depriving children of education—the harms caused by lockdowns will still be with us for decades to come. You need a really strong reason to impose one.
And yet, the one counterfactual we have, Sweden, suggests that Imperial College’s model was wrong by an order of magnitude. When the model was applied to Sweden (link below), it suggested a death toll of 96,000 by 1 July 2020 with no mitigation, or half that level with more aggressive social distancing. Actual reported Covid deaths in Sweden by 1 July were 5,500 (second link below).
So it’s my contention—and I’m aware it’s a controversial view—that overconfidence in the output of an epidemiological model has resulted in strict lockdowns which are a disaster for human welfare and which in themselves do far more harm than they prevent. (This is not an argument for doing nothing: it’s an argument for carefully calibrating a response to try and save the most lives for the least collateral damage.)
Imperial model applied to Sweden: https://www.medrxiv.org/content/10.1101/2020.04.11.20062133v1.full.pdf
Covid deaths in Sweden by date: https://www.statista.com/statistics/1105753/cumulative-coronavirus-deaths-in-sweden/
Thank you for an interesting article. It helped clarify some things I’ve been thinking about. The question I’m left with is: how practically can someone encourage a culture to be less rewarding of overconfidence?
I guess I’m feeling this particularly strongly because in the last year I started a new job at a company much more tolerant of overconfidence than my previous employer. I’ve recalibrated my communications with colleagues to the level that is normal for my new employer, but it makes me uncomfortable: my job is to make investment recommendations, and I feel like I’m not adequately communicating risks to my colleagues, because if I do, no one will take up my recommendations; they’ll buy riskier things which are pitched with greater confidence by other analysts. Other than making sure I’m the least-bad offender consistent with actually being listened to, is there something I can do to shift the culture?
And please, no recommendations on the lines of ‘find another job’, that’s not practical right now.