The meta-analysis Ivermectin and outcomes from Covid-19 pneumonia: A systematic review and meta-analysis of randomized clinical trial studies concludes: “Our study suggests that ivermectin may offer beneficial effects towards Covid-19 outcomes. More randomized clinical trial studies are still needed to confirm the results of our study.”
On the other hand, Ivermectin for the treatment of COVID-19: A systematic review and meta-analysis of randomized controlled trials concludes: “In comparison to SOC or placebo, IVM did not reduce all-cause mortality, length of stay or viral clearance in RCTs in COVID-19 patients with mostly mild disease. IVM did not have an effect on AEs or severe AEs. IVM is not a viable option to treat COVID-19 patients.”
What did the studies do differently to come to their conclusions? How do I go about interpreting which of them provides the better analysis?
I had a quick look and essentially it seems the latter found fewer studies (10 vs 19) and therefore fewer patients (1173 vs 2768)*
They have similar central estimates for the RR of all-cause mortality (0.37 vs 0.31), but because it has more patients the former has a tighter CI (0.15 to 0.62) and concludes that there is an effect, while the latter has a wider CI (0.12 to 1.13) and concludes that there isn’t one.
The latter could claim that there is as yet insufficient evidence of an effect based on the studies in their analysis, but not that there isn’t an effect. I especially take issue with the claim that “IVM is not a viable option for treating COVID-19 patients” when they themselves take such pains to talk about how low quality much of the evidence is!
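To see why fewer patients widens the interval: the standard error of log(RR) is dominated by the number of events in each arm, so a pooled analysis with more deaths tightens the CI around the same central estimate. A minimal sketch using the Katz log method, with hypothetical counts (the 1% vs 2.5% mortality rates below are made up, not taken from either meta-analysis):

```python
import math

def rr_ci(a, n1, c, n2, z=1.96):
    """Risk ratio and 95% CI via the Katz log method.
    a/n1 = deaths/patients in the treatment arm, c/n2 = same for control."""
    rr = (a / n1) / (c / n2)
    se = math.sqrt(1/a - 1/n1 + 1/c - 1/n2)  # SE of log(RR)
    lo = rr * math.exp(-z * se)
    hi = rr * math.exp(z * se)
    return rr, lo, hi

# Same underlying rates (1% vs 2.5%), different pooled sample sizes:
print(rr_ci(6, 600, 15, 600))     # small pool: CI crosses 1 (not "significant")
print(rr_ci(12, 1200, 30, 1200))  # doubled pool: same RR, CI now excludes 1
```

With the same RR of 0.4, the smaller pool cannot rule out RR ≥ 1 while the larger one can, which is essentially the situation the two meta-analyses are in.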
The two meta-analyses also differ on their ratings of different papers—for instance the largest study (n=400, Lopez-Medina et al.) is rated as High quality (7 out of 7) in the former but at high risk of bias in the latter (due to deviations from intended interventions).
Scanning the paper, there are a few issues. For the most part the problems are mitigated, but some concerns remain:
- There was a period of 17 days where the placebo group were receiving ivermectin(!)
  - The patients from these 17 days were excluded from the primary analysis and additional subjects were enrolled
- The primary outcome was changed during the study
  - This occurred ~30% of the way through the study
  - The reasoning for changing from the old outcome seems reasonable
  - It’s hard to say whether the selection of the new outcome could have introduced bias
- The placebo was changed during the study from dextrose water to something tasting more like ivermectin
  - This was ~25% of the way through the study
  - Results did not differ much between the 2 placebo groups
  - I don’t feel like this would make a massive difference, but I’m not sure
This paper is fairly typical of the quality of the included studies (according to meta-analysis 2) or at the top end of study quality (according to meta-analysis 1), which causes me some concern.
In conclusion, if I were offered ivermectin I would take it at this point (the side effects seem small) and might even look to sign up for a trial if I had COVID—in the UK some people would be eligible for this one.
* 6 studies were common to both analyses.
The Lopez-Medina study received some methodological complaints; see the JAMA letter.
Ivermectin proponents seem to consistently push for a regimen of:
- high dosage (0.2 mg/kg, once a week for prevention)
- early usage, ideally as prevention
- usage with/after meals
If they’re right, one can imagine studies that see no effect because of low dosage, late administration, or administration on an empty stomach (the anti-parasite regimen), which the Lopez-Medina study does.
One thing the negative meta-study noted was the variation in doses between studies (12–210 mg).
It seems like one of the reasons they found fewer studies is that they only searched until March 22, 2021, which is odd for a review paper published 28 June 2021.
The pro-ivermectin one was published earlier, and its search ran until May 10th.
Yes, I think this definitely had some effect (I misread 10th May as 10th March originally!).
I also think the underlying search must have been better in #1. Of the 13 studies present in #1 but missing in #2, 6 were from 2020 and 7 from 2021 (I haven’t looked at specific dates for the 2021 ones, but I’d guess some are from before March 22nd).
For #1 the original search turned up 1237 articles, of which 95 had their full text analysed, leading to the 19 included. The corresponding numbers for #2 are 256, 12 and 10.
Some general comments about medical research. Source: I have studied the statistics books in detail, and have read several cubic meters of medical papers and learned most of the lessons the hard way.
When reading medical papers look for
1. Funding sources for the study or for the authors of the study (e.g. “speaking fees” and “consulting fees”). He who pays the piper calls the tune.
2. Statistical incompetence, which is rife in medical research. For example, you routinely see “lack of statistical significance” interpreted as “proof of no effect”.
3. Pre-publication of the study design, end points and intended statistical analysis. There is a lot of scope to move the goalposts and engage in p-hacking and other nefarious activities.
4. Differences between the abstract and the text. Often you can read the abstract and wonder if it refers to the same paper.
5. In meta-analyses, look for whether the selection criteria were adhered to, or whether subjective criteria were used to exclude inconvenient studies.
6. Financial interests. For example it is notable that countries like India, that make generic drugs, appear to be more favourable to generic drugs. Meanwhile in the US, there seems to be a strong bias in favour of drugs in patent.
7. Read the methods section very carefully. Once you have read enough papers this will become instinctive.
8. Be ready for the vast majority of papers to be of low quality and worthless.
9. I routinely see studies rigged to deliver a predetermined outcome. For example, if you want to find a non-statistically significant effect which can be misrepresented as “no effect”, then run a small study, for a short period, and use suboptimal doses or take other measures to minimize differences between the groups compared.
This all sounds rather grim, an extreme case of the hype and uneven quality that probably afflicts all research areas now… Number 8 seems especially grim, even though it doesn’t involve outright corruption, since it means that any counter-institution trying to do quality control will be overwhelmed by the sheer quantity of papers… Nonetheless: What you describe is a way to check the quality of an individual paper. Is there any kind of resource that reliably turns up high-quality papers? Perhaps literature reviews or citation counts?
No, you just have to filter. In any particular field you get to know the agendas and limitations of many of the researchers: X is a shill for company Y, A pushes the limits of p-hacking, B has a fixed mindset about low-fat diets, etc. Some researchers also tend to produce me-too and derivative papers; others are more innovative.
Also you do get quicker at spotting the fatal flaw.
In finance there are blogs that pick out recent good papers; these are a huge time saver (e.g. Alpha Architect which I have mentioned before).
The obvious difference is that the second does not include Elgazzar while the first does. This is bad for the first one, because Elgazzar faked their data so incompetently that the study has been retracted:
https://grftr.news/why-was-a-major-study-on-ivermectin-for-covid-19-just-retracted/
https://gidmk.medium.com/is-ivermectin-for-covid-19-based-on-fraudulent-research-5cc079278602
https://www.theguardian.com/science/2021/jul/16/huge-study-supporting-ivermectin-as-covid-treatment-withdrawn-over-ethical-concerns
I have not managed to find Hariyanto et al reproduced yet (any help welcome), so I don’t know what effect removing Elgazzar would have on that specific meta-study.
For Bryant et al, though, this is the result with both Elgazzar studies in:
This is the result with both Elgazzar studies out:
The RR moved, but the result is fundamentally the same.
Do you think it would change the result for Hariyanto et al?
Another meta-analysis (Bryant et al) has a very similar title but positive claims: Ivermectin for Prevention and Treatment of COVID-19 Infection: A Systematic Review, Meta-analysis, and Trial Sequential Analysis to Inform Clinical Guidelines.
The authors have put out an official rebuttal of the negative meta-analysis, which is an interesting read and points to many of its perceived flaws.
The comments on the preprint of the negative study (Roman et al) are also interesting.
My current impression is that the negative study is not very high quality, whether through rush to publish, incompetence or malice.
For the sake of argument, I still have to look at which studies Roman et al included that were omitted by Bryant et al and Hariyanto et al, as that would reveal any pro-IVM biases.
Well worth reading the linked material—quite damning.
A recent preprint compares Roman et al and Bryant et al: Bayesian Meta Analysis of Ivermectin Effectiveness in Treating Covid-19 Disease
The two studies find similar risk ratios (RR = risk_ivermectin / risk_control):
- Bryant et al: RR = 0.38 [95% CI: (0.19, 0.73)]
- Roman et al: RR = 0.37 [95% CI: (0.12, 1.13)]
Roman et al should conclude there’s not enough evidence because they can’t rule out RR >= 1 at 95% confidence. Instead they conclude:
Bryant and Roman use similar methods; the difference in the confidence intervals comes from picking different studies.
Bryant has separate estimates for mild vs severe vs all cases. The 0.38 is for all cases, to allow comparison with Roman, which batched all cases together and has no breakdowns.
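The significance point can be sanity-checked using only the reported intervals: on the log scale the half-width of a 95% CI is 1.96 standard errors, so an approximate z statistic and p-value can be recovered from the published numbers (a back-of-the-envelope sketch; it will not exactly match the papers’ model-based p-values):

```python
import math
from statistics import NormalDist

def p_from_rr_ci(rr, lo, hi, z=1.96):
    """Approximate two-sided p-value recovered from a reported RR and 95% CI."""
    se = (math.log(hi) - math.log(lo)) / (2 * z)  # SE of log(RR)
    zstat = math.log(rr) / se
    return 2 * NormalDist().cdf(-abs(zstat))

print(p_from_rr_ci(0.37, 0.12, 1.13))  # Roman et al: p > 0.05, hence "no effect shown"
print(p_from_rr_ci(0.38, 0.19, 0.73))  # Bryant et al: p < 0.05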
This third Bayesian (meta-?)meta-analysis concludes:
The negative meta-study is borderline malicious.
“This article has an embarrassing history whereby treatment arms in the study of Niaee were reversed, attracting protest from Dr Niaee himself. This egregious error has been corrected in the revised version, but with no change to the Conclusions in spite of dramatic change…”—from BIRDGroup twitter.
Pubpeer is also useful in cases like this:
I googled a bit to see whether ivermectin can be ordered online, and while there are websites that superficially look like normal online pharmacies selling it, those seem to lack an impressum and seem pretty shady.
The online pharmacies I found that sell it and aren’t shady all require a prescription.
Because it’s political. Some people are invested in Ivermectin being effective, other people are invested in it not being effective. The extant studies are all inconclusive due to a small N, and mostly have problems with their methodology; if you pick and choose your studies in the right way you can get whatever result you want.
And the individual studies are often extremely bad. I note Cadegiani et al, who claim that Ivermectin (and also Hydroxychloroquine, and also Nitazoxanide) are each so effective, either individually or combined (they didn’t bother to track which patients got which drugs) that it is unethical to use a placebo group in studying those drugs. I’m not sure how Elsevier can be affiliated with a journal that publishes material like that and retain any credibility.
Pretending that just because something is political you can believe whatever you want is hugely problematic.
It’s interesting what malpractice the contra-ivermectin study engages in. Not withdrawing it from publication after they misstated the results of a key study (and not giving it to any peer reviewer competent enough to notice the error) seems to me a lot more ethically problematic than allowing a low-quality study to be published where all the empirical claims seem to be true.
Neither of the meta-analyses includes this study. Given that it’s one of the studies you consider problematic, this demonstrates that the pro-ivermectin meta-analyses didn’t just cite any available low-quality study.
How do you think you should update upon learning that the pro-ivermectin study didn’t choose studies to maximize the ivermectin effect?
My longer-form thoughts are at Substack.
By reading them?
It seems they hit different studies, and one can check that. One also says that everything is low quality of evidence, while the other says everything is very applicable for analysis.
It is also a bit funny how one of the papers goes study by study saying “IVM reduced mortality but QoE was low” and then concludes that overall “IVM does not reduce mortality”.
The two statements are not necessarily in that much conflict; they are just weaseled in opposite directions. One of them says “suggests” and the other says “is not proven”, which is what you get if you have a faint trace going one way.
I think the question of whether ivermectin works is important enough that I don’t want to completely rely on the impression that I personally get by reading them.
The question is both important on the policy level and also on the personal level in case any rationalist does get infected with COVID-19.
In the abstract they make a definitive statement that IM is not useful. This goes well past any rational or reasonable interpretation of the evidence. This raises the question of bias / motivated reasoning. I will read the paper in full today and may comment further.
I read the negative paper (I had already read the positive one).
The positive one concludes, rightly I think, that there is evidence falling short of proof that IM is likely to be useful.
I am not at all happy with the negative paper.
1. Lots of highly emotive language against IM suggesting a lack of objectivity. Another thing suggesting lack of objectivity is that they put <did not find IM useful> in their list of strengths. I wonder who would find this a strength and why? Also sneering about studies done in low income countries did not endear them to me.
2. They really went all out, above and beyond the call of duty, trying to exclude papers. Again it did not seem like they were humbly and objectively seeking the truth; it seemed to reek of motivation. Having reduced the qualifying papers to a tiny number, then, surprise surprise, the result is N.S., which they can then misrepresent (see next point).
3. Misstatement of the conclusion. Lack of statistical significance does not mean you showed the thing doesn’t work, especially given P=13% and RR=0.37. Given the small numbers the reduction in deaths would have had to have been enormous (~80%) to achieve significance.
4. I could not find a design of the study, published before they started. This is a concern, because they excluded studies of prophylaxis (prevention of infection), which is reportedly the strong point of IM. No convincing explanation was given for why they did this. Ironically they criticise other studies for not having prepublished designs.
5. It was interesting that every study that they quoted showed a large reduction in deaths, and they found fault with just about every one of them. Their own pooled estimate also showed a 63% reduction in deaths, but it was N.S.
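The point in (3), that the reduction in deaths would have had to be enormous to reach significance, can be illustrated with hypothetical counts (the 12 control deaths among 580 patients per arm below are made up, not taken from the paper). With so few events, the upper confidence bound only drops below 1 once the RR falls to roughly 0.25, i.e. around a 75–80% reduction:

```python
import math

def upper_ci_rr(a, c, n, z=1.96):
    """Upper 95% bound of the risk ratio with a treatment deaths and
    c control deaths among n patients per arm (Katz log method)."""
    rr = a / c  # equal arm sizes, so the RR is just the ratio of deaths
    se = math.sqrt(1/a + 1/c - 2/n)  # SE of log(RR)
    return rr * math.exp(z * se)

# Hypothetical pooled trial: 12 control deaths among 580 patients per arm.
for a in range(1, 13):
    print(f"treatment deaths={a:2d}  RR={a/12:.2f}  "
          f"significant={upper_ci_rr(a, 12, 580) < 1}")
```

Under these made-up counts, 4 treatment deaths (RR = 0.33, a 67% reduction) is still N.S.; only at 3 or fewer deaths does the interval exclude 1.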
I too would probably take IM if I had CV (or even was exposed) and could get access. ATM it seems likely it would be helpful and the downside seems low. IM has been in use for decades and billions of people—many with poor nutrition and otherwise vulnerable—have taken it. So it is not a great unknown in terms of side-effects.
Certainly this study does not show IM does not work, but it will be quoted as though it does. There have been studies of vitamin D and CV that are also poorly conducted and seemingly rigged to produce a N.S. result. E.g. you give vitamin D when people are late in the disease, knowing full well that it takes a couple of weeks for it to be metabolised into the fully active form.
The number I have in memory is that it takes roughly a week. If you think it’s longer, can you point me to a resource?
Prophylaxis may be a strong point given its potential effect, but given that other studies found the evidence for treatment effects is currently stronger than the evidence for prophylaxis, focusing on the more-studied question seems reasonable to me. I consider the other points more concerning.
At the moment that raises the question to me whether it makes sense to order Ivermectin from India (likely takes a month to arrive).
Given that Delta is enough to produce r>1 in the UK in summer, while people are more often outside and the UK still has a lot of restrictions despite 85% having one vaccination dose and 50% being fully vaccinated, and with Delta Plus already having a mutation that likely makes it better at evading vaccines, a new wave in autumn seems very likely to me.