The ivermectin COVID-19 scandal shows how vulnerable science is to fraud


Haruko Obokata published two papers in January 2014 describing how normal blood cells can be converted into pluripotent stem cells.

Back then, this was a coup – it dramatically simplified a previously complicated process and opened up new perspectives in medical and biological research, while cleverly bypassing the bioethical considerations of using human embryos to obtain stem cells.

Better still, the process was straightforward: apply a weak acid solution or mechanical pressure – oddly similar to how you would remove a rust stain from a knife.

Within a few days, scientists noticed that some of the images in the papers were irregular, and a wider skepticism set in. Could it really be that simple?

The experiments were simple and biologists were curious, so attempts to replicate the papers’ results began immediately. They failed. By February, Obokata’s institute had opened an investigation. In March, some of the papers’ co-authors disavowed the methods. The papers were retracted in July.

While the papers were clearly unreliable, there was no clarity about the core of the problem. Did the authors mislabel a sample? Had they discovered a method that worked once but was inherently unreliable?

Did they simply make up the data? It took years longer, but the scientific community got a rough answer when other related work by Obokata was retracted due to image manipulation, data irregularities, and other problems.

The whole episode was an excellent example of science correcting itself. An important finding was published, challenged, tested, investigated and found to be defective … and then retracted.

One might hope that this process of organized skepticism would always work. But it doesn’t.

For the vast majority of scientific work, it is incredibly rare for other scientists to even notice irregularities, let alone bring the global forces of empiricism to bear on them. The underlying assumption of academic peer review is that fraud is so rare or unimportant that it does not warrant a dedicated detection mechanism.

Most scholars assume they will never encounter a single case of fraud in their careers, and so even the thought of checking the calculations in reviewable papers, redoing the analysis, or verifying that experimental protocols were properly applied is deemed unnecessary.

Worse still, the raw data and analytical code needed for a forensic analysis of a paper are not routinely published, and performing such a rigorous review is often viewed as a hostile act – the kind of trawling through someone’s work reserved for the profoundly motivated or the congenitally disrespectful.

Everyone is busy with their own work, so what kind of Grinch would go to such extremes to invalidate someone else’s?

That brings us straight to ivermectin, an anti-parasitic drug that was being tested to treat COVID-19 after laboratory studies in early 2020 showed it was potentially beneficial.

It gained massive popularity after a published, and later retracted, analysis by the Surgisphere group showed a tremendous decrease in death rates among people who took it, sparking a huge wave of use of the drug around the world.

More recently, the evidence for ivermectin’s effectiveness has rested largely on a single study released as a preprint (i.e., published without peer review) in November 2020.

This study, which was carried out in a large patient cohort and reported a strong treatment effect, was popular: read over 100,000 times, cited by dozens of scientific papers, and included in at least two meta-analytic models that showed ivermectin to be, as the authors claimed, a “miracle cure” for COVID-19.

It’s no exaggeration to say that this one paper inspired thousands, if not millions, of people to seek out ivermectin for the treatment and/or prevention of COVID-19.

A few days ago, the study was retracted over allegations of fraud and plagiarism. A master’s student assigned to read the paper as part of his studies noticed that the entire introduction appeared to have been copied from earlier academic papers, and further analysis revealed that the study’s data sheet, posted online by the authors, contained obvious irregularities.

It is hard to overstate how monumental a failure this is for the scientific community. We proud guardians of knowledge accepted at face value a research paper so flawed that it took a master’s student only a few hours to completely dismantle it.

The seriousness attributed to the results stood in direct contrast to the quality of the study. The authors reported bogus statistical tests at multiple points, extremely implausible standard deviations, and a truly overwhelming positive efficacy – the last time the medical community found a drug to be “90 percent effective” against a disease was the use of antiretroviral drugs to treat people with AIDS.

Still, nobody noticed. For nearly a year, reputable, respected researchers included this study in their reviews, doctors used it as evidence in treating their patients, and governments incorporated its conclusions into public health policies.

Nobody spent the five minutes it took to download the data file the authors had uploaded online and notice that numerous deaths were recorded before the study had even started. Nobody copied and pasted phrases from the introduction into Google to see how much of it matched previously published articles.
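That first check – looking for deaths recorded before the study began – takes only a few lines of code. The sketch below illustrates the idea on invented data; the study start date, patient IDs, and dates are all hypothetical placeholders, not values from the retracted paper.

```python
from datetime import date

# Hypothetical study start date, for illustration only.
STUDY_START = date(2020, 6, 8)

# Made-up patient records; date_of_death is None for survivors.
records = [
    {"patient_id": 1, "date_of_death": date(2020, 6, 20)},
    {"patient_id": 2, "date_of_death": date(2020, 5, 31)},  # precedes enrollment
    {"patient_id": 3, "date_of_death": None},
]

def deaths_before_start(rows, start):
    """Return records whose recorded death precedes the study start date."""
    return [r for r in rows if r["date_of_death"] and r["date_of_death"] < start]

flagged = deaths_before_start(records, STUDY_START)
for r in flagged:
    print(f"patient {r['patient_id']}: died {r['date_of_death']} before study start {STUDY_START}")
```

A sanity check like this is exactly the kind of five-minute forensic pass that any reviewer with access to the raw data could run.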

This inattention and inaction keeps the saga going: as long as we remain wilfully disinterested in the problem, we will not know how much scientific fraud there is or where it can most easily be found, and we will make no solid plans to detect it or mitigate its effects.

A recent editorial in the British Medical Journal argues that it may be time to change our basic perspective on health research and assume that it is fraudulent until proven otherwise.

That is, not to assume that all researchers are dishonest, but to approach new results in health research from a categorically different baseline of skepticism, as opposed to blind trust.

This may sound extreme, but if the alternative is accepting that, on occasion, millions of people will receive drugs based on unverified research that is later retracted entirely, it may actually be a very small price to pay.

James Heathers is the CSO of Cipher Skin and an Integrity Researcher.

Gideon Meyerowitz-Katz is an epidemiologist working on chronic diseases in Sydney, Australia. He writes a regular health blog about science communication, public health, and what the new study you read about actually means.

The opinions expressed in this article do not necessarily reflect the views of the editors of ScienceAlert.

