Wednesday, 24 April, 2024

Ivermectin meta-analyses highlighted, again, the dangers of fake data

Some of the data suggesting that ivermectin was effective against COVID-19 came from clinical trials that “almost certainly” did not happen as described. But while it is hard to stamp out fake data and fraudulent trials, it is not impossible, writes Lisa Bero in Nature.

Bero, a senior research integrity editor at Cochrane, and chief scientist at the Center for Bioethics and Humanities at the University of Colorado Anschutz Medical Campus, writes:

In mid-2021, a handful of meta-analyses looked at the use of ivermectin, a drug developed to treat people infected with parasitic worms, against COVID-19. The analyses included data suggesting the drug was effective, which came from clinical trials that almost certainly did not happen as described. Problems detected in reported trials include copied data, results for patients who died before the trial began, and disputes about whether a trial occurred at all.

A few weeks later, a meta-analysis by Cochrane, an international group specialising in reviewing evidence in medicine and health, found that ivermectin had not been shown to be effective. That analysis took several steps to exclude fraudulent studies. But confusion and distrust continue.

In my view, most reported clinical trials are conducted properly, but fraudulent studies are still rife, and not confined to COVID-19. An analysis of trial manuscripts submitted to the journal Anaesthesia found that more than 40% probably contained false patient data.

I study bias in the design, conduct and publication of research. I have been part of a years-long initiative to exclude fraudulent studies from Cochrane’s reviews. Detecting fake studies is difficult; so is implementing policies to do so. Solving the problem requires cooperation by publishers, editors, institutions and reviewers.

There is no universal telltale sign of fraud. Data might be fabricated completely or partly. Real data might be represented falsely. (Data collected on 9-year-olds might be reused in a study describing 3-year-olds.) Interventions might not be administered as stated. (Patients described as randomised into treatment groups could have been selected by condition or convenience. Or “treated” patients might not have actually received medicine.) Implausible results might come from egregious misuse of statistics.

All of these practices, even if they are unintentional, produce untrustworthy results. So multiple checks are necessary. Our guide for reviewers at Cochrane includes a range of steps, such as checking registration and consulting databases including PubPeer, a platform for post-publication peer review. Cochrane also asks reviewers to consider the plausibility of claims such as the number of participants studied, given the time and the number of sites and investigators involved. We ask reviewers to look for inconsistencies across the whole article, for overlapping text and for improbable baseline and outcome data.

This is tough work. Some reviewers have suggested that instead, broad classes of study should be excluded, such as those from certain countries or those that have not been prospectively registered. But that’s problematic: global representation is important, and registration is no guarantee that a study was done appropriately. Furthermore, observational studies, common in public health, are often unregistered.

There are also risks of assuming too readily that a study is fraudulent. Corporate interests sometimes use (false) accusations of fraud to discredit legitimate research that puts them in a bad light, as do many ideological factions. Accusations of fraud can damage reputations and put both accused and whistle-blowers at risk of losing their jobs.

Cochrane’s research-integrity team is only two people, working part time — not enough to investigate every possible fraudulent study. That’s why we provide tools to help reviewers detect potential fraud, and templates for asking journals for investigations and retractions. When reviewers find a problem, we advise them to get in touch with authors for more information; if there is no timely, reassuring response, we recommend contacting the journal. If systematic reviewers do not raise alarms, it’s likely that no one will.

Many reviewers feel this is a waste of time. They assume that most journal editors would rather preserve their reputations than look into concerns, and that any investigation launched by an author’s research institution will take years and be biased against finding problems. I understand this. That said, investigations prompted by Cochrane reviewers are under way. In one case, anomalies and unsatisfactory responses from an author group prompted reviewers to check more of the group’s studies, exclude those with early warnings of fraud from reviews and present evidence to editors in a clear, structured way. Journals are investigating.

Reviewers and other “data detectives” do laudable work, but their efforts alone cannot pull a fraudulent trial from the literature. Only a journal can act definitively to retract it.

Facing down fraud requires all involved in the publication pipeline to step up and coordinate. A journal editor has access to the manuscript and data; a publisher can look for problematic patterns across studies (such as repetitive text or other indications of ‘paper mills’); an institution can check for ethical approval, protocols and raw data that show a study was run as planned.

Universities, journals and publishers should implement data checks sooner rather than later. This means sharing information as well as technical resources, such as expertise in statistical and software tools to detect anomalies.

They must also make fraud-detection tasks routine. Too often, investigations focus on pinning down blame, or sweeping misconduct under the rug. Only through a widespread community effort can we ferret out fraud.

 

Nature article – Stamp out fake clinical data by working together (Open access)

 

See more from MedicalBrief archives:

 

Ivermectin: Further claims of ‘serious errors or potential fraud’ in studies

 

Ivermectin papers show limitations of ‘inherently unreliable’ summary data

 

The high costs to a medical watchdog of challenging bad science

 

Lancet removes hyped preprint on efficacy of vitamin D for COVID-19

 

Study finds ‘spin’ in 26% of research – rising in non-randomised trials

 

PANDA’s ‘misleading and pseudoscientific’ claims drive vaccine hesitancy

 

 
