
Scientists sound alarm on badly run medical studies

A new research paper has signalled a crisis in medical research, with researchers saying “more than 60% of trials are so methodologically flawed we cannot believe their results”.

They estimate that 88% of trial spending is wasted, reports GroundUp.

Dodgy research design and bad statistical methodology mean that most randomised trials waste time, money and effort, and are of no or dubious scientific value, say Stefania Pirosca, Frances Shiely, Mike Clarke and Shaun Treweek in the journal Trials.

Their paper examined 1,659 randomised trials, involving some 400,000 participants across 84 countries, drawn from Cochrane reviews published between May 2020 and April 2021; 193 of the trials were multinational.

The majority of trials (62%) showed a high risk of bias. More than half of trial participants were in these high risk of bias trials. Trials where the risk of bias was unclear accounted for 30% of those reviewed, while trials with a low risk of bias – those that can be trusted – accounted for just 8% of the total.

Bad trials – ones where we have little confidence in the results – are not just common; they represent the majority of trials in all countries and across most clinical areas. For instance, all trials looking at drugs and alcohol exhibited a high risk of bias. The most reliable field was anaesthesia, with 60% of trials exhibiting a low risk of bias.

The research team drew trial data from 96 reviews from 49 of the 53 clinical Cochrane Review Groups. Cochrane is an international organisation that gathers and disseminates the results of medical research to better guide medical decision-making. It does this by having experts compile and evaluate research trials and their results in “standardised, high-quality systematic reviews”.

Bad science was common everywhere. “No patient or member of the public should be in a bad trial and ethical committees, like funders, have a duty to stop this happening,” the paper’s authors write.

South Africa was bad, but Spain and Germany may be worse

Of the seven reviewed trials that took place in South Africa, four had a high risk of bias, two had an unclear risk, and one was “good science”. This share of bad science is roughly similar to that found in the clinical trials done in the UK and USA. The most reliable health research science came from multinational trials, 23% of which had a low risk of bias. (The authors didn’t identify the trials.)

The least reliable science, among countries that conducted 20 or more randomised trials, was done in Spain and Germany, where 86% and 83% of trials respectively exhibited a high risk of bias.

Although only one year of reviews was examined, the paper’s authors found that their results match those of similar studies, suggesting that bad science is the norm over time.

This amounts to a massive waste of money and effort.
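The scale of this waste is, at bottom, simple arithmetic: the paper multiplies the number of participants in high risk of bias trials by published estimates of the cost per trial participant. Below is a minimal sketch of that calculation in Python; the per-participant figures are illustrative assumptions chosen to bracket the £726-million to £8-billion range reported in the paper’s abstract, not the published estimates the authors actually used.

# Back-of-envelope reconstruction of the paper's waste estimate.
# The per-participant costs are illustrative assumptions, not the
# published figures the authors used.

participants_in_bad_trials = 220_000  # "well over 220,000", per the abstract

cost_per_participant_low = 3_300    # GBP, assumed low published estimate
cost_per_participant_high = 36_400  # GBP, assumed high published estimate

low_estimate = participants_in_bad_trials * cost_per_participant_low
high_estimate = participants_in_bad_trials * cost_per_participant_high

print(f"Low estimate:  £{low_estimate:,}")   # roughly £726 million
print(f"High estimate: £{high_estimate:,}")  # roughly £8 billion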

Statisticians and research method experts have been sounding the alarm on biased research for years, at least since Doug Altman’s 1994 paper in the British Medical Journal, “The scandal of poor medical research”.

Doctors want to know whether they can rely on a particular treatment to produce a desired outcome, and they need research that confers a degree of confidence. One way to achieve that – the most popular – is the randomised controlled trial.

Randomised controlled trials, also known as randomised trials or RCTs, are for many (though not all) the gold standard for establishing scientific knowledge about a medical intervention, whether a drug or another type of therapy. How RCTs are conducted is crucial: it is adherence to the method that gives people relying on the research confidence that the results constitute scientific knowledge.
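The “randomised” part is conceptually simple: participants are assigned to treatment or control by chance, so the groups are comparable before the intervention begins. The Python sketch below is a deliberately minimal, hypothetical illustration; real trials use concealed allocation, usually with stratified or blocked schemes.

import random

def allocate(participants: list[str]) -> dict[str, str]:
    """Balanced 1:1 random allocation to two arms - the core idea of
    an RCT. Shuffling then splitting guarantees equal group sizes."""
    shuffled = participants[:]
    random.shuffle(shuffled)
    half = len(shuffled) // 2
    return {p: ("treatment" if i < half else "control")
            for i, p in enumerate(shuffled)}

# Illustrative use with four hypothetical participants
print(allocate(["P01", "P02", "P03", "P04"]))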

But if there is a high risk that a trial’s results were biased by errors in how it was conducted and how its results were obtained, those results should not be relied on. Pirosca and colleagues did not examine which type (or domain) of bias affected each study, arguing that a high risk in any one domain is enough to undermine a trial’s results.

In short, for Pirosca and colleagues, health research in randomised trials is bad when there is an identifiable risk of bias in how the results were obtained.
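That “one bad domain is enough” judgment can be written down as a simple rule: a trial’s overall risk of bias is high if any assessed domain is high, unclear if none is high but at least one is unclear, and low only when every domain is low. The Python sketch below illustrates the rule; the domain names are examples only, and Cochrane’s actual risk of bias tools define the domains and judgments in far more detail.

def overall_risk_of_bias(domain_judgements: dict[str, str]) -> str:
    """Aggregate per-domain judgements ('low', 'unclear', 'high') into
    an overall rating: the worst domain carries the trial."""
    ratings = domain_judgements.values()
    if "high" in ratings:
        return "high"
    if "unclear" in ratings:
        return "unclear"
    return "low"

# Illustrative example: one bad domain is enough to sink the trial.
trial = {
    "random sequence generation": "low",
    "allocation concealment": "low",
    "blinding of outcome assessment": "high",  # e.g. assessors unblinded
    "incomplete outcome data": "unclear",
}
print(overall_risk_of_bias(trial))  # -> "high"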

The large number of high risk of bias trials appears to be due to “a lack of input from methodologists and statisticians at the trial planning stage combined with insufficient knowledge of research methods among the trial teams”. You would not, they say by way of analogy, think it appropriate for a statistician to conduct surgery simply because they happen to work in a surgical field.

Bad science during COVID

Recent medical scandals in the headlines have highlighted the risks of bad science in medicine. The pandemic has brought a boom in medical research, and attention to medical research results. This environment has produced some remarkable science, but it has also created scientific fiascos, like the one surrounding ivermectin.

As GroundUp has previously reported, a review of studies investigating ivermectin as a possible therapy for COVID initially suggested that the deworming drug led to better outcomes in those using it. On the face of it, the small studies supporting this conclusion seemed to hold promise for a low-cost, lifesaving COVID intervention. But once the methodology and statistics were scrutinised, many of these papers were deemed unscientific – for instance, patients were excluded from analysis for no good reason. And once these trials were excluded from the review, the drug’s promise as a treatment vanished.

Medical research watchdog Retraction Watch currently lists 12 papers purporting to investigate ivermectin that were subsequently withdrawn or for which concerns have been raised. According to its records, 235 COVID papers have been withdrawn to date.

But the crisis is not insurmountable. Pirosca and colleagues say that relatively simple fixes would dramatically reduce the amount of untrustworthy health research – by ensuring that methodological principles that underlie RCTs are not compromised.

More expenditure on statistical expertise will save money

A 2015 review examined 142 trials exhibiting a high risk of bias. The authors found that in half of the trials, the methodological adjustments required to reduce the risk of bias would have been low or zero cost. Easy adjustments at the design stage would have made important improvements to 42% of them.

Pirosca and colleagues propose that no medical RCT should be funded or given ethical approval if it cannot prove that the team conducting the trial has a member with methodological and statistical expertise. Every RCT should, in its design, use risk of bias tools to ensure results are not compromised.

The expertise that could restore the worth of medical research is, however, in short supply.

More methodologists and statisticians are needed, and money should be invested in training people with this expertise, in applied methodology research, and in supporting infrastructure. The authors call for 10% of a funder’s budget to be spent this way.

This might seem like a lot of money but, argue Pirosca and colleagues, it would be a fraction of the cost of the research wasted in the year under review, estimated at billions of rands.

The task is urgent: “Randomised trials have the potential to improve health and well-being, change lives for the better and support economies through healthier populations … Society will only see the potential benefits of randomised trials if these studies are good, and, at the moment, most are not.”

Study details

Tolerating bad health research: the continuing scandal

Stefania Pirosca, Frances Shiely, Mike Clarke & Shaun Treweek

Published in Trials on 2 June 2022

Abstract

Background
At the 2015 REWARD/EQUATOR conference on research waste, the late Doug Altman revealed that his only regret about his 1994 BMJ paper ‘The scandal of poor medical research’ was that he used the word ‘poor’ rather than ‘bad’. But how much research is bad? And what would improve things?

Main text
We focus on randomised trials and look at scale, participants and cost. We randomly selected up to two quantitative intervention reviews published by all clinical Cochrane Review Groups between May 2020 and April 2021. Data including the risk of bias, number of participants, intervention type and country were extracted for all trials included in selected reviews. High risk of bias trials were classed as bad. The cost of high risk of bias trials was estimated using published estimates of trial cost per participant.
We identified 96 reviews authored by 546 reviewers from 49 clinical Cochrane Review Groups that included 1659 trials done in 84 countries. Of the 1640 trials providing risk of bias information, 1013 (62%) were high risk of bias (bad), 494 (30%) unclear and 133 (8%) low risk of bias. Bad trials were spread across all clinical areas and all countries. Well over 220,000 participants (or 56% of all participants) were in bad trials. The low estimate of the cost of bad trials was £726 million; our high estimate was over £8 billion.
We have five recommendations: trials should be neither funded (1) nor given ethical approval (2) unless they have a statistician and methodologist; trialists should use a risk of bias tool at design (3); more statisticians and methodologists should be trained and supported (4); there should be more funding into applied methodology research and infrastructure (5).

Conclusions
Most randomised trials are bad and most trial participants will be in one. The research community has tolerated this for decades. This has to stop: we need to put rigour and methodology where it belongs — at the centre of our science.

 

GroundUp article – Scientists sound alarm on badly run medical studies (Republished under Creative Commons Licence)

 

Trials Journal abstract – Tolerating bad health research: the continuing scandal (Open access)

 

BMJ article – Doug Altman: The scandal of poor medical research (Open access)

 

See more from MedicalBrief archives:

 

Ivermectin meta-analyses highlighted, again, the dangers of fake data

 

Ivermectin: Further claims of ‘serious errors or potential fraud’ in studies

 

The high costs to a medical watchdog of challenging bad science

 

Withdrawn: Mexican Ivermectin paper claiming reduced COVID hospitalisation

 

Facebook censors The BMJ: When gatekeepers go rogue

 

Intermittent fasting: From Pythagoras to Fung

 

 

 
