
Top medical journals are failing — and both they and authors are full of excuses

It’s a well-known problem with clinical trials: Researchers start out saying they will look for a particular outcome – heart attacks, for example – but then report something else when they publish their results. That practice can make a drug or treatment look like it’s safer or more effective than it actually is.

Now, Science Mag reports, a systematic effort to find out whether major journals are complying with their own pledge to ensure that outcomes are reported correctly has found many are falling down on the job – and both journals and authors are full of excuses.

When journals and researchers were asked to correct studies, the responses “were fascinating, and alarming. Editors and researchers routinely misunderstand what correct trial reporting looks like,” says project leader Ben Goldacre, an author and physician at the University of Oxford and a proponent of transparency in drug research.

The report says that, starting four years ago, his team’s Centre for Evidence-Based Medicine Outcome Monitoring Project (COMPare) examined all trials published over six weeks in five journals: Annals of Internal Medicine, The BMJ, JAMA, The Lancet, and The New England Journal of Medicine (NEJM). The study topics ranged from the health effects of drinking alcohol for diabetics to a comparison of two kidney cancer drugs.

The report says all five journals have endorsed the long-established Consolidated Standards of Reporting Trials (CONSORT) guidelines. One CONSORT rule is that authors should describe the outcomes they plan to study before a trial starts and stick to that list when they publish the trial.

But only nine of 67 trials published in the five journals reported outcomes correctly, the COMPare team reported on 14 February. One-fourth didn’t correctly report the primary outcome they set out to measure and 45% didn’t properly report all secondary outcomes; others added new outcomes. (This varied by journal: only 44% of trials in Annals correctly reported the primary outcome, compared with 96% of NEJM trials.)

When the COMPare team wrote the journals about the problematic papers, only 23 of the 58 letters were published. Annals and The BMJ published all of them, The Lancet accepted 80%, and NEJM and JAMA rejected them all.

NEJM’s editors explained that they and their peer reviewers decide which outcomes will be reported. While some of the CONSORT rules are “useful,” they wrote, authors aren’t required to comply. Other editors didn’t seem to understand that trial researchers can switch outcomes if they disclose the change. JAMA and NEJM said they didn’t always have space to publish all outcomes.

In a companion paper, the report says, the COMPare team found that when trial authors responded to the letters that did make it into print, their comments were full of “inaccurate or problematic statements and misunderstandings.” Like editors, many authors misunderstood the CONSORT rules, as well as the role of public registries for sharing a trial’s plan.

Some attacked the COMPare project as coming from “outside the research community.” Others brushed off the criticisms, grumbling about how difficult their work was. Still others denied that they left out any outcomes, the authors state.

The COMPare team writes that it hopes journals will be inspired to better enforce CONSORT and revisit their standards for publishing letters. “We hope that editors will respond positively, constructively and thoughtfully to our findings,” Goldacre says.

Abstract
Background: Discrepancies between pre-specified and reported outcomes are an important source of bias in trials. Despite legislation, guidelines and public commitments on correct reporting from journals, outcome misreporting continues to be prevalent. We aimed to document the extent of misreporting, establish whether it was possible to publish correction letters on all misreported trials as they were published, and monitor responses from editors and trialists to understand why outcome misreporting persists despite public commitments to address it.
Methods: We identified five high-impact journals endorsing Consolidated Standards of Reporting Trials (CONSORT) (New England Journal of Medicine, The Lancet, Journal of the American Medical Association, British Medical Journal, and Annals of Internal Medicine) and assessed all trials over a six-week period to identify every correctly and incorrectly reported outcome, comparing published reports against published protocols or registry entries, using CONSORT as the gold standard. A correction letter describing all discrepancies was submitted to the journal for all misreported trials, and detailed coding sheets were shared publicly. The proportion of letters published and delay to publication were assessed over 12 months of follow-up. Correspondence received from journals and authors was documented and themes were extracted.
Results: Sixty-seven trials were assessed in total. Outcome reporting was poor overall and there was wide variation between journals on pre-specified primary outcomes (mean 76% correctly reported, journal range 25–96%), secondary outcomes (mean 55%, range 31–72%), and number of undeclared additional outcomes per trial (mean 5.4, range 2.9–8.3). Fifty-eight trials had discrepancies requiring a correction letter (87%, journal range 67–100%). Twenty-three letters were published (40%) with extensive variation between journals (range 0–100%). Where letters were published, there were delays (median 99 days, range 0–257 days). Twenty-nine studies had a pre-trial protocol publicly available (43%, range 0–86%). Qualitative analysis demonstrated extensive misunderstandings among journal editors about correct outcome reporting and CONSORT. Some journals did not engage positively when provided correspondence that identified misreporting; we identified possible breaches of ethics and publishing guidelines.
Conclusions: All five journals were listed as endorsing CONSORT, but all exhibited extensive breaches of this guidance, and most rejected correction letters documenting shortcomings. Readers are likely to be misled by this discrepancy. We discuss the advantages of prospective methodology research sharing all data openly and pro-actively in real time as feedback on critiqued studies. This is the first empirical study of major academic journals’ willingness to publish a cohort of comparable and objective correction letters on misreported high-impact studies. Suggested improvements include changes to correspondence processes at journals, alternatives for indexed post-publication peer review, changes to CONSORT’s mechanisms for enforcement, and novel strategies for research on methods and reporting.

Authors
Ben Goldacre, Henry Drysdale, Aaron Dale, Ioan Milosevic, Eirion Slade, Philip Hartley, Cicely Marston, Anna Powell-Smith, Carl Heneghan, Kamal R Mahtani

[link url="https://www.sciencemag.org/news/2019/02/major-medical-journals-don-t-follow-their-own-rules-reporting-results-clinical-trials"]Science Mag report[/link]
[link url="https://trialsjournal.biomedcentral.com/articles/10.1186/s13063-019-3173-2"]Trials abstract[/link]
