More than half of the papers published over five years in six leading psychiatry and psychology journals — including JAMA Psychiatry, the American Journal of Psychiatry and the British Journal of Psychiatry — showed evidence of ‘spin’ to enhance their findings.
Although many believe scientific journals to be some of the most reliable sources of information, they are not immune to the desire to be read and shared. Medical News Today reports that a recent study set out to assess how much “spin” authors used in the abstracts of research papers published in psychology and psychiatry journals. They chose to look at abstracts because they summarise the entire paper, and doctors often use them to help inform medical decisions.
In this study, the authors at the Oklahoma State University Center for Health Sciences in Tulsa, Oklahoma, outline their definition of spin as follows: “The use of specific reporting strategies, from whatever motive, to highlight that the experimental treatment is beneficial, despite a statistically non-significant difference for the primary outcome, or to distract the reader from statistically non-significant results.”
The authors looked at papers from the top six psychiatry and psychology journals from 2012–2017. Specifically, the researchers focused on randomised controlled trials with “non-significant primary endpoints”. A study’s primary endpoint is its main prespecified result, and “non-significant” in this context means that, statistically, the team did not find enough evidence to support its hypothesis.
Spin comes in many forms, including: selectively reporting outcomes, wherein the authors only mention certain results; P-hacking, wherein researchers run a series of statistical tests but only publish the figures from tests that produce significant results; and inappropriate or misleading use of statistical measures.
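To see why p-hacking is so problematic, consider a minimal, hypothetical simulation (not from the study itself): under a true null hypothesis, a well-calibrated test produces p-values uniformly distributed between 0 and 1, so a researcher who runs many tests and reports only the “best” one will find a spuriously significant result far more often than the nominal 5% rate. The numbers of tests and trials below are illustrative assumptions.

```python
import random

random.seed(0)

ALPHA = 0.05
TESTS_PER_STUDY = 20   # hypothetical: 20 outcomes measured, only the best reported
N_STUDIES = 100_000    # number of simulated "studies"

# Under a true null hypothesis, p-values are uniform on (0, 1).
false_positive_studies = 0
for _ in range(N_STUDIES):
    p_values = [random.random() for _ in range(TESTS_PER_STUDY)]
    if min(p_values) < ALPHA:          # report only the smallest p-value
        false_positive_studies += 1

rate = false_positive_studies / N_STUDIES
print(f"Chance of at least one 'significant' result: {rate:.2f}")
# Analytically: 1 - 0.95**20 ≈ 0.64, far above the nominal 5%.
```

With 20 tests per study, roughly two-thirds of null studies yield at least one “significant” finding to report, which is exactly the distortion that selective reporting exploits.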
In total, they analysed the abstracts of 116 papers. Of these, 56% showed evidence of spin: in 2% of titles, 21% of abstract results sections, and 49% of abstract conclusion sections. In 15% of the papers, spin was present in both the results and conclusion sections of the abstract.
The researchers also investigated if industry funding was associated with spin. Perhaps surprisingly, they found no evidence that having financial backing from industry increased the likelihood of spin.
The report says the findings are concerning. Although spin in news media in general is worrying in itself, doctors use research papers to help steer clinical decisions. As the authors write: “Researchers have an ethical obligation to honestly and clearly report the results of their research.” However, in the abstract section, authors can pick and choose the details that they include. The authors of the current study have concerns about what this might mean for doctors: “Adding spin to the abstract of an article may mislead physicians who are attempting to draw conclusions about a treatment for patients. Most physicians read only the article abstract the majority of the time.”
The report notes that although researchers have not investigated the effects of spin in great depth, the authors point to one study that underscores their concern. In it, scientists collected abstracts from the field of cancer research. All were randomised controlled trials with a statistically non-significant primary outcome, and all of the abstracts included spin.
The researchers created second versions of these abstracts with the spin removed, then recruited 300 oncologists as participants. Half received an original abstract with spin; the other half received the version without. Worryingly, the doctors who read the abstracts with spin rated the intervention covered in the paper as more beneficial.
As the authors of the recent study paper write: “Those who write clinical trial manuscripts know that they have a limited amount of time and space in which to capture the attention of the reader. Positive results are more likely to be published, and many manuscript authors have turned to questionable reporting practices in order to beautify their results.”
Another study, published in 2016, extends the scope of this issue. Its authors investigated how peer reviewers – experienced scientists who scrutinise papers before publication – influence spin. They found that in 15% of cases, the peer reviewer asked the authors to add spin.
The report says the current study does have some limitations. For example, these findings might not apply to other journals or fields of research. They also note that identifying spin is a subjective endeavour, and although they employed two independent data extractors, there is room for error.
The exact size of the spin issue in medical research remains to be seen, but the authors conclude that “(a)uthors, journal editors, and peer reviewers should continue to be vigilant for spin to reduce the risk of biased reporting of trial results.”
Randomised controlled trials (RCTs) serve as the gold standard in psychiatry. Given the importance of such trials to clinical practice, it is imperative that results be reported objectively.
Researchers are encouraged to conduct studies and report findings according to the highest ethical standards.1 2 This standard means reporting results completely, in accordance with a protocol that outlines primary and secondary endpoints and prespecified subgroups and statistical analyses. However, authors are free to choose how to report or interpret study results. In an abstract, authors may include only the results they want to highlight or the conclusions they wish to draw. These results and conclusions, however, may not accurately summarise the findings of the study. When such a misrepresentation of study results occurs, there is said to be spin. Spin has been defined as, ‘the use of specific reporting strategies, from whatever motive, to highlight that the experimental treatment is beneficial, despite a statistically nonsignificant difference for the primary outcome, or to distract the reader from statistically nonsignificant results’.3 Many practices contribute to spin, including the selective reporting of outcomes,4 5 p-hacking,6 7 inappropriate application of statistical measures like relative risk8 and manipulation of figures or graphs.9 10
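To illustrate the kind of statistical framing the authors flag with relative risk, here is a hypothetical worked example (the event rates are invented for illustration and do not come from the study): a drop from a 2% to a 1% event rate can be reported as a dramatic “50% reduction in risk”, even though the absolute benefit is a single percentage point.

```python
# Hypothetical trial result, framed two ways (illustrative numbers only).
control_event_rate = 0.02    # 2% of control patients have the event
treated_event_rate = 0.01    # 1% of treated patients have the event

relative_risk = treated_event_rate / control_event_rate            # 0.5
relative_risk_reduction = 1 - relative_risk                        # "50% reduction"
absolute_risk_reduction = control_event_rate - treated_event_rate  # 1 percentage point
number_needed_to_treat = 1 / absolute_risk_reduction               # patients per event avoided

print(f"Relative risk reduction: {relative_risk_reduction:.0%}")
print(f"Absolute risk reduction: {absolute_risk_reduction:.0%}")
print(f"Number needed to treat:  {number_needed_to_treat:.0f}")
```

Quoting only the relative figure in an abstract, without the absolute one, is one way a statistically modest effect can be made to look clinically impressive.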
Spin in abstracts has recently been discussed in a systematic review.11 Evidence suggests that abstract information alone is capable of changing a majority of clinicians’ care decisions.12 For example, when unadjusted analyses or secondary outcomes are given undue attention in abstracts, readers’ overall appraisal of the contents of a manuscript is altered.13 Additionally, a previous systematic review showed there to be a higher rate of favourable conclusions in industry-funded studies compared with other sponsorships.14
We have evaluated the prevalence of spin in abstracts of RCTs with nonsignificant primary endpoints in the psychology and psychiatry literature and have explored the association between spin and industry funding.
Samuel Jellison, Will Roberts, Aaron Bowers, Tyler Combs, Jason Beaman, Cole Wayant, Matt Vassar