Thursday, 18 April, 2024

Pandemic urgency sees peer review 'outsourced' to practising doctors and journalists

The COVID-19 pandemic has seen "droves of research papers rushed to pre-print servers, essentially outsourcing peer review to practising physicians and journalists", write ethicists at Carnegie Mellon and McGill universities.

The global outbreak of coronavirus disease 2019 (COVID-19) has seen a deluge of clinical studies, with hundreds registered on clinicaltrials.gov. But, write Alex John London of the Center for Ethics and Policy, Carnegie Mellon University, Pittsburgh, and Jonathan Kimmelman of Studies of Translation, Ethics and Medicine (STREAM), Biomedical Ethics Unit, McGill University, Montreal, a palpable sense of urgency and a lingering concern that “in critical situations, large randomised controlled trials are not always feasible or ethical” perpetuate the perception that, when it comes to the rigours of science, crisis situations demand exceptions to high standards for quality.

They write: “Early phase studies have been launched before completion of investigations that would normally be required to warrant further development of the intervention, and treatment trials have used research strategies that are easy to implement but unlikely to yield unbiased effect estimates. Numerous trials investigating similar hypotheses risk duplication of effort, and droves of research papers have been rushed to pre-print servers, essentially outsourcing peer review to practising physicians and journalists.”

Although crises present major logistical and practical challenges, the moral mission of research remains the same: to reduce uncertainty and enable caregivers, health systems, and policy-makers to better address individual and public health. Rather than generating permission to carry out low-quality investigations, the urgency and scarcity of pandemics heighten the responsibility of key actors in the research enterprise to coordinate their activities to uphold the standards necessary to advance this mission.

Rigorous research practices can't eliminate all uncertainty from medicine, but they represent the most efficient way to clarify the causal relationships clinicians hope to exploit in decisions with momentous consequences for patients and health systems. Nevertheless, fastidious research standards may seem a luxury that pandemics can ill accommodate.

Commenting on a study using sub-optimal design, one group of scientists stated, “Given the urgency of the situation, some limitations…may be acceptable, including the small sample size, use of an unvalidated surrogate end point, and lack of randomisation or blinding”.

The perception that core methodological components of high-quality research are dispensable is underpinned by three problematic assumptions. The first is that some evidence now, even if flawed, seems preferable to expending greater resources on more demanding studies whose benefits only materialise later. Because the window for learning in pandemics is often short, the need to “balance scientific rigor against speed” seems inevitable.

The problem with this view is that challenges that rigorous methods address do not disappear in the face of urgent need. Small studies that build on basic science and pre-clinical research in early phases of drug development routinely generate signals of promise that are not confirmed in subsequent trials. Even when new drugs are established to be safe and effective, rarely are their benefits so massive that they can be detected in small, open-label, non-randomised trials.

The proliferation of small studies that are not part of an orchestrated trajectory of development is a recipe for generating false leads that threaten to divert already scarce resources toward ineffective practices, slow the uptake of effective interventions because of an inability to reliably detect smaller but clinically meaningful benefits, and engender treatment preferences that make patients and clinicians reluctant to participate in randomised trials. These problems are amplified by published reports of compassionate use, which was designed as an alternative pathway to access interventions outside of research, not to support systematic evaluation.

The second underpinning of research exceptionalism is the view that key features of rigorous research, like randomisation or placebo comparators, conflict with clinicians' care obligations. However, when studies begin in a state of clinical equipoise (meaning it is uncertain whether a particular treatment is better than the alternatives) and are designed to disturb that equipoise, they ensure that no study participant receives a standard of care known to be inferior to any available alternative. Under this condition, randomised trials with appropriate comparators configure medical practice in a way that allows patients to access investigational interventions under conditions designed to eliminate ineffective strategies and exploit effective alternatives.

The third underpinning of research exceptionalism derives from the expectation that researchers and sponsors are generally free to exercise broad discretion over the organisation and design of research. However, that discretion never operates in a vacuum. Even under normal conditions, the goal of research ethics and policy is to use regulations, reporting guidelines, and other social controls to align research conduct with the public interest. Crucially, the information that research produces is a public good on which caregivers, health systems, and policy-makers rely to efficiently discharge important moral responsibilities. As recent international guidelines for ethical research emphasise, the justification for research is its social and scientific value, understood as its ability to produce the information that multiple actors need to make decisions that implicate health, welfare, and the use of scarce resources.

To enable stakeholders to fulfil their social responsibilities, research should embody five conditions of informativeness and social value. The first is importance. Trials should address key evidence gaps. Interventions selected for testing should capture the most promising therapeutic and prophylactic alternatives as judged from reviews of existing evidence and trials. They should aim to detect effects that are realistic but clinically meaningful.

As of this writing, more than 18 clinical trials enrolling more than 75,000 patients have been registered in North America for testing various hydroxychloroquine regimens for COVID-19. This massive commitment concentrates resources on nearly identical clinical hypotheses, creates competition for recruitment, and neglects opportunities to test other clinical hypotheses. Testing different regimens derived from a common clinical hypothesis in uncoordinated protocols increases the probability of false-positive findings due to chance. This also frustrates cross-comparisons and squanders opportunities to evaluate regimens side by side.

The second component is rigorous design. Trials should be designed to detect clinically meaningful effects so that both positive and negative results serve the informational needs of clinicians and health systems. Studies designed to detect massive effects often eschew randomisation or use surrogate end points. Although easily launched, such studies are at high risk of producing inconclusive findings that sow confusion and necessitate further evaluation. The decision to forgo a placebo comparator and use a non-validated surrogate end point, absenteeism, in a study testing use of a tuberculosis vaccine to prevent coronavirus infection jeopardises the study's ability to clarify the merits of this intervention.

The third component is analytical integrity. Designs should be pre-specified in protocols, prospectively registered, and analysed in accordance with the pre-specification. A recent study of hydroxychloroquine reported a beneficial effect on a clinical primary outcome in a preprint, whereas registration documents revealed a different study design and a polymerase chain reaction–based primary end point. The glaring discrepancy, a well-known source of bias in trials, was not flagged in some reporting on the trial.

Fourth, trials should be reported completely, promptly, and consistently with prespecified analyses. One reporting challenge present in the best of times, and likely to re-emerge during pandemics, is the deposition of positive findings in preprint servers earlier than nonpositive studies. Another challenge is quality control. Qualified peer reviewers are a scarce resource, and the proliferation of low-quality papers saps the ability of scientists to place findings into context before they are publicised. Some recent trials garnering press coverage did not adhere to well-established reporting standards.

The fifth component is feasibility: studies must have a credible prospect of reaching their recruitment target and being completed within a time frame in which the evidence is still actionable. This condition is in tension with the others, because meeting their resource demands under conditions of scarcity creates the prospect that research might never be completed. However, making research feasible by relaxing the other four standards contradicts the social justification for research. The system of incentives normally used to align research actors with the public good is imperfect in non-crisis situations and likely to be ineffective in the context of a pandemic. Therefore, to meet the requirement of feasibility, investigators, sponsors, health systems, and regulators have responsibilities to make exceptional efforts to cooperate and collaborate in a way that concentrates resources on a portfolio of studies that satisfy the above conditions.

Sponsors, research consortia, and health agencies should prioritise research approaches that test multiple interventions, foster modularity, and permit timely adaptation. Master protocols enable multiple interventions to be trialled under a common statistical framework, facilitating cross-comparisons and promoting multi-centre collaboration.

Adaptive designs allow flagging interventions to be dropped quickly and promising alternatives to be added with fewer delays than would be incurred from the design and approval of new studies. Seamless trial designs reduce transition time between trial phases and can extend into the provision of care to large numbers of patients.

Individual clinicians should avoid off-label use of unvalidated interventions that might interfere with trial recruitment and resist the urge to carry out uncontrolled, open-label studies. They should instead seek out opportunities to join larger, carefully orchestrated protocols to increase the prospect that high-quality studies will be completed quickly and generate the information needed to advance individual and public health. Academic medical centres can facilitate such coordination by surveying the landscape of ongoing studies and establishing mechanisms for “prioritization review” to triage studies.

The goal would be to incentivise participation in efforts that uphold the criteria outlined here and to foster robust participation in multi-centre studies so that data can be generated from different institutions before their capacity to meet fastidious research requirements is overwhelmed by surging medical demand.

[link url="https://science.sciencemag.org/content/368/6490/476"]Science Magazine report[/link]
