Poor animal study design and reporting thwarts the ethical review of proposed human drug trials, according to a study led by researchers at Hannover Medical School in Germany in cooperation with researchers from McGill University, Canada.
The study analysed the descriptions of animal studies found in “investigator brochures” – the documents used by regulatory authorities and ethics committees to assess the potential efficacy of drugs that are being tested in patients for the first time.
Independent assessments of animal evidence are key to ensuring that patients are not exposed to undue risk when volunteering for trials. Based on documents obtained from three prominent German medical research centres, the study authors recommend that regulators develop standards to ensure the rigorous design and reporting of preclinical animal studies before trials of new drugs are launched.
Strikingly, fewer than one-fifth of investigator brochures referenced animal studies that had been through a peer-reviewed publication process. Fewer than 20% of animal studies that tested the efficacy of the new drug described the use of simple techniques, such as randomisation, blinding, or sample size calculation, that can reduce the effects of bias. And worryingly, of the more than 700 animal studies that the authors found in the investigator brochures, only 4% did not show positive effects of treatment.
“Our analysis shows that the vast majority of these documents lack the information needed to systematically appraise the strength of evidence supporting trials,” said Dr Daniel Strech, professor of bioethics at Hannover Medical School and senior author of the study.
“We were also struck by the rarity of ‘negative’ animal studies in investigator brochures,” said Jonathan Kimmelman, professor of bioethics at McGill University and co-author.
“With a median group size of 8 animals, these studies had limited ability to measure treatment effects precisely. Chance alone should have resulted in more studies being negative – the imbalance strongly suggests publication bias,” said Susanne Wieschowski, a postdoctoral fellow in Strech’s team.
“Why do regulatory agencies and other bodies involved in risk-benefit assessment for early human research accept the current situation?” asks Strech. “Why do they not complain about the lack of information needed to critically appraise the rigor of the pre-clinical efficacy studies and about the concerning lack of efficacy studies demonstrating no effects?”
Study abstract:
Human protection policies require favorable risk–benefit judgments prior to launch of clinical trials. For phase I and II trials, evidence for such judgment often stems from preclinical efficacy studies (PCESs). We undertook a systematic investigation of application materials (investigator brochures [IBs]) presented for ethics review for phase I and II trials to assess the content and properties of PCESs contained in them. Using a sample of 109 IBs most recently approved at 3 institutional review boards based at German Medical Faculties between the years 2010–2016, we identified 708 unique PCESs. We then rated all identified PCESs for their reporting on study elements that help to address validity threats, whether they referenced published reports, and the direction of their results. Altogether, the 109 IBs reported on 708 PCESs. Less than 5% of all PCESs described elements essential for reducing validity threats such as randomization, sample size calculation, and blinded outcome assessment. For most PCESs (89%), no reference to a published report was provided. Only 6% of all PCESs reported an outcome demonstrating no effect. For the majority of IBs (82%), all PCESs were described as reporting positive findings. Our results show that most IBs for phase I/II studies did not allow evaluators to systematically appraise the strength of the supporting preclinical findings. The very rare reporting of PCESs that demonstrated no effect raises concerns about potential design or reporting biases. Poor PCES design and reporting thwart risk–benefit evaluation during ethical review of phase I/II studies.
Susanne Wieschowski, William Wei Lim Chin, Carole Federico, Sören Sievers, Jonathan Kimmelman, Daniel Strech
[link url="http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.2004879"]PLOS Biology abstract[/link]