
AI outperforms humans in creating cancer treatments — but doctors balk

Canadian researchers have tested the impact of deploying artificial intelligence (AI) for radiation cancer therapy in a real-world clinical setting, in a unique study involving physicians and their patients, published in Nature Medicine.

The team directly compared physician evaluations of radiation treatments generated by a machine learning (ML) algorithm with conventional treatments generated by humans. For the majority of the 100 patients studied, physicians deemed the ML-generated treatments clinically acceptable for patient care.

Overall, 89% of ML-generated treatments were considered clinically acceptable, and 72% were selected over conventional human-generated treatments in head-to-head comparisons.

Moreover, the ML radiation treatment planning process was 60% faster than the conventional human-driven process, reducing the median overall time from 118 hours to 47 hours. In the long term this could represent substantial cost savings through improved efficiency while at the same time improving the quality of clinical care, a rare win-win.
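The two figures are consistent with each other; as a purely illustrative check of the arithmetic behind the reported 60% reduction:

```python
# Sanity check of the reported planning-time savings:
# conventional process ~118 hours, ML-assisted process ~47 hours.
human_hours = 118
ml_hours = 47

reduction = (human_hours - ml_hours) / human_hours
print(f"Time reduction: {reduction:.1%}")  # prints "Time reduction: 60.2%"
```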

But while the ML treatments were overwhelmingly preferred when evaluated outside the clinical environment, as is done in most scientific work, physician preferences changed when the chosen treatment, whether ML- or human-generated, would actually be used to treat the patient. In that situation, the number of ML treatments selected for patient treatment fell significantly, a note of caution for teams considering deploying inadequately validated AI systems.

The study team was led by Drs Chris McIntosh, Leigh Conroy, Alejandro Berlin, and Thomas Purdie.

"We have shown that AI can be better than human judgement for curative-intent radiation therapy treatment. In fact, it is amazing that it works so well," says McIntosh, of the Peter Munk Cardiac Centre, Techna Institute, and chair of Medical Imaging and AI at the Joint Department of Medical Imaging and University of Toronto. "But a major finding is what happens when you actually deploy it in a clinical setting in comparison to a simulated one."

Adds Purdie, a medical physicist at the Princess Margaret Cancer Centre: "There has been a lot of excitement generated by AI in the lab, and the assumption is that those results will translate directly to a clinical setting. But we sound a cautionary alert in our research that they may not.

"Once you put ML-generated treatments in the hands of people who are relying upon it to make real clinical decisions about their patients, that preference towards ML may drop. There can be a disconnect between what's happening in a lab-type of setting and a clinical one." Purdie is also an Associate Professor, Department of Radiation Oncology, University of Toronto.

In the study, treating radiation oncologists were asked to evaluate two different radiation treatments per patient, one ML-generated and one human-generated, against the same standardized criteria, in two groups of patients who were similar in demographics and disease characteristics.

The difference was that one group of patients had already received treatment, so the comparison was a 'simulated' exercise. The second group was about to begin radiation therapy, so if the AI-generated treatments were judged superior and preferable to their human counterparts, they would be used in the patients' actual treatments.

Oncologists were not told which radiation treatment had been designed by a human and which by a machine. Human-generated treatments were created individually for each patient, as per normal protocol, by specialized radiation therapists. In contrast, each ML treatment was developed by a computer algorithm trained on a high-quality, peer-reviewed database of radiation therapy plans from 99 patients previously treated for prostate cancer at Princess Margaret.

For each new patient, the ML algorithm automatically identifies the most similar patients in the database, using similarity metrics learned from thousands of features derived from patient images and from the delineated target and healthy organs that are a standard part of the radiation therapy treatment process. The complete treatment for the new patient is then inferred from the most similar patients in the database, according to the ML model.
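In ML terms this is a retrieval-style, nearest-neighbour approach: each patient is summarized as a feature vector, and a plan is inferred from the closest matches in the database. The published study uses a random forest to learn the similarity metric; the sketch below is a simplified illustration only, substituting plain Euclidean distance, and all names and dimensions (database_features, the 200-feature vectors) are hypothetical.

```python
import numpy as np

def most_similar_patients(new_patient: np.ndarray,
                          database_features: np.ndarray,
                          k: int = 5) -> np.ndarray:
    """Return indices of the k database patients most similar to the new one.

    The study's algorithm learns its similarity metric (via a random forest)
    from thousands of image- and contour-derived features; plain Euclidean
    distance is substituted here purely for illustration.
    """
    distances = np.linalg.norm(database_features - new_patient, axis=1)
    return np.argsort(distances)[:k]

# Hypothetical data: 99 prior patients (matching the study's training set size),
# each summarized by an illustrative 200-dimensional feature vector.
rng = np.random.default_rng(0)
database_features = rng.normal(size=(99, 200))
new_patient = rng.normal(size=200)

neighbours = most_similar_patients(new_patient, database_features)
print("Treatment inferred from the plans of patients:", neighbours)
```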

Although ML-generated treatments were rated highly in both patient groups, the results in the pre-treatment group diverged from the post-treatment group.

In the group of patients who had already received treatment, ML-generated treatments were selected over human ones 83% of the time. This dropped to 61% in the group whose selected plan would actually be delivered, that is, when the choice was made prior to treatment.
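The abstract describes this drop (83% versus 61%, with 50 patients in each phase) as statistically significant. The article does not say which test was used; purely as an illustration, a standard two-proportion comparison such as Fisher's exact test, run on approximate counts reconstructed from the reported percentages, might look like this:

```python
from scipy.stats import fisher_exact

# Counts reconstructed approximately from the reported percentages
# (about 83% of 50 plans selected in the simulated phase vs 61% of 50
# in the deployment phase); the paper's exact counts and test may differ.
selected_sim, total_sim = 41, 50   # simulated (post-treatment) phase
selected_dep, total_dep = 31, 50   # prospective (pre-treatment) phase

table = [
    [selected_sim, total_sim - selected_sim],
    [selected_dep, total_dep - selected_dep],
]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```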

"In this study, we're saying researchers need to pay attention to a clinical setting," says Purdie. "If physicians feel that patient care is at stake, then that may influence their judgement, even though the ML treatments are thoroughly evaluated and validated."

Conroy, a medical physicist at Princess Margaret, points out that following the highly successful study, ML-generated treatments are now used to treat the majority of prostate cancer patients at Princess Margaret.

That success is due to careful planning, judicious stepwise integration into the clinical environment, and the involvement of many stakeholders throughout the process of establishing a robust ML program, she explains. The program is constantly refined: oncologists are continuously consulted and give feedback, and the results of how well the ML treatments reflect clinical accuracy are shared with them.

Study details:

Clinical integration of machine learning for curative-intent radiation treatment of patients with prostate cancer

Authors: Chris McIntosh, Leigh Conroy, Michael C. Tjong, Tim Craig, Andrew Bayley, Charles Catton, Mary Gospodarowicz, Joelle Helou, Naghmeh Isfahanian, Vickie Kong, Tony Lam, Srinivas Raman, Padraig Warde, Peter Chung, Alejandro Berlin, Thomas G. Purdie.

Published in Nature Medicine, 7 June 2021

Abstract

Machine learning (ML) holds great promise for impacting healthcare delivery; however, to date most methods are tested in ‘simulated’ environments that cannot recapitulate factors influencing real-world clinical practice. We prospectively deployed and evaluated a random forest algorithm for therapeutic curative-intent radiation therapy (RT) treatment planning for prostate cancer in a blinded, head-to-head study with full integration into the clinical workflow. ML- and human-generated RT treatment plans were directly compared in a retrospective simulation with retesting (n = 50) and a prospective clinical deployment (n = 50) phase. Consistently throughout the study phases, treating physicians assessed ML- and human-generated RT treatment plans in a blinded manner following a priori defined standardized criteria and peer review processes, with the selected RT plan in the prospective phase delivered for patient treatment. Overall, 89% of ML-generated RT plans were considered clinically acceptable and 72% were selected over human-generated RT plans in head-to-head comparisons. RT planning using ML reduced the median time required for the entire RT planning process by 60.1% (118 to 47 h). While ML RT plan acceptability remained stable between the simulation and deployment phases (92 versus 86%), the number of ML RT plans selected for treatment was significantly reduced (83 versus 61%, respectively). These findings highlight that retrospective or simulated evaluation of ML methods, even under expert blinded review, may not be representative of algorithm acceptance in a real-world clinical setting when patient care is at stake.


Full study in Nature Medicine (open access)


See also from the MedicalBrief archives:


Google Health using AI to improve breast cancer screening
