AI beats doctors in Harvard emergency triage diagnosis trial

AI systems decisively outperformed human doctors in high-pressure emergency medicine triage, diagnosing more accurately in the potentially life-and-death moments when patients are first brought to hospital, according to a Harvard study, reports The Guardian.

The results were described by independent experts as showing “a genuine step forward” in the clinical reasoning of AIs and came as part of trials that tested the responses of hundreds of doctors against an AI.

The authors said the results, published in the journal Science, showed large language models (LLMs) “have eclipsed most benchmarks of clinical reasoning”.

One experiment focused on 76 patients who arrived at the emergency room of a Boston hospital. An AI and a pair of human doctors were each given the same standard electronic health record to read – typically including vital sign data, demographic information and a few sentences from a nurse about why the patient was there.

The AI identified the exact or very close diagnosis in 67% of cases, beating the human doctors, who were right only 50%-55% of the time.

The study showed the AI’s advantage was particularly pronounced in triage circumstances requiring rapid decisions with minimal information. The diagnostic accuracy of the AI – OpenAI’s o1 reasoning model – rose to 82% when more detail was available, compared with the 70%-79% accuracy achieved by the expert humans, though this difference was not statistically significant.

The AI also outperformed a larger cohort of human doctors when asked to provide longer-term treatment plans, such as prescribing antibiotic regimens or planning end-of-life care.

The AI and 46 doctors were each asked to examine five clinical case studies, and the AI produced significantly better plans, scoring 89% compared with 34% for humans using conventional resources such as search engines.

But it is not curtains for emergency doctors yet, the researchers said. The study tested humans against AIs only on patient data that can be communicated via text. The AI’s reading of other signals, such as a patient’s level of distress and visual appearance, was not tested.

That means the AI was performing more like a clinician producing a second opinion based on paperwork.

“I don’t think our findings mean that AI replaces doctors,” said Arjun Manrai, one of the lead authors of the study who heads an AI lab at Harvard Medical School. “I think it does mean that we’re witnessing a really profound change in technology that will reshape medicine.”

Dr Adam Rodman, another lead author and a doctor at Boston’s Beth Israel Deaconess Medical Centre where the study took place, said AI LLMs were among “the most impactful technologies in decades”. Over the next decade, he said, AI would not replace physicians but join them in a new “triadic care model … the doctor, the patient, and an artificial intelligence system”.

In one case in the Harvard study, a patient presented with a blood clot in the lungs and worsening symptoms. Human doctors thought the anti-coagulants were failing, but the AI noticed something the humans did not: the patient’s history of lupus suggested the disease itself might be causing the lung inflammation. The AI was proved correct.

Nearly one in five US physicians is already using AI to assist diagnosis, according to research published last month. In the UK, 16% of doctors are using the tech daily and a further 15% weekly, with “clinical decision-making” being one of the most common uses, according to a recent Royal College of Physicians survey.

The UK doctors’ biggest concerns were AI error and liability risks. Billions are being invested in AI healthcare companies, but questions remain about the consequences of AI error.

“There is not a formal framework right now for accountability,” said Rodman, who also stressed patients ultimately “want humans to guide them through life or death decisions [and] to guide them through challenging treatment decisions”.

Professor Ewen Harrison, co-director of the University of Edinburgh’s centre for medical informatics, said the study was important and showed that “these systems are no longer just passing medical exams or solving artificial test cases”.

“They are starting to look like useful second-opinion tools for clinicians, particularly when it is important to consider a wider range of possible diagnoses and avoid missing something important.”

Dr Wei Xing, an assistant professor at the University of Sheffield’s school of mathematical and physical sciences, said some of the other findings suggested doctors may unconsciously defer to the AI’s answer rather than thinking independently.

“This tendency could grow more significant as AI becomes more routinely used in clinical settings,” he said.

He also highlighted the lack of information about which patients the AI was worse at diagnosing and whether it struggled more with elderly patients or non-English speakers.

He said: “It does not demonstrate that AI is safe for routine clinical use, nor that the public should turn to freely available AI tools as a substitute for medical advice.”

Study details

Performance of a large language model on the reasoning tasks of a physician

Peter Brodeur, Thomas Buckley, Zahir Kanjee et al.

Published in Science on 30 April 2026

Abstract

More than 65 years ago, complex clinical diagnostic reasoning cases were introduced as the gold standard for the evaluation of expert medical computing systems, a standard that has held ever since. In this study, we report the results of a physician evaluation of a large language model (LLM) on challenging clinical cases across five experiments with a baseline of hundreds of physicians. We then report a real-world study comparing human expert and artificial intelligence (AI) second opinions in randomly selected patients in the emergency room of a major tertiary academic medical center. In all experiments, the LLM outperformed physician baselines and displayed continued improvement from prior generations of AI clinical decision support. Our study suggests that LLMs have eclipsed most benchmarks of clinical reasoning, motivating the urgent need for prospective trials.
Science article – Performance of a large language model on the reasoning tasks of a physician (Open access)
The Guardian article – AI outperforms doctors in Harvard trial of emergency triage diagnoses (Open access)
See more from MedicalBrief archives:
AI steps in to detect TB
AI chatbots outstrip doctors in diagnoses – US randomised study
ChatGPT diagnoses child’s illness after 17 doctors fail
AI system accurately detects key findings in chest X-rays of pneumonia patients within 10 seconds
