Wednesday, 30 April, 2025

California nurses protest ‘untested’ AI tools

Union nurses in the US are rallying against the use of artificial intelligence (AI) tools which they described as “untested” and “unregulated” during a protest outside the Kaiser Permanente San Francisco Medical Centre last week.

Despite reports that health systems are investing millions of dollars in AI technologies, Michelle Gutierrez-Vo, RN, BSN, a charge nurse at Kaiser Permanente Fremont Medical Centre and president of the California Nurses Association (CNA), said that neither the hospitals nor the tech industry had proved that these tools improve the quality of patient care.

MedPage Today reports that in December 2023, Kaiser Permanente announced it had awarded grants of up to $750 000 to five healthcare organisations for projects to deploy AI and machine-learning algorithms “to enhance diagnostic decision-making in healthcare”.

“Just as there’s oversight for medications … when it comes to being used for patients, this is just as dangerous,” Gutierrez-Vo said of AI tools used in health systems, including those that help staff hospital units, determine resource needs, and filter messages to clinicians.

She added that hospitals’ rush to launch these “untested and unregulated AI technologies” was what motivated hundreds of nurses to attend the protest.

Their hope was that regulators and the public would join the CNA in demanding that developers and employers prove these new systems are safe, effective, and equitable, she said.

While lawmakers and regulators have begun to focus on the development and implementation of AI in healthcare, and research has shown it can be an effective clinical support tool in certain situations, experts warn that generative AI tools still need human oversight because of the risks of using the technology in healthcare settings, and its tendency to introduce biased or incorrect information into clinical decision-making.

Douglas Johnson, MD, MSCI, a medical AI researcher from Vanderbilt University Medical Centre in Nashville, told MedPage Today that the standard approach to implementing this technology has been to first test it with a small team within an institution.

“You can certainly implement it too soon or implement it too broadly,” he said. “There are many ways it could potentially go wrong.”

AI technology offers several intriguing benefits, such as decreasing burnout, he added, but implementing the technology should balance those benefits with patient safety, as well as buy-in from nurses, physicians, and administrators.

Gutierrez-Vo said one of the CNA’s biggest concerns is systems within electronic health records that “ration care by under-predicting how sick a patient might become”.

Kaiser Permanente, like many health systems, uses Epic, a popular electronic health record vendor whose patient acuity system assigns each patient a number reflecting how ill they are.

But it doesn’t factor in a patient’s changing mental status and language barriers, or account for a patient whose mobility may have declined, Gutierrez-Vo said.

“There are many different nuances in the human condition that only eyes and ears, and a trained, specialised nurse, can tell … how much of their time was needed to make sure this patient was safe, and how much more will be needed for the next shift,” she said.

In 2019, Kaiser Permanente Northern California launched the Desktop Medicine Program, which uses natural language processing algorithms to tag messages with category labels and route them to the appropriate respondents.

The system was found to have funnelled 31.9% of more than 4.7m patient messages to a “regional team” of medical assistants, tele-service representatives, pharmacists, and other physicians who “resolved” those messages before they reached individual physician inboxes.

But Gutierrez-Vo said she finds the messaging system problematic.

If a patient who recently had a heart attack messages his physician asking to refill a nitroglycerine prescription, that message should be flagged as urgent. Under the new system, however, it will be categorised as a medication request, deemed “non-urgent”, and directed to a pharmacist, despite signalling a “life-or-death situation”, she explained.

While nursing unions have negotiated “enforceable language” in their contracts requiring that they be notified of new technologies and modifications, employers aren’t always complying with those contracts, she told MedPage Today.

If nurses call for a “hard stop” because, for example, the staffing that results from these technologies appears inappropriate, management is accountable for making those changes immediately.

Kaiser Permanente did not immediately respond to a request for comment.

MedPage Today article – California Nurses Rally Against AI Tools in Healthcare (Open access)

See more from MedicalBrief archives:

AI can’t replicate this crucial aspect of practising medicine

AI helps drugmakers slash clinical trial costs and time

Growing role for AI in everyday medical interactions
