Potential jurors favor use of AI in precision medicine

Physicians who follow artificial intelligence (AI) advice may be less likely to be found liable for medical malpractice than is commonly thought, according to a new study of potential jurors in the U.S.

The study provides the first data on physicians’ potential liability for using AI in personalized medicine, which often deviates from standard care. “New AI tools can assist physicians in treatment recommendations and diagnostics, including the interpretation of medical images,” remarked Kevin Tobia, JD, PhD, assistant professor of law at the Georgetown University Law Center in Washington, D.C. “But if physicians rely on AI tools and things go wrong, how likely is a juror to find them legally liable? Many such cases would never reach a jury, but for one that did, the answer depends on the views and testimony of medical experts and the decision-making of lay juries. Our study is the first to focus on that last aspect, studying potential jurors’ attitudes about physicians who use AI.”

To determine potential jurors’ judgments of liability, the researchers conducted an online study of a representative sample of 2,000 adults in the U.S. Each participant read one of four scenarios in which an AI system provided a drug dosage recommendation to a physician. The scenarios varied two factors: the AI recommendation (a standard or nonstandard drug dosage) and the physician’s decision (to accept or reject that recommendation). In all scenarios, the physician’s decision subsequently caused harm to the patient.

Study participants then evaluated the physician’s decision by rating their agreement that the treatment decision was one that could have been made by “most physicians” and “a reasonable physician” in similar circumstances. Higher agreement indicated that the decision was seen as more defensible and, therefore, implied lower liability.

Results from the study showed that participants used two factors to evaluate physicians’ use of medical AI systems: (1) whether the treatment provided was standard and (2) whether the physician followed the AI recommendation. Participants judged physicians who accepted a standard AI recommendation more favorably than those who rejected it. However, physicians who received a nonstandard AI recommendation were not judged safer from liability for rejecting it.

While prior literature suggests that laypersons are strongly averse to AI, this study found that they are, in fact, not strongly opposed to a physician’s acceptance of AI medical recommendations. This finding suggests that the threat of legal liability for physicians who accept AI recommendations may be smaller than is commonly thought. The study has been published in The Journal of Nuclear Medicine (JNM).

In an invited perspective on the JNM article, W. Nicholson Price II and colleagues noted, “Liability is likely to influence the behavior of physicians who decide whether to follow AI advice, the hospitals that implement AI tools for physician use and the developers who create those tools in the first place. Tobia et al.’s study should serve as a useful beachhead for further work to inform the potential for integrating AI into medical practice.”


