Dartmouth Health Study Highlights Risks of Misleading Conclusions From AI Medical Imaging Models

by Kaia

Lebanon, NH – As artificial intelligence (AI) continues to reshape the healthcare landscape, its application in fields like diagnostic imaging promises to enhance the accuracy and speed of medical evaluations. However, a recent study led by Dartmouth Health researchers raises significant concerns about the potential pitfalls of relying on AI in medical research.

The study, conducted in collaboration with the Veterans Affairs Medical Center in White River Junction, VT, and published in Nature’s Scientific Reports, reveals how even highly accurate AI models can arrive at misleading conclusions through a phenomenon known as “shortcut learning.” The researchers applied AI models to knee X-rays from the National Institutes of Health-funded Osteoarthritis Initiative and found the models could predict traits with no medical relevance, such as whether patients had consumed refried beans or beer. These predictions, though made with surprising accuracy, have no clinical basis and expose the potential for AI to exploit unintended patterns in the data.

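To make the idea concrete, the sketch below is a purely illustrative toy example in Python, not the researchers’ code. A simple model is given one weak but genuine signal and one spurious “shortcut” feature (imagine a scanner model or site marker) that happens to track the outcome in the training data; the model looks excellent while the shortcut holds and falls apart once that correlation is broken.

```python
# Toy illustration of "shortcut learning" -- NOT the Dartmouth study's code.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, shortcut_follows_label):
    # A weak "clinical" signal that genuinely, but noisily, drives the label.
    signal = rng.normal(size=n)
    label = (signal + rng.normal(scale=2.0, size=n) > 0).astype(int)
    if shortcut_follows_label:
        # Spurious feature (think: scanner or clinic site) that happens
        # to track the label in this particular dataset.
        shortcut = label + rng.normal(scale=0.1, size=n)
    else:
        shortcut = rng.normal(size=n)  # correlation broken
    return np.column_stack([signal, shortcut]), label

X_train, y_train = make_data(2000, shortcut_follows_label=True)
X_same, y_same = make_data(2000, shortcut_follows_label=True)     # shortcut intact
X_shift, y_shift = make_data(2000, shortcut_follows_label=False)  # shortcut removed

model = LogisticRegression().fit(X_train, y_train)
print("accuracy with shortcut intact:", round(model.score(X_same, y_same), 3))
print("accuracy when shortcut breaks:", round(model.score(X_shift, y_shift), 3))
```
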

“While AI has the potential to transform medical imaging, we must be cautious,” said Dr. Peter L. Schilling, an orthopedic surgeon at Dartmouth Health’s Dartmouth Hitchcock Medical Center (DHMC) and senior author of the study. “AI can detect patterns that are invisible to the human eye, but not all of these patterns are meaningful or reliable. It’s essential to recognize these risks to avoid drawing misleading conclusions and maintain scientific integrity.”

The study highlights the tendency of AI algorithms to rely on confounding variables—such as differences in X-ray equipment or clinical site markers—rather than clinically relevant features to make predictions. Despite efforts to eliminate these biases, the AI models continued to uncover hidden data patterns, raising concerns about the models’ ability to make accurate clinical predictions without human oversight.

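One routine sanity check, sketched below with entirely made-up numbers, is to probe whether a nuisance variable such as the clinical site can be recovered from a model’s learned features. The embeddings, site labels, and probe here are hypothetical stand-ins rather than the study’s pipeline; the point is only that if a simple classifier recovers the site well above chance, that confounder is available to the model and can leak into its clinical predictions.

```python
# Hypothetical "confound probe" on stand-in data -- not the study's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n, d = 1000, 64

# Stand-ins for learned image embeddings and per-image clinical-site labels.
features = rng.normal(size=(n, d))
site = rng.integers(0, 3, size=n)

# Simulate a subtle site-specific shift baked into the embeddings
# (think: different X-ray equipment or site markers).
site_offsets = rng.normal(size=(3, d)) * 0.5
features += site_offsets[site]

# If a simple probe recovers the site well above chance (~0.33 for 3 sites),
# the confounder is present in the features and can drive predictions.
probe = LogisticRegression(max_iter=1000)
scores = cross_val_score(probe, features, site, cv=5)
print("site-recovery accuracy:", round(scores.mean(), 3), "(chance ~0.33)")
```
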

Brandon G. Hill, a machine learning scientist at DHMC and co-author of the study, further emphasized the importance of scrutinizing AI models in medical research. “This issue goes beyond bias based on race or gender. We found that the algorithm could even predict the year an X-ray was taken. These patterns can be deeply misleading,” Hill said. “When we prevent one type of bias, the model simply adapts and learns a different, previously overlooked pattern. Researchers need to be aware of how easily these unintended associations can distort conclusions.”

The findings underscore the critical need for rigorous evaluation standards when using AI in medical research. Over-reliance on AI models without a thorough understanding of their limitations could lead to erroneous clinical insights and flawed treatment strategies.

Hill added, “The burden of proof becomes much higher when using AI models to discover new patterns in medicine. Part of the problem is that we often assume the model ‘sees’ things the way we do. But it doesn’t. The model doesn’t have logic or reasoning in the human sense—it’s almost like dealing with an alien intelligence.”

The study, which also includes contributions from Frances L. Koback, a third-year student at Dartmouth’s Geisel School of Medicine, calls for greater vigilance in the integration of AI into medical practice to ensure that such technologies enhance, rather than hinder, patient care.
