Does artificial intelligence ease—or amplify—the workload?
When artificial intelligence (AI) entered medical diagnostics, it was heralded as a solution to the inefficiencies and inaccuracies of human interpretation. But at a forum hosted by the Korean Society of Radiology last Friday, a more cautious narrative emerged—one less about breakthroughs and more about the complications of integrating AI into clinical practice.
“We must critically examine whether diagnostic support AI is truly helping patients,” said Park Seong-ho, a professor of radiology at Asan Medical Center. “So far, there are few examples of AI demonstrating significant improvements in clinical practice.”
Park, who also serves as editor-in-chief of the Korean Journal of Radiology, has long been skeptical of the hype surrounding AI in medicine. His concerns, grounded in the day-to-day messiness of hospital life, offer a stark contrast to the optimism often found in academic papers and industry reports.
AI, as it turns out, performs well in controlled environments: lab conditions and FDA trials where variables are tightly managed. But real hospitals are neither controlled nor predictable. Medical data, Park explained, is shaped by the quirks of individual practitioners, the idiosyncrasies of institutions, and the shifting nature of patient populations. “This creates fundamental limitations in generalizing AI performance,” he said.
Park pointed to a U.S. case study at the University of Wisconsin-Madison, published in the American Journal of Neuroradiology, as an example. A diagnostic AI system that achieved 91.7 percent sensitivity and 88.6 percent specificity in FDA trials saw its sensitivity plummet to 54.9 percent in real-world use, missing nearly half of the fractures it was meant to flag on CT scans. “AI is only as good as the environment it’s designed for—and most environments are far from perfect,” Park said.
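For readers less familiar with the two metrics the forum kept returning to, a minimal sketch may help: sensitivity is the share of true cases a system flags, and specificity the share of non-cases it correctly clears. The counts in the example below are invented for illustration only, not data from the Wisconsin study.

```python
# Sensitivity and specificity computed from confusion-matrix counts.
# The counts used here are made-up illustration values, not figures from the AJNR study.

def sensitivity(tp: int, fn: int) -> float:
    """Share of actual positives (e.g., real fractures) the model catches."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Share of actual negatives the model correctly clears."""
    return tn / (tn + fp)

# A drop in sensitivity means more missed fractures (false negatives),
# which is what the real-world figures described above reflect.
print(f"sensitivity: {sensitivity(tp=55, fn=45):.1%}")   # 55.0%
print(f"specificity: {specificity(tn=89, fp=11):.1%}")   # 89.0%
```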
Burnout in the age of algorithms
AI has also been touted as a solution to radiologist burnout, but recent data suggests it might do the opposite.
A November 2024 study published in JAMA surveyed 6,700 Chinese radiologists and uncovered an unexpected correlation: those who regularly used AI reported slightly higher rates of burnout—40.9 percent compared to 38.6 percent among those who used AI sparingly.
The findings challenged a key assumption about AI’s role in reducing workload. Instead of alleviating strain, the responsibility of verifying AI-generated results appeared to add a layer of cognitive burden, leaving radiologists more fatigued than before.
“While AI can improve sensitivity and specificity when used by specialists, it doesn’t necessarily double efficiency,” said Professor Choi Joon-il of the Catholic University of Korea Seoul St. Mary’s Hospital. He explained that radiologists still have to verify each AI-flagged finding, often increasing overall interpretation time.
A study from Korea University College of Medicine, published in European Radiology, reinforced this point. Four radiology residents each read 3,047 chest X-rays twice, once with and once without AI assistance. While AI improved diagnostic metrics such as sensitivity and specificity, it also added an average of 2.96 to 10.27 seconds per case.
The numbers may seem trivial, but in a busy hospital setting, those extra seconds snowball into hours, contributing to what Choi called “digital fatigue.”
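To put those seconds in perspective, here is a rough back-of-the-envelope sketch. The per-case overhead comes from the European Radiology figures above; the daily reading volume of 100 studies is an assumed, illustrative number, not one reported by the researchers.

```python
# Rough estimate of how per-case AI verification overhead accumulates.
# The overhead range (seconds) is taken from the European Radiology study cited above;
# cases_per_day is an assumed, illustrative workload.

cases_per_day = 100
overhead_low_s, overhead_high_s = 2.96, 10.27

extra_min_low = cases_per_day * overhead_low_s / 60
extra_min_high = cases_per_day * overhead_high_s / 60

print(f"Extra verification time per day: {extra_min_low:.0f}-{extra_min_high:.0f} minutes")
# Roughly 5-17 extra minutes a day, or about 25-85 minutes over a five-day week.
```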
AI’s role: a specialist’s tool or generalist’s crutch?
AI’s potential role is still debated. Some argue it could empower general practitioners by bridging the knowledge gap between them and specialists. But critics caution that AI may inadvertently encourage overreliance.
Park cited a 2023 study from JAMA that highlighted potential risks of AI in medical diagnostics. It found that general practitioners interpreting chest X-rays flagged by AI as pneumonia saw their diagnostic accuracy drop from 73 percent to 61.7 percent when the AI’s suggestions were incorrect. For non-specialists, trusting AI too readily can lead to errors that compound over time.
“Humans and AI have different strengths, but they are not necessarily complementary,” Park noted. AI might not tire, but it also lacks the intuition of a seasoned radiologist—the ability to see patterns invisible to an algorithm. Striking the right balance between human expertise and AI support, he argued, is crucial, especially in life-and-death situations.
