The healthcare industry needs to figure out how to use generative AI properly, experts said during a recent seminar. (Credit: Getty Images)

ChatGPT, once dismissed as a "novelty chatbot," is now widely seen as an inevitable part of healthcare's future. Some say generative AI will "replace doctors," but most experts counter that the technology still needs human oversight to be used properly.

Generative AI lives up to its name by "creating" something that does not yet exist. Unlike traditional models, which must learn from massive amounts of "real-world" data, it generates its own data based on a large language model (LLM), which makes it applicable to rare and incurable diseases with limited source data. It also differs from IBM's Watson for Oncology, which remains a "library search engine," by offering more flexible options based on the data it generates.

It can also be applied to structured data and to the medical records doctors create while communicating with patients in real time. In Korea, Naver Cloud Platform has been developing generative AI-based medical record products since 2021.

Some experts say its level of data analysis and evaluation is already higher than that of medical students.

Professor Kim Hwi-young at Yonsei University's School of Biomedical Systems and Information said the medical community must prepare for the paradigm shift that generative AI will bring. (Captured from a real-time broadcast of an online colloquium held by the Korea National Institute for Bioethics Policy)

Professor Kim Hwi-young of Yonsei University's School of Biomedical Systems and Information explained the “paradigm shift” that generative AI will bring at an online colloquium held by the Korea National Institute for Bioethics Policy (KoNIBP), titled “Social acceptance and issues on the possibility of developing and utilizing large-scale generative AI in healthcare,” on Tuesday.

"Generative AI translates medical knowledge that requires a high level of interpretation and understanding into an easily digestible form. It can analyze clinical content that is difficult to evaluate even for junior doctors and medical students, let alone ordinary people, and perform 'groundwork' used directly in practice or research," Professor Kim said.

‘Know generative AI’s limits and figure out how to use it properly in medicine’

It has limitations, of course. There's the problem of “hallucination,” where generative AI “creates” something that does not exist. There's also the risk of contaminating the medical community's hard-earned databases (DBs) by "splicing together" papers that don't exist and reproducing incorrect knowledge.

"AI cannot understand the problem and evaluate and improve itself," he said. “Eventually, we must recognize the technology's limitations and constantly review and evaluate AI and the data it generates. We must educate to prevent errors and abuse and create guidelines centered on healthcare professionals to ensure safety.”

Kim noted that no matter how good the technology is, it's still a “human endeavor” to use it properly, so the medical community shouldn't treat it as "just one of many technologies" but jointly create the right way to use it.

"Generative AI is a technology that we must use someday. If we turn our backs on it, we risk losing the market to the likes of Google and underutilizing the technology that healthcare needs," Professor Kim said. "It's time for the medical community to start discussing the benefits and risks of AI actively."

Meanwhile, similar views were expressed at a seminar held by the Big Data Clinical Utilization Research Association titled “ChatGPT Medical Utilization Research.”

Experts at the seminar said that the medical community needs to take an active role in overcoming the limitations of current generative AI and settling on a technology that can be of "real help for patients."

Professor Kim Dae-Cheol of the Digital Health Department at Samsung Advanced Institute of Convergence Medicine of Samsung Medical Center said using generative AI in healthcare is at the "tasting" stage.

Medical professionals provide data and "observe how ChatGPT digests the data," he said.

Part of the problem is securing the data resources needed to advance generative AI. "It is difficult for a single medical institution to support a large-scale AI model using internal hospital resources. Opening hospital medical data to the cloud to utilize external resources and infrastructure is also risky," he said.

Within these limitations, he emphasized that "the medical community needs to work together" to find the right place for generative AI in healthcare.

Professor Yoon Deok-Yong of Yonsei University's School of Biomedical Systems and Information said that generative AI "performs as well as existing models and provides better usability, but it remains to be seen whether it can completely surpass existing models."

After all, he added, generative AI learns by watching people.

“Generative AI can ‘easily repeat human mistakes,’” Professor Yoon said. “The challenge is integrating generative AI into hospital practice. It’s time to start thinking about how to use the technology in a way that helps patients.”

 

Copyright © KBR. Unauthorized reproduction and redistribution prohibited.