Korean doctors are positive about using the generative artificial intelligence chatbot ChatGPT in the medical field but think its role should be limited to assisting in diagnosis and prescription.

That is because there is not yet a sufficient basis for trusting medical information generated by ChatGPT, according to a recent survey.

InterMD Company, a knowledge- and information-sharing community platform exclusively for physicians, said Tuesday that 56.8 percent of respondents to its opinion poll answered positively about using ChatGPT in medical fields.

However, the responding doctors thought its use should be limited to aiding diagnosis and prescription. A total of 1,008 physicians participated in the survey, conducted from April 25 to 26 (95 percent confidence level, with a sampling error of ±1.25 percentage points).

The results showed that 88.5 percent of respondents knew about ChatGPT, but only 38.4 percent had used it. Among those who had, 71.8 percent said they received satisfactory answers.

The 56.8 percent of surveyed doctors who replied positively cited various reasons, such as reducing workload by handling repetitive tasks like filling out documents (28.8 percent), shortening decision-making time by analyzing various clinical data (22.5 percent), and simplifying the treatment process (10.5 percent).

In contrast, 13.5 percent were negative about using ChatGPT in medical fields, while 27.9 percent said they were not sure. In a multiple-response question, the most commonly cited reason for the negative view was that ChatGPT's medical evidence and reliability are too weak for use in the medical field.

More specifically, 24.4 percent pointed to credibility problems, 18.5 percent said they could not know the criteria and grounds for its medical judgments, and 27.4 percent cited uncertainty over who bears responsibility for medical judgments and their results. Another 8.5 percent expressed concern about ethical and social problems that could arise even if there were no technical problems.

Asked how ChatGPT should be used, the largest share of respondents (43.8 percent) said it should serve only as a diagnostic and prescription aid. In addition, 19.2 percent said it should learn reliable and accurate medical information, while 14.8 percent opined that it should be used only to organize information without making medical judgments. Some 10 percent of respondents said there should be a process for experts to verify information generated by ChatGPT.

“It will be difficult for AI to replace doctors, but treatment patterns will likely change in part as AI assists them,” said an internal medicine specialist.

A pediatrician said, “If AI’s domain expands, doctors’ work will dwindle. However, as machines cannot take responsibility for medical judgments, new domains will likely appear for doctors.”

InterMD Company CEO Lee Young-do said, “As the survey results show, ChatGPT is significant as an ‘assistant’ in doctors’ work. Continuous efforts will be needed to reduce its errors.”

ChatGPT is an AI model specialized in language, developed by the U.S. company OpenAI and released last Dec. 1.
