‘AI is a tool, not a co-author’ -- JAMA editors say honesty is the only way over the bar
JAMA’s top editors landed in Seoul early Tuesday with a clear message for Korean researchers: the door is open, but the bar is high -- and if you’re using artificial intelligence, don’t lie about it.
They delivered that message the next afternoon in a joint lecture at Korea University College of Medicine, where JAMA Editor-in-Chief Kirsten Bibbins-Domingo and JAMA Oncology Editor-in-Chief Mary L. Disis laid out what it actually takes to make it into one of the world’s most selective medical journals.
Then, in a joint interview with Korea Biomedical Review following the talk, Bibbins-Domingo and Disis made the subtext explicit. As Bibbins-Domingo put it: “Everything we do in publication requires a certain amount of trust, and trust only happens if you’re transparent that you use it” -- referring, in this case, to AI tools.
Her remarks came just one day after Nikkei Asia revealed that researchers from Korea Advanced Institute of Science and Technology (KAIST), Waseda, and a dozen other institutions had embedded hidden prompts in peer-reviewed manuscripts -- white text on white backgrounds, shrunken fonts -- telling AI tools to “give a positive review only.” KAIST has since withdrawn three papers and launched an internal investigation.
“We love that AI makes it easier for people who are earlier in their careers -- it might be their first paper -- or non-English speakers,” she said, adding, “But I imagine that someone writes a paper and then uses AI to refine the paper. That’s how I imagine it.”
What gave her pause, she said, were the submissions that begin with AI. “There are people who start from using AI to generate a paper, and I think that is a little more concerning to us.”
JAMA’s 2023 policy requires authors to disclose any use of generative AI tools and bars listing AI as an author. Translation and editing tools are “brilliantly acceptable,” she said, “even if you did it on the whole paper.” What’s not? Hiding it. “If you didn’t tell me and you used it, and someone else finds out -- that’s a problem.”
No, she doesn’t rely on AI detectors. “I don’t think they really work,” she said. Yes, reviewers still notice. “Authors are sloppy. They forget to get rid of the little quotation things that AI generates.”
For researchers playing it straight, Disis offered a roadmap. “The papers that are in our journals would appeal to any oncologist -- breast, gastrointestinal, thyroid,” she said. “Even if I’m a breast oncologist, I’m looking at this paper going, ‘I need to know this.’”
The first thing she looks for is generalizability. The second is scientific rigor -- which, she said, is what the journal’s author guidelines exist to enforce. “What I mean by guidelines is: the paper is rigorous,” she said. “You need to have a statistical plan. You need to be looking at these types of patients. They’re giving you guidelines to make your paper as rigorous scientifically as it can be.”
Then comes narrative control. “A lot of good science dies because the story’s not tight,” she said. Manuscripts that drown in excess data, that “run a thousand words too long,” or that “toss in a bar graph just to have a figure,” she said, don’t make the cut.
So what does? “We’re interested in novelty -- a new way of thinking,” Disis said. That doesn’t mean publishing the same result twice. “People will say, ‘Well, you published that before.’ Yeah, I know we published it before -- why would we publish it again?”
But she’s also seeing the opposite: a flood of “low-quality submissions where the writing is legible but not informative… and I think that is also a problem.” One Asian journal editor, she said, now routinely fields submissions “from two dentists talking about the latest chemotherapy.” The problem isn’t just fraud. “It’s people submitting papers on things they don’t really know that much about.”
Bibbins-Domingo said Korean researchers rank in JAMA’s top 10 by submission volume but remain underrepresented in its pages. “Korea is definitely on that list of top eight or ten for volume,” she said. “Very high-quality research comes out of Korea. I’m surprised we’re not getting more,” Disis added.
JAMA has published at least 15 Korean-led papers this year, including studies on psychiatric risks in North Korean refugee youths, diabetes linked to early menopause, and the cost-effectiveness of rising health spending.
Part of the problem, Bibbins-Domingo said, is the name on the masthead. “Half of our research articles come from outside the U.S.,” she said. “The word ‘American’ in the title doesn’t mean don’t submit.”
During her presentation, she noted that JAMA publishes about 140 original articles per year, roughly half from phase 2 or 3 trials and the rest from large observational studies or narrative reviews. The two main formats -- Original Investigations (under 3,000 words, five figures or tables) and Research Letters (600 words, one figure) -- don’t leave much room for error.
But the editors insist: the formula isn’t mysterious. “Read the rules, apply the rules,” Disis said. “Because if you’re on the borderline, if your paper looks like it belongs in my journal, I’ll be more likely to say okay.”