A Doctor Published Several Research Papers With Breakneck Speed. ChatGPT Wrote Them All.

Photo Illustration by Erin O’Flynn/The Daily Beast/Getty Images and SSRN

A lightbulb went off in Som Biswas’ head the first time he learned about ChatGPT. A radiologist at the University of Tennessee Health Science Center, Biswas came across an article about OpenAI’s chatbot on the web when it was released in November 2022. While the world at large was still coming to terms with the seismic implications of the technology, Biswas realized he could use it to make at least one facet of his career a whole lot easier.

“I’m a researcher and I publish articles on a regular basis,” Biswas told The Daily Beast. “Those two things linked up in my brain: If ChatGPT can be used to write stories and jokes, why not use it for research or publication for serious articles?”

He needed a proof of concept, so Biswas had the bot write an article about a topic he was already very familiar with: medical writing. After trial and error, Biswas was able to create an article by prompting ChatGPT section by section. When he finished, he submitted the paper to Radiology, a monthly peer-reviewed journal from the Radiological Society of North America. “At the end, I told the editor, ‘All that you read was written by AI,’ so that sort of impressed them a lot,” he said.

A few days later, the paper, “ChatGPT and the Future of Medical Writing,” was published in Radiology after undergoing peer review, according to Biswas. Once it was up, he felt he was on to something. ChatGPT could be used for more than just fooling around with creative projects. He could actually use it to help his career and research.

What Biswas is doing isn’t necessarily unique. Since the release of ChatGPT, academics and researchers just like Biswas have been using large language models (LLMs) as a tool to help with their own writing and research process, and occasionally to generate papers out of whole cloth. While the tools have been helpful in that regard, they have also created a sea change in the scientific community that has many experts worried about the erosion of credibility in academic publishing.

Since his first article, Biswas has used OpenAI’s chatbot to write at least 16 papers in four months, and published five articles in four different journals. The latest was published as a commentary in the journal Pediatric Radiology on April 28. In it, Biswas is listed as the sole author, with an acknowledgement at the end that ChatGPT wrote the article and he edited it.

However, by Biswas’ own admission, the papers he generates aren’t limited to topics within his radiology expertise. In fact, he’s used the bot to write papers on the role of ChatGPT in the military, education, agriculture, social media, insurance, law, and microbiology. He’s successfully had these published in journals specializing in different niche disciplines, including a paper on computer programming in the Mesopotamian Journal of Computer Science, and two letters to the editor on global warming and public health in the Annals of Biomedical Engineering.

A year ago, this type of output might have seemed completely unrealistic. Papers take dozens if not hundreds of hours of research before a single word hits the page. Researchers in the sciences might publish a few papers per year at most. And it’s quite rare for someone to dive into writing on a topic outside their life’s work.


However, researchers using ChatGPT to produce papers have already surpassed their output from previous years by orders of magnitude, and Biswas is one of them. His motivation goes beyond just seeing his byline. As he told The Daily Beast, he wants to be an evangelist for a piece of emerging technology that he believes is going to change the way all researchers do their work forever.

“Health care is going to change. Writing is going to change. Research is going to change,” Biswas said. “I’m just trying to publish now and show it so people can know about it and explore more.”

‘People Are Getting Silly.’

The release of ChatGPT initiated a groundswell of concern about how the LLM would upend industries and practices like copywriting, journalism, student essays, and even comedy writing. The world of academia and scientific literature also prepared itself for a coming upheaval—one that is arguably much more drastic than previously anticipated.

“There’s been a really dramatic uptick in the different articles that we've been getting,” Stefan Duma, a professor of engineering at Virginia Tech, told The Daily Beast. Duma is the editor-in-chief of the Annals of Biomedical Engineering. In the past few months, he said, he has seen an exponential increase in the number of ChatGPT-related papers submitted for publication in his journal, including the two from Biswas that he published in the letters to the editor section.

“The number of [letters to the editor] submissions went from practically zero to probably two or three a week now—so maybe a dozen a month,” he said. “This is astronomically large, because we usually might only get one or two letters about anything per month. Now we get more than 10 just about ChatGPT which is a big increase.”


Letters to the editor, explained Duma, are basically a journal’s opinion section. There are fewer restrictions on the kind of writing and depth of research needed to publish pieces there. That’s why Duma was willing to publish Biswas’ articles on global warming and public health in the section.

However, he added that he’s been rejecting a lot more articles generated by ChatGPT and other LLMs due to their low quality.

“People are getting silly with them,” he said. “People will send me 10 of the same letter with one word changed. We try to make sure that there’s some uniqueness about some of these things. But it’s not a full peer review. People are free to kind of write whatever they want in these letters to the editor. So we have rejected some if it doesn’t add anything novel at all, and it’s just sort of repetitive.”

(Mesopotamian Journal of Computer Science and Radiology did not respond to requests for comment from The Daily Beast.)

Journal editors like Duma aren’t the only ones who have noticed the impact that ChatGPT has had on the academic world. The AI boom has created an entirely new landscape for researchers to navigate, and it’s only becoming harder as these tools proliferate and become more sophisticated.

Elisabeth Bik, a microbiologist and science integrity expert, told The Daily Beast that she’s of two minds about the use of LLMs in academia. On the one hand, she acknowledged that they could become an invaluable tool for researchers whose first language isn’t English, helping them construct coherent sentences and paragraphs.

On the other hand, she has also been following the uptick of researchers who have been plainly abusing the chatbot to churn out dozens of articles in the past few months alone. She claimed that many of these “authors” have not acknowledged that they used ChatGPT or other models to help generate the articles.

“At least [Biswas] is acknowledging that he’s using ChatGPT, so you have to give him some credit,” Bik said. “There’s a bunch of others I’ve already come across who also have published enormous and unbelievable amounts of papers while also not acknowledging ChatGPT. These people just published way too much. Like, that’s just not realistically possible.”

The reason, Bik explained, is simple: “Citations and number of publications are two of the measures where academics are measured.” The more you have, the more legitimate and experienced you might seem in the eyes of academic institutions and scientific organizations. “So if you find an artificial way to crank up these things, it feels like it’s unfair because now he’s going to win all the performance measures.”


The increased use of ChatGPT is also a bleak reflection on the expectations put on researchers in the academic world. “Given the truly crushing pressure to publish, I think academics are going to start relying on ChatGPT to automate some of the more boring parts of writing,” Brett Karlan, a postdoctoral fellow in AI ethics at Stanford University, told The Daily Beast in an email. “And it would be very likely that the same people who churn out barely publishable papers and send them off to predatory journals are going to figure out workflows that automate this with ChatGPT.”

Bik is also concerned that the proliferation of LLMs will only bolster so-called paper mills: black-market operations that undermine traditional academic research by producing fraudulent scientific papers that resemble genuine work and by selling authorship on legitimate studies. Papers produced by paper mills are often heavily plagiarized and reuse data and assets. “You can imagine a person who is a good prompt writer who can just crank out one paper a minute, and then sell papers to authors who need them,” Bik said.

So while ChatGPT could prove a very useful tool to some academics, as Biswas hopes, it and other LLMs also create a sort of perfect storm of ease and efficiency that could allow bad actors to take advantage of an academic publishing industry that is, so far, unprepared to meet these challenges.

An Academic Game Changer

The issues facing academia and research publishing today are the exact same ones that numerous industries like media and journalism must contend with when it comes to these advanced chatbots: the erosion of credibility and the potential for harm.

LLMs and AI more broadly have a long and sordid history with bias, one that has produced numerous reported instances of racist and sexist harm. Chatbots like ChatGPT are no exception. In the first few days after its release, users reported instances in which OpenAI’s LLM told them that only white males make good scientists and that a child’s life shouldn’t be saved if they were an African American boy.

Bias has become a perennial problem with AI. Even as the technology becomes more and more sophisticated, biases seem to always remain. These bots are trained on massive datasets derived from humans, with all their racism, sexism, and misogyny, and those biases can show up in the final product no matter how many filters and guardrails AI developers attempt to put in place.

Academic journals are attempting to keep up with the breakneck pace of these emerging technologies, which seem to evolve and grow more powerful by the minute. Duma told The Daily Beast that his journal, the Annals of Biomedical Engineering, recently enacted a new policy that forbids listing LLMs as co-authors and bars such papers from being published as regular research articles.

“Authorship is very serious and it’s something that we take very seriously,” Duma said. “So anytime we have a paper, the authors have to sign that they’ve contributed substantially to the paper. That’s something ChatGPT can’t be a part of. ChatGPT cannot be an author.”


However, he acknowledged that these tools are here to stay. To say otherwise wouldn’t just be ignorant; it could even be dangerous, because it would keep the industry from adapting accordingly. “I think people need to put their seatbelt on and get ready for it,” Duma said. “It’s here and it’s going to be a part of our lives, and probably just going to increasingly be a part as we move forward.”

Meanwhile, Biswas plans to continue using ChatGPT to help his writing process. He’s especially excited about the release of the latest version of ChatGPT and its new features, particularly its multimodal capabilities. This is the model’s ability to understand images as well as text inputs—something that he said is going to represent another turning point in the relationship between AI and researchers.

“Image to text is a game changer especially for radiology because images are what we do,” said Biswas. “If that’s going to help us, then I think I’m going to publish some more articles that explore it—because if I don’t do it, someone else will.”
