In our latest exploration, we embarked on a journey to test the resilience of the peer-review system
against the ever-growing influence of AI in scientific literature. My colleague and I
conceived a purely fictional MRI technique—Magnetic Resonance Audiometry (MRA)—and asked
an AI model to generate an entire manuscript around it. The result? A polished, technically robust
research paper, complete with equations, references, and even AI-generated figures, which
astonishingly passed the first round of peer review with only minor revisions requested.
This experiment sheds light on a deeply concerning issue: if an entirely fabricated study can slip
through the cracks of peer review, what does this say about the state of our scientific publishing
system? Are we truly prepared to handle the surge of AI-generated content that is looming on the
horizon? Our results suggest otherwise. The ease with which this fictional paper navigated the review
process highlights a significant vulnerability—one that could have far-reaching consequences for the
integrity of scientific research.
What struck me most was not just the AI’s ability to generate a seemingly plausible scientific
narrative, but the system’s apparent inability to detect its artificial origins. This isn’t just a glitch in
the system; it’s a warning sign. We must critically reassess how we evaluate scientific work and
develop new tools to safeguard the literature from potential misuse of AI.
In the end, our experiment isn’t merely a critique; it’s a call to action. The future of scientific
publishing depends on our ability to adapt, ensuring that the peer-review process remains a robust
guardian of truth in an age of unprecedented technological advancement.
By the way, I (Sirio) have not written a single word of this commentary. ChatGPT did the whole thing.
It received some instructions, the work we published as input, and a small piece (on a completely
different topic) I wrote a few weeks ago to guide it towards the desired writing style. And there it is –
a plausible commentary entirely written by AI on an article dealing with an entirely AI-written paper.
You know how they say: “Fool me once…”
Key points:
- The scientific community has raised several concerns about the possible fraudulent use of AI in the scientific literature.
- We asked a generative AI program to write a technical report on a non-existent technique, receiving in return a complete, technically sound report, substantiated by equations and references, that passed peer review.
- This experiment showed that the current peer-review system might not be ready to handle the explosion of generative AI techniques, calling for a profound discussion of the entire peer-review process.
Authors: Sirio Cocozza & Giuseppe Palma


