Citation cartels, ghost authorship, and fake peer reviews: Fraud is fueling a crisis in science. Here's what we need to do to stop it | Keith Yates

In 2023, more than 10,000 scientific papers were retracted after their findings were found to be fraudulent. (Image: Deagreez/Getty Images)

Academic dishonesty in science is becoming a serious problem. For the sake of the reputation of our academic institutions, and of science itself, measures must be taken to curb these dubious practices.

In everyday usage, the term “fraud” refers to the act of deceiving someone to obtain financial gain illegally. Academic fraud carries a different meaning: the perpetrator seeks academic recognition through deception, dishonesty, and the provision of false information.

But there is also a growing trend toward bibliometric manipulation. This includes practices such as self-citation, citation cartels, and coercive citation. These practices are problematic because citations are the currency through which articles in scientific journals, and their authors, establish their authority: the more other researchers cite your article, the more influential it is deemed to be. Artificially inflating the number of citations to an author or a journal therefore distorts the perceived importance of work in a given field.

Other forms of author and journal misconduct raise similar concerns. In sham peer review, authors suggest colleagues to review their manuscripts but supply false contact details under their own control. If journal editors fail to verify this information, authors can essentially write their own reviews. In “gift authorship,” scientists list friends or colleagues as authors of their articles even when they were not involved in the work, allowing those colleagues to artificially inflate their publication and citation counts.

There are even papers produced entirely by “phantom” ghostwriters, whose named “authors” had little or no connection to the work at all.

In 2023, the number of peer-reviewed articles that were published but later retracted because of fraudulent findings exceeded 10,000 for the first time. And these papers, the ones in which the fraud was actually detected, may represent only the tip of the iceberg. Some authors suggest that as many as one in seven scientific papers is fraudulent, although estimates vary.

To some extent, academia has created these problems for itself by relying increasingly on metrics to evaluate the performance of a scientist, a journal, or an institution. The h-index, a measure that combines how many papers a scientist has published with how often they are cited (my own h-index is 28, meaning I have 28 papers that have each been cited at least 28 times), and even cruder measures, such as raw publication or citation counts, are used as indirect indicators of impact.
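
As a rough illustration of how the metric works (the citation counts below are invented for the example), an h-index can be computed from a list of per-paper citation counts like this:

```python
# Minimal sketch: compute an h-index from per-paper citation counts.
# The h-index is the largest h such that h papers have at least h citations each.

def h_index(citations: list[int]) -> int:
    """Return the h-index for a list of citation counts, one per paper."""
    ranked = sorted(citations, reverse=True)   # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:                      # this paper clears the threshold
            h = rank
        else:
            break                              # every later paper has fewer citations
    return h

# Invented example: eight papers, six of which have at least six citations.
print(h_index([45, 33, 30, 12, 9, 9, 4, 1]))   # -> 6
```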

Hiring and promotion committees often use these metrics as indicators of academic quality, meaning that a scientist's employment prospects and career advancement may be heavily influenced by these numbers.

For journals, the impact factor, a measure of the average number of citations received by the articles a journal has recently published, is a similar metric used to compare publications. A high impact factor not only enhances the journal's prestige but also attracts higher-quality submissions, creating a positive feedback loop.
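
For a concrete sense of the arithmetic, here is a minimal sketch of the standard two-year impact factor calculation; all of the counts are invented for illustration:

```python
# Sketch of the standard two-year impact factor: citations received in year Y
# to items the journal published in years Y-1 and Y-2, divided by the number
# of citable items it published in those two years. All figures are invented.

citations_2024_to_2022_and_2023_items = 1240   # hypothetical citation count
citable_items_2022 = 310                       # hypothetical article count
citable_items_2023 = 290                       # hypothetical article count

impact_factor_2024 = citations_2024_to_2022_and_2023_items / (
    citable_items_2022 + citable_items_2023
)
print(f"2024 impact factor: {impact_factor_2024:.2f}")   # -> 2.07
```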

The problem with these metrics, which condense performance into a single number, is that they are easily gamed by the unscrupulous and the desperate. This is a classic example of Goodhart's law: “When a measure becomes a target, it ceases to be a good measure.”

These metrics create perverse incentives for scientists to publish as many papers as possible, as quickly as possible, padded with as many self-citations as possible, sacrificing quality and rigor for quantity and speed.

Adding numerous citations to your own articles (regardless of whether they are relevant) and persuading a group of colleagues to do the same in their work is one way to inflate these statistics. It may seem relatively harmless, but padding a reference list with irrelevant articles makes the paper harder to navigate and ultimately reduces the quality of the scientific information it presents.
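
To see how quickly such padding moves the numbers, here is a toy simulation (the authors, paper counts, and citation figures are all invented) in which four researchers agree to cite every one of each other's papers in their next few publications:

```python
# Toy model: four authors each have five papers with modest, genuine citation
# counts. They then form a cartel in which each member cites every paper of
# every other member once in each of five new papers they publish.
# All names and numbers are invented for illustration.

def h_index(citations):
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

genuine = {author: [4, 3, 2, 1, 0] for author in ("A", "B", "C", "D")}

extra = 3 * 5   # 3 cartel partners x 5 citing papers each = 15 extra citations per paper
inflated = {author: [c + extra for c in papers] for author, papers in genuine.items()}

for author, papers in genuine.items():
    print(author, h_index(papers), "->", h_index(inflated[author]))
    # every member jumps from an h-index of 2 to 5 without any genuine new impact
```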

When I recently submitted a paper, one of the reviewers tasked with assessing it before acceptance asked me to cite a number of completely irrelevant articles. As a senior researcher I felt confident enough to complain to the journal about this reviewer, but more junior colleagues, for whom the publication might be the deciding factor in landing their next job, may not have felt able to complain. A reputable journal would strike such a reviewer from its list, but some journals are less scrupulous than others.

In recent years, there has been a shift away from the traditional academic publishing model, in which journals make money by charging readers for access to articles, toward an “open access” model. At first glance, open access democratizes research by allowing the public, which often funds the work (albeit indirectly) through government grants, to read it free of charge. To support this, research funders often give universities money to cover “article processing charges,” typically amounting to thousands of dollars per paper, which universities pay to journals in exchange for making published articles openly accessible.

But this shift to open access has created another perverse incentive. Because revenue no longer depends on readers, article quality is no longer a critical concern for some cynical journals: even if no one reads the papers, the money is already in the bank, and the papers still count toward the metrics. The incentive for unscrupulous journals and scientists alike is to publish as many articles as quickly as possible. As a result, the quality and reputation of science inevitably suffer.

Combating scientific fraud

So, what can be done to reverse the growing threat of scientific fraud? A two-part report commissioned by the International Mathematical Union (IMU) and the International Council for Industrial and Applied Mathematics (ICIAM) presents some recommendations for combating this phenomenon.

From the very top, decision-makers, from politicians to funding agencies, should encourage a move away from easily gamed metrics such as university rankings, journal rankings, impact factors, and h-indices. In particular, funding decisions should be decoupled from these metrics.

At the institutional level, research organizations must discourage the use of bibliometrics in promotion and recruitment decisions; otherwise they risk letting scientists who game the system outcompete their more diligent colleagues. Institutions can also vote with their feet by declining to pay article processing charges, depriving predatory journals of their primary source of income.

A significant part of the problem is a simple lack of awareness among scientists and those who work alongside them. Institutions should do more to inform their researchers and supervisors about these forms of academic dishonesty.

Of course, much of the responsibility for reducing academic dishonesty lies with researchers themselves. That means choosing carefully which editorial boards to join, which journals to submit papers to, and which to peer-review for. It also means speaking openly about predatory behavior, which is easier said than done: many who call it out prefer to do so anonymously, fearing reprisals from publishers or even their own peers. We must therefore also foster a culture in which whistleblowers are protected and supported by their institutions.

Ultimately, whether good science will become mired in an ever-growing morass of poor-quality research or whether we can turn the tide depends on the integrity of researchers and the awareness of the organizations that promote and fund it.

“Live Science Opinion” gives you insight into the most important science issues affecting you and the world around you today, written by experts and leading scientists in their fields.

Keith Yates, ABSW Media

Keith Yates is an ABSW media fellow at Live Science. His primary role is as Professor of Mathematical Biology and Public Affairs at the University of Bath, UK. He covers mathematics and health, and his work has appeared in publications including The Guardian, The Independent, the New Statesman, BBC Future, and Scientific American. His science journalism has won awards from the Royal Statistical Society and The Conversation. Keith holds a BA in mathematics, an MSc in mathematical modeling, and a PhD in systems biology from the University of Oxford. He has written two popular science books: The Maths of Life and Death and How to Expect the Unexpected.
