Fraud, bias, iffy authorship ‘non-negligible practices’ in nuke med research

Nuclear radiologists are, on the whole, confident in the scientific soundness of studies published in their field. Those working in Asia are especially trusting.

However, top peer-reviewed journals serving the subspecialty show nontrivial rates of scientific fraud, publication bias and honorary authorship.

This is the finding of researchers in the Netherlands who surveyed almost 2,000 corresponding authors of studies published in any of 15 general nuclear medicine journals in 2021.

Thomas Christian Kwee, MD, PhD, of University Medical Center Groningen and colleagues had their work published in the September issue of the Journal of Nuclear Medicine [1].

Of the 254 recipients who completed the survey (a 12.4% response rate), 54 researchers (21%) indicated they’d witnessed or suspected scientific fraud in their department over the past five years.

A small but not inconsiderable slice, 11 corresponding authors (4.3%), owned up, albeit anonymously, to having acted dishonestly themselves.

In the U.S., the HHS’s Office of Research Integrity defines scientific fraud, or “research misconduct,” as “fabrication, falsification or plagiarism in proposing, performing or reviewing research, or in reporting research results.”


When Journal Editors Accept Positive Findings, Reject Neutral or Negative

An overwhelming majority of survey respondents, 87.4% (222 of 254), witnessed, suspected or benefited from publication bias.

This is the tendency of journal editors to selectively publish manuscripts with statistically significant positive results, especially those with good potential for getting cited in future studies, while spurning studies with negative, neutral or nonsignificant findings (and, thus, unexciting conclusions, takeaway points and “impact factor” juice).

Further, Kwee and co-authors report almost 40% of respondents said they’d experienced honorary authorship practices. These are typically instances in which department leaders are named as co-authors even when they contributed little or nothing to the study at hand.


Perverse Incentives Reward Bad Choices

Asked to rate their overall confidence in the integrity of published science on a scale of 1 (lowest) to 10 (highest), respondents returned a median score of 8 (range: 2 to 10), Kwee and colleagues report.

On multivariate analysis, the team found, researchers in Asia had significantly more confidence in the integrity of published work, with a beta coefficient of 0.983.

Also, a subset of 22 respondents raised additional concerns beyond fraud, bias and iffy authorship. These write-ins included concerns about authorship criteria and assignments, the generally poor quality of published studies, and perverse incentives created by journals and publishers.

The latter have been defined by the economics-of-science authority Paula Stephan, PhD, as incentives that “encourage people to make one decision instead of another for monetary reasons” only to end up rewarding “bad financial choices, such as [needlessly] expanding labs and hiring too many temporary scientists.”

In the present study, Kwee et al. conclude that scientific fraud, publication bias and honorary authorship “appear to be non-negligible practices in nuclear medicine.”

Abstract here, full study behind paywall.

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.
