How much should a retraction tell us?

Would the scientific record be better served if we could see the falsified data images from retracted papers, to better understand the errors? In late November 2012, the U.S. Office of Research Integrity (ORI) sanctioned University of Kentucky researcher Eric Smart, PhD, after determining that he had falsified or fabricated 45 figures in 10 published articles (plus grant proposals). Most were images of Western blots. Thomson Reuters' Web of Knowledge reported that “some of [them] were cited more than 100 times.” How does this affect the scientific record?

William Sessa of Yale University School of Medicine, commenting online on an article about the case in The Scientist, said that he was “shocked at the extent of misconduct” and added, “Since I do not know which aspects of the figures were incorrect or misrepresented, it is difficult to assess the impact on the field.”

The ORI findings included details of the data-image fabrication and the overall misconduct, offering some understanding of the errors and their impact on the research and on science more broadly. But because no data images were included with the retractions, researchers and readers cannot know exactly what was falsified. Without fuller information, it is difficult to know which results in the affected papers can or cannot be relied upon. Does this lack of transparency about falsified data images foster a kind of tacit acceptance of repeat transgressions?

The research community would be better served if falsified data images in confirmed ORI misconduct cases were made public, so that reviewers and researchers could know which images and results were invalid. Making the images public would also demonstrate a commitment to deterring such falsification and manipulation. We welcome your comments.
