Retraction Watch has been a powerful partner in the examination of falsification in the published record, doing the hard work of gathering retraction notices and categorizing them, generating data from these retractions, highlighting research about retractions, and collecting wide-ranging comments, all in one readily available blog.
At Science Image Integrity our focus is on data-images rather than retractions—on what retractions can tell us about the extent and types of problems with data-images. Earlier this year we looked at a small sample of 2010 and 2011 retractions in PubMed and found that the language in these retractions was inconsistent and often vague, even obscure; some gave no information other than the statement of retraction. (In September, Ferric Fang, MD, and colleagues at the University of Washington examined this problem in detail in their 2012 article “Misconduct accounts for the majority of retracted scientific publications”, confirming our impressions.) Because many retractions lack information, and many others lack clarity, it is not possible to know what proportion of retractions involve problems with data-images.
In addition, retractions involving data-images need a consistent language for describing different kinds of falsification. Earlier this year we published a preliminary schema for kinds of falsification of data-images, and we’ve since published a revised falsification table. It is inevitable that different institutions—the Office of Research Integrity, universities, research institutes—will use different terminology, but we hope that the community will reach a consensus about how to describe types of falsification.
The comments** of Ferric Fang, MD, Professor of Laboratory Medicine and Microbiology at the University of Washington, apply to both the shortcomings of retractions and the need for common terminology:
“Specifically, the classification of data falsification or fabrication, plagiarism, and intentional duplicate publication as forms of ‘author error’ is confusing, as most studies have characterized these practices as misconduct, as opposed to error. Similarly, lumping together methodological or analytic errors with data that lack reproducibility results in a category error because, as you know, we found a number of cases in which ‘data could not be reproduced’ was cited for what turned out to be suspected or documented fraud.

A Chinese proverb is said to state that ‘the beginning of wisdom is to call things by their right names.’ Thus, to understand retractions and to address their underlying causes, it is important not to limit our understanding to the incomplete and sometimes misleading information provided in retraction notices.”
** Subscription required