Scrutinising images for tell-tale signs of manipulation is no longer a hobby for Dr. Elisabeth Bik; it has become a passion. The passion has become so overpowering that she has taken a “year off from paid work to focus more on science misconduct volunteer work”. She has a voluminous number of papers to report and a mountain of them to follow up on. “The only way I felt I can catch up on that is to quit my paid job,” she tweeted.
With no software available to detect image duplication and manipulation, and with journals yet to make pre-publication screening mandatory, it is volunteers like Dr. Bik who are helping cleanse the scientific literature of papers with duplicated and manipulated images. Unlike most other volunteers on the PubPeer website, who remain anonymous, she uses her own name when posting questionable images. Her efforts have already led to many corrections and retractions, though not to the extent she would like. But that has not stopped her from continuing to uncover problematic images in published papers. Here she provides insight into many of the issues surrounding papers with duplicated and manipulated images.
Generally, how responsive are journals in retracting or correcting papers for image duplication and/or manipulation?
Not very responsive. Together with Arturo Casadevall and Ferric Fang, I looked at a dataset of 20,000 papers, in which we found 782 with image problems. This study was published in mBio in 2016. All 782 problematic papers were reported to the journals in which they had been published. After I sent my report to the Editor-in-Chief, most journals would send a standard reply acknowledging receipt of the concerns, usually followed by … no response at all, even after several years.
Of the 782 papers that I reported in 2014 and 2015, 44 have been retracted, two have an expression of concern, and 196 have a corrigendum or erratum. The remaining 540 papers have not been corrected, as far as I know. That means that five years after these problematic papers were reported, only about one third of them have been corrected or retracted. That number is much too low, in my opinion, and it means that journals are not very willing to take any action.
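The figures quoted above can be checked directly; a minimal sketch, using only the numbers stated in the interview:

```python
# Figures quoted in the interview (782 reported papers and their outcomes).
total = 782
retracted = 44
expressions_of_concern = 2
corrections = 196  # corrigenda or errata

addressed = retracted + expressions_of_concern + corrections
remaining = total - addressed

print(addressed)   # 242 papers received some corrective action
print(remaining)   # 540 papers with no action, matching the figure above
print(round(addressed / total * 100))  # 31 — roughly one third
```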
In some cases, journals did not want to take action because the paper was more than six years old — but one could argue that such papers could still be corrected. As you can tell by the numbers above, most journals have not been willing to retract papers. In several cases, papers with images that appeared to have been manipulated just got an erratum or were not corrected at all. This has been frustrating.
In your experience, which journals have been most responsive to problematic images? Did you find any publisher very reluctant to retract papers?
The American Society for Microbiology (ASM) journal editors have been very responsive, although most of them would not correct or retract papers that were over six years old. PLOS was not very responsive in the first three years after I reported the cases, but has been slowly catching up. The Spandidos and Hindawi journals that we included have both been issuing corrections or retractions for almost all the papers we reported, so they were very motivated to correct the literature.
After articles about problematic papers from India were published, researchers were told to respond appropriately. But in almost all cases the authors, even when they admit mistakes, say the mistakes do not change the conclusions. Yet many retracted papers contain similar image problems. So can just a correction be sufficient, or should such papers be retracted?
The decision to retract or simply correct a paper appears to be very random, and perhaps is dependent on the editor of a journal. As you noted, similar duplications might lead to a retraction in one journal, or just an erratum or no action in another journal. There is clearly a need for guidelines as to what types of duplicated images warrant a retraction. Also, even though the authors might state that errors in the papers do not alter the conclusion, editors should not always accept that statement. For example, if more than half of the figures in a paper contain errors, even if unintentional, that clearly affects the conclusions of the paper, no matter what the authors might say. Editors should not always take the answers of the authors for granted.
What should independent Indian researchers do to ensure papers with problematic images are corrected or retracted?
As I said above, there is a need for guidelines on which types of errors could be dealt with by just a correction, and which more serious errors should lead to a retraction. I would love to work together with publishers to develop such guidelines. For example, such guidelines might suggest that manipulation within a photo should always result in a retraction, no matter what excuse the authors might come up with. As long as publishers and editors do not carefully screen papers before publication, there is still a need for independent image or data experts to screen papers after publication. There are more and more people making use of PubPeer or social media to describe papers of concern, and journals should be more aware of what is being discussed on those platforms.
How much time does it generally take for journals to retract papers once the problems with images are brought to their notice?
The 782 papers in our dataset were all reported about five years ago. As of today, most of the papers have not been corrected; only a small fraction has been retracted. But there appears to be a trend towards faster response times, perhaps under the influence of social media discussions or a shift in general opinion that these cases should be handled faster.
When would you call a mistake honest and when deliberate? What tell-tale signs should be used to differentiate honest mistakes from deliberate attempts to cheat?
It is really hard to make that distinction. Almost all authors will claim that their duplicated images, even the ones with signs of image manipulation, are the result of honest errors. But we can make some predictions. In our mBio paper, we distinguished three types of image duplications. The first one, a simple duplication, is most likely to be an honest error. The second type, rotated, mirrored, or overlapping images, is less likely to be an honest error. The third type of duplication, manipulation or duplication of features within the same photo, is almost certainly intentional.
Will reusing the same image in more than one paper, especially when not clearly stated in the paper and referenced, constitute deliberate duplication with intent to cheat? Will it call for retraction or will correction suffice?
That might depend. I could imagine that in a chaotic lab with sloppy bookkeeping, a photo could be reused by accident. However, if that same image is flipped, altered, used to represent a different experiment, or reused more than once, this suggests poor lab practices at best, or intention-to-mislead at worst. In such cases, all other experiments described in that paper become questionable, and a retraction might be a possible action.
When the same image of a control is reused in the same paper without this being clearly stated, can it still be called an honest mistake?
If it represents the same experiment, then that would usually be OK. It would actually not even be an error. But it is always worth pointing it out in the legend.
When the image is reused to describe different experiments or conditions, will it call for retraction? Will providing fresh images suffice? How will the journal editors know if the fresh images supplied are indeed genuine and represent the experiment mentioned in the paper?
It could still be an honest error if the same image is used unaltered, i.e. not flipped or rotated. That could be fixed with a simple erratum or correction. There is no way of knowing if the fresh image is correct. Any image could be a fake that does not represent the experiment it is supposed to show.
What kind of image mistakes in Western blot would pass for honest mistakes, which ones would need just a correction and which ones would require retraction?
An honest mistake could be the same panel being used twice to represent different proteins; this might need a correction or erratum. A shifted or rotated panel could be cause for retraction, especially if there are multiple other problems in the paper. A duplicated band or duplicated lane would, in my personal opinion, call for a retraction.
Do you see journals being more willing to correct the papers than retract them?
Yes, the results from our dataset of almost 800 cases show that corrections are applied more often than retractions, although two thirds of the cases have not been addressed at all. Just as universities have potential conflicts of interest when it comes to investigating their own researchers accused of misconduct, journals have conflicts of interest in investigating these cases as well. A correction might be considered a good option by a journal because it saves everyone, authors as well as editors and reviewers, the perceived embarrassment of a retraction. It also keeps the paper citable, so the impact factor of the journal will not be affected.
How much do you think the PubPeer website and independent researchers have contributed in exposing papers with questionable images and getting them corrected/retracted?
A lot. PubPeer allows researchers to post anonymously, without recording IP addresses, making it a safe place to comment on papers. Almost all remarks about problematic images on PubPeer appear to be correct, and there is an active community who will comment if they do not agree. PubPeer also allows users to alert the authors through the corresponding author’s email address, and authors are always welcome to reply to the questions or concerns raised.
Are you seeing more researchers becoming active on PubPeer in exposing papers with problematic images?
That is hard to tell. Almost all people comment anonymously, and people might have multiple accounts, so I am not sure. But the site is regularly mentioned in stories about science misconduct, so we can assume that more and more people are becoming familiar with it.
Does reviewing and identifying problems in images on PubPeer force researchers/journals to correct or retract papers? How much success have you personally achieved through PubPeer?
No, no one on PubPeer can force people to retract or correct papers; that power lies with journal editors or university officials. But a social media outcry surely puts more pressure on journals and universities to correct a paper. I am not sure if posting on PubPeer has helped get the papers I found problems in retracted or corrected more quickly, but Paul Brookes has found that public posts on problematic papers increase corrective action.
Are you working with any software developers or others to develop a system to automatically identify duplicated and manipulated images?
About two years ago, as part of the DARPA Media Forensics Challenge, I shared with several academic groups a subset of the papers included in our mBio study, together with a publishing-month- and journal-matched dataset of “clean” papers. So far, I have not heard back from most of them. It seems that finding these duplications, which are often not pixel-level perfect, is harder than most people would assume. But I am confident that this is only a matter of time.
Which country has a robust system to handle research integrity issues? Should India have one large office to handle research integrity issues or should it be at the level of institutions or both?
No country, as far as I know. I am most familiar with the system in the U.S., where universities and other research institutions usually appoint a research integrity officer who educates research staff about integrity issues and to whom allegations of misconduct can be directed. Although this is great, universities often have large conflicts of interest in investigating misconduct cases. Such cases might be swept under the table, in particular if the accused researcher brings in large amounts of grant money or holds a key position. The U.S. also has the Office of Research Integrity (ORI), which oversees federal research and can help institutions with integrity cases. However, the ORI has been criticised for not acting often or toughly enough. Still, a state-wide or country-wide office sounds like a better option than just relying on institutions to handle their own cases.