It is easy for me to spot most image duplications: Elisabeth Bik


Elisabeth Bik has a remarkable natural ability to identify both image duplication and image manipulation.

Dr. Elisabeth Bik has a remarkable natural ability to identify both image duplication and image manipulation through visual screening. She, along with others, found questionable image duplications in up to 10% of papers in a single journal, and in 3.8% of papers across 40 journals, in 2016. A few days ago she found an alarming 27 instances of image duplication among 171 papers scanned (15.8%). In a series of tweets she has also shared questionable images to raise awareness about image duplication and to help train people to spot it. In response to my questions, she dives deep into the menace of image duplication in the scientific literature.

In pre-print findings reported in bioRxiv, Dr. Elisabeth M. Bik, first author and Scientific Editorial Director, was able to spot “inappropriately duplicated images” in 59 (6.1%) of 960 papers published in the journal Molecular and Cellular Biology between 2009 and 2016. While the duplications in 42 papers were honest mistakes arising during figure preparation and were corrected, five papers (nearly 10%) were retracted as they either had manipulated images or too many problems to be fixed with a simple correction. In the case of 12 papers, no action was taken, for various reasons.

In another paper, published in June 2016 in the journal mBio, Dr. Bik and others looked for duplication of Western blot images alone in over 20,500 papers published in 40 different journals from 1995 to 2014. As many as 782 (3.8%) of the papers contained at least one figure with a duplicated image, and about half of these were suggestive of deliberate manipulation.

“In my free time, I scan scientific literature for problematic images,” she had tweeted. And true to that, on July 8, 2018, she found 27 instances of duplicated images among 171 papers in just one journal. At 15.8%, that is much higher than what she had reported earlier. She tweeted many examples of duplicated images and asked her followers to spot them, thus raising awareness and training them to identify problematic images.

You tweeted saying 27 of 171 papers (15.8%) from a single journal had problematic image duplications. Is the percentage high compared with your latest paper in bioRxiv and the 2016 paper?

This is a high percentage — in our 2016 paper, we found rates between 0.30 and 10.42%, with an average rate of around 4%. The journal I was tweeting about was not part of the 2016 paper, and has a higher percentage than other journals I looked at.

Which kind of image contains the most duplications — or at least, where have you seen the most duplications — western blot or microscope images?

Hard to say — western blots are the most common type I see in papers with photographic images, and those are also the ones where I seem to find the most, but I am not sure if the percentages are different. I have not measured that, but I would say they have about the same rate.

With respect to microscopic images, which is the most common type of duplication you see?

Simple duplications (same magnification, same orientation) are the most common, and they might just be simple errors, caused by selecting the same image twice instead of picking the correct one. After that, overlapping images are very common as well. Rotated or manipulated images (in which the same cells are visible multiple times) are less common, but also harder to spot.

Which kind of duplication is easier for you to spot — western blot or microscopic images? 

I don’t know how many duplications I miss, so I cannot tell which ones I am better at, but I enjoy looking at microscopic images more than comparing western blots.

In a series of tweets recently, you are seen testing the ability of your followers to spot problems in a given set of images. What was the intention behind doing this? Was it to train people to identify problematic images, or were you just having some fun while doing this tedious job?

My main purpose was to let people know that these problems exist. Raising awareness among people who perform peer review will help find these images before publication. Lots of scientists had no idea that these duplications are present in scientific papers, so raising awareness is needed. But I was also having a bit of fun seeing if other people could spot them as well.

How easily do you find duplications in a given set of images, and do some really challenge you?

Most are easy to spot for me, but apparently not for others. All published papers have gone through peer review and editorial handling, and papers I am scanning have been published months or years ago, so there were several opportunities for others to see them.  But I am also not perfect, so there are probably duplications that I did not find.

When you find the authors committing the same kind of duplication in different papers, do you still think these are honest mistakes?

It could suggest either intention-to-mislead, or just a sloppy lab that is not careful about keeping track of experiments in a paper or electronic lab notebook. It could be a graduate student desperate to obtain results, and/or a powerful professor who wants to see results. In any case, those repeat offenders suggest a lab where positive results and quantity are more important than the quality of experiments.

Have you got any positive response from journals where you had spotted such duplicated images, and have they taken any steps to address this problem?

Yes, some journals have been a pleasure to work with. In particular, the ASM journals, including Molecular and Cellular Biology, mBio, and Infection and Immunity, were great. That is why we formed a team and published several studies about these problems. All of these journals now include pre-publication editorial screening to catch these problems before papers get published.

How do you think your enthusiasm for spotting duplicated images is going to change scientific publishing? Will the respective authors become more careful, and/or will journals put in place measures to spot them before publication?

There is no good software yet on the market to automatically screen images for duplications, but several journals and publishers use human eyes for initial screening, and subsequently software to help confirm irregularities in background noise or similarities using false-colour imaging.
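To give a flavour of what automated pre-screening could look like, the toy sketch below computes a simple "average hash" for two grayscale images and compares them bit by bit — a small Hamming distance flags the pair as a candidate duplicate for a human screener to inspect. This is purely an illustrative sketch, not Dr. Bik's method or any journal's actual pipeline; the function names and the synthetic test images are invented for this example.

```python
import random

def average_hash(pixels, size=8):
    """Downscale a 2D grayscale image (list of rows of 0-255 ints) to a
    size x size grid by block-averaging, then emit a 1 bit for every cell
    brighter than the overall mean."""
    h, w = len(pixels), len(pixels[0])
    cells = []
    for i in range(size):
        for j in range(size):
            block = [pixels[y][x]
                     for y in range(i * h // size, (i + 1) * h // size)
                     for x in range(j * w // size, (j + 1) * w // size)]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    return [1 if c > mean else 0 for c in cells]

def hamming(hash_a, hash_b):
    """Count the bits that differ between two hashes."""
    return sum(a != b for a, b in zip(hash_a, hash_b))

# Two synthetic 16x16 "images": identical apart from mild random noise,
# standing in for a duplicated figure panel that was slightly re-processed.
random.seed(0)
img_a = [[(x * 16 + y * 3) % 256 for x in range(16)] for y in range(16)]
img_b = [[v + random.randint(-5, 5) for v in row] for row in img_a]

distance = hamming(average_hash(img_a), average_hash(img_b))
print(distance)  # a small distance -> flag the pair for human review
```

Real screening tools would of course work on published figure files and be far more robust — handling rotation, cropping, and contrast changes — which is exactly why, as Dr. Bik notes, trained human eyes still do the initial pass.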

Are you aware of PLOS ONE, the journal where you found the largest number of such duplicated images, doing anything to address this issue after your paper was published in 2016?

They have sent me some sporadic updates. As of now, I have heard of 11 retractions and 36 corrections in the set of 382 papers I reported in 2014 and 2015.  I assume for now they are still working on the remaining cases, but it has been frustrating to experience that progress and updates have been slow.
