When rewards prove counterproductive

Published in The Hindu on September 7, 2006

Photo caption: A researcher measuring the value of science in terms of dollars might be more tempted to fabricate data for publication in a journal. — Photo: Flickr.com

Publish or perish. Researchers around the world are only too aware of this unwritten compulsion to get their work published in journals. Incidentally, the dictum does not fully reflect reality. To be more precise, it should be: publish in reputed peer-reviewed international journals, such as Nature, Science and the like, or perish. Introduced as a way of measuring researchers’ performance, the compulsion has turned out to be one of the most flawed ways of assessing it.

Wrong direction

Some countries have now gone one step further, and in the wrong direction, to assess and reward researchers’ performance. According to Nature.com, South Korea would be rewarding researchers $3,000 for publishing papers in ‘elite’ journals.

The reward is restricted to the first author and the corresponding author only. It was just last year that South Korea was in the news for all the wrong reasons: Hwang Woo Suk, a stem cell researcher at Seoul National University, was found to have fabricated data for the two ‘landmark’ papers he had published in Science.

One of the reasons cited even then for Hwang taking the wrong path was the compulsion to excel and stay far ahead of others, particularly those from the developed countries. South Korea has been trying every strategy to promote science and become the front-runner in many areas, particularly the biological sciences.

If rewarding researchers with appointments, more research grants, promotions and other incentives based on the number of papers published, particularly in reputed international journals, does not always reflect the quality of the research, then rewarding them with money, as South Korea, China and Pakistan have resorted to, only makes matters worse.

But will providing a cash incentive not spur high-quality research and get more people to take their work seriously? Not always. A ‘cash per paper published’ strategy to spur good science will more often than not prove counterproductive. First, the pressure to publish in certain journals can be one more reason for researchers to resort to unethical and totally condemnable practices.

“A researcher measuring science in terms of dollars might be more tempted to plagiarise or fabricate data,” noted an editorial in Nature. It becomes all the more disturbing as data fabrication and papers produced by fraudulent means are ever increasing. Sadly, journals are finding themselves ill-equipped to detect such ‘research’ work. “… Even unusually rigorous peer review of the kind we undertook in this case [Hwang’s] may fail to detect cases of well-constructed fraud,” Donald Kennedy, Editor-in-Chief of Science, had written in the journal following the Hwang episode.

The temptation to undertake fraudulent research, though, is just one of the ills of such a reward system. The battle over who should be the first author and the corresponding author is going to become fiercer. Very often, the junior researcher who does the major part of the work will be short-changed. And where publication in select journals is the key, the kinds of papers and research areas preferred by those journals will dictate the nature of work a researcher undertakes. That, in short, is a great disservice to science.

Impact factor

The importance of a journal is measured by its ‘impact factor.’ The impact factor, a grading system that yields a single number meant to indicate which journals are better, is fraught with inadequacies and is not a true reflection of a paper’s importance. It is arrived at by totalling the number of citations (references by other articles to articles published in that journal) received in, say, 2005 to all articles published in the previous two years (2003 and 2004).

These citations could appear in papers published either in the same journal or in other journals. The total number of citations is then divided by the number of articles published in those two years to arrive at the impact factor. To make matters worse, citations to editorials, news and views pieces and letters, to name a few, are included in the numerator, but those items are not counted in the denominator.
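As a rough illustration of this arithmetic, here is a minimal sketch in Python. The journal and all the figures are invented for the example; it simply shows how counting citations to front-matter items in the numerator, while excluding those items from the denominator, inflates the final number.

```python
# Hypothetical 2005 impact-factor calculation for an imaginary journal.
# All figures are invented for illustration.

# Citations received in 2005 to items the journal published in 2003-04
citations_to_articles = 900      # citations to research articles, notes and reviews
citations_to_front_matter = 300  # citations to editorials, news & views, letters, etc.

# Only 'citable items' (articles, notes, reviews) enter the denominator
citable_items_2003_04 = 400

# The numerator counts every citation, including those to front matter
impact_factor_2005 = (citations_to_articles + citations_to_front_matter) / citable_items_2003_04
print(impact_factor_2005)  # 3.0, versus 2.25 if front-matter citations were excluded
```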

Simply put, all citations to the journal are added up but divided only by the number of research articles, notes and reviews it published. So is the rule of thumb that the higher a journal’s impact factor, the better the papers published in it, correct? Ironically, it holds true only in some cases. An editorial published in Nature in June last year revealed why. “… 89 per cent of last year’s [2004] figure was generated by just 25 per cent of our papers,” it noted, and “the great majority of our papers received fewer than 20 citations,” revealing how misleading impact factors can be in judging the importance of papers published in high-impact journals.

As if these figures did not speak for themselves, the editorial stressed further: “these figures all reflect just how strongly the impact factor is influenced by a small minority of papers,” and “impact factors don’t tell us as much as some people think about the respective quality of the science that journals are publishing.”

The impact factor should at best be used to gauge a journal’s standing, not the quality of the individual papers published in it. That is one more reason why South Korea and other countries that have decided to reward researchers based on papers published in high-impact-factor journals have done science a great disservice.

Chances of rejection

With a journal’s standing determined by its impact factor, journals very often tend to reject papers that are less likely to attract many citations, noted an article published in The Chronicle (October 2005). According to The Chronicle, Fiona Godlee, editor of the British Medical Journal (BMJ), agreed that impact factors were taken into account while accepting articles.

“It would be hard to imagine that editors don’t do that. That’s part of the way that impact factors are subverting the scientific process,” she was quoted as saying. Under tremendous pressure to keep the impact factor high, journals vie with one another to publish ‘great’ papers. Could that have been one of the reasons why Science failed to detect the data fabrication by Hwang?
