SICOT e-Newsletter

Issue No. 53 - February 2013

Editorial

The Published Paper – Is it sacred anymore?

"Publish or Perish"

"...nearly 55% of articles published are never cited..."

"If you steal from one author, it is plagiarism; if you steal from many, it is research"

"...peer review cannot be a goal keeper of either scientific quality or integrity."

Shanmuganathan Rajasekaran
SICOT Treasurer - Coimbatore, India

The sacred phrase in medical practice today is ‘evidence-based medicine’. To the scientific community, ‘evidence’ comes in the form of a paper published in a peer-reviewed journal or a podium presentation abstract at an international meeting. The practicing surgeon bases his or her day-to-day clinical decisions on published papers. The published paper is also the basis on which academic debates are settled, new trends in practice are accepted and even government policies on health issues are made. It is also the basis on which appointments to academic positions are made, promotions are decided and major research awards are granted. In fact, the number of published papers has become the measure by which the scientific worth of an individual or an institution is judged. The phrase 'Publish or Perish', coined by H.J. Coolidge in 1932 [1], has now attained dictatorial proportions, as one's ability to publish continuously has become the main criterion for survival and progress in the academic world. It is no wonder, then, that there has been an avalanche of scientific publications over the last two decades. In 2006 the landmark of 50 million scholarly publications in peer-reviewed journals was crossed, of which approximately 1.3 million appeared in that year alone [2]. This impressive milestone calls for introspection and an evaluation of the usefulness of all this scientific activity.

Explosion of science

The continuous increase in published papers raises the question of whether we are witnessing an explosion of scientific knowledge or simply wasteful publishing. The true worth of a paper can be gauged by how often it is cited. It appears that most papers are published for publishing's sake, without much scientific worth, as nearly 55% of articles published are never cited even once in the first five years after publication [3]. Of the remainder, between 5% and 25% of all citations are self-citations [4]. Many researchers inflate their output through dubious practices such as 'salami slicing', in which a single research activity is split into as many separate publications as possible. The practice of publishing the same material in different journals and presenting it from different podiums is still prevalent, but it can hardly be detected because the work appears with different captions, keywords and variations of co-authors. Perhaps more than half of all publishing serves purely to pad the author's curriculum vitae, without being of any consequence to science.

Questionable practices and research fraud

Fraudulent scientific behaviour and falsification of data can range from simple carelessness and unintentional bias to intentional adjustment of data to improve results, plagiarism and outright fraud. Scandals of plagiarism continue to rock the scientific community, as many authors appear to believe in the principle 'If you steal from one author, it is plagiarism; if you steal from many, it is research' [5]. Adjusting data to suit the desired results is also, unfortunately, more prevalent than commonly thought. In a sample of postdoctoral fellows at the University of California, San Francisco, 3-4% admitted to having modified data in the past and 17% would be willing to select or omit data to improve their results. In another study, 81% were willing to omit or falsify data to win a big grant or publish a coveted paper. In yet another, 33.7% admitted to various questionable research practices and even to serious forms of research misconduct [6]. This shows that the hazy boundary between right and wrong in the pursuit of a desired outcome is being crossed by an increasing number of researchers, and this is a major cause for concern.

The problem may be even greater in the scientific material presented at conferences, where the rigour of peer review is considerably lower. Such material is published as scientific abstracts in many journals and thereby indirectly acquires the official sanctity of a scientific publication.

Peer-review – A failing process?

The peer review process, on whose shoulders the responsibility for maintaining the standards of publication rests, is unfortunately breaking down in its efforts to maintain scientific integrity. Blinded reviewers obviously cannot police the honesty and good research practices of authors. The increasing number of retractions of published papers stands testimony to this: in the past decade retractions have increased more than 15-fold, and they include articles in the most coveted journals, such as Science and Nature [7]. The ever increasing number of journals has put the entire peer review process under strain, as there are now too many journals chasing too few reviewers. Many reviewers lack the time, the expertise or the training to provide quality reviews, and the quality of reviews suffers as a result. Studies have exposed poor agreement between different reviewers on the quality of the same manuscript [8]; often, a manuscript assessed as excellent by one reviewer is rejected for publication by another. Manuscripts rejected by one journal can simply be published elsewhere, as the ever growing number of journals is chasing material to publish [9]. We now face a situation in which the current system of peer review cannot act as goalkeeper of either scientific quality or integrity.

Future – Is there a solution?

The present scenario is shaking the very foundation of evidence-based medicine. It is time for the elders of the profession, administrators of institutions and international organizations like SICOT to take serious note of this trend and look for solutions. The current system of rewards must change so that the focus is on publishing high-quality research rather than on playing the numbers game. Scientists should be evaluated on the basis of their best five papers rather than on their total number of publications; this would encourage high-quality research and focused publishing. One also has to question the need for the ever increasing number of journals, which struggle to find adequate submissions and quality reviewers. We must, in addition, inculcate good research ethics and sound methodology in our younger colleagues, and this is best done by elders teaching by example. Developing young investigators who appreciate good research ethics and sound methodology, and who take pride in good research practices, will of course provide a lasting solution.

   
References:

  1. Coolidge HJ ed. Archibald Cary Coolidge: Life and Letters. Books for Libraries: United States, 1932, p 308.
  2. Abbott A, Cyranoski D, Jones N, et al. Do metrics matter? Nature 2010;465:860-862.
  3. Bauerlein M, Gad-el-Hak M, Grody W, McKelvey B, Trimble S. We must stop the avalanche of low-quality research. The Chronicle of Higher Education, June 13 2010. http://chronicle.com/article/We-Must-Stop-the-Avalanche-of/65890/ (date last accessed 20 June 2012).
  4. Thomson Reuters. Journal self-citation in the journal citation reports. http://thomsonreuters.com/products_services/science/free/essays/journal_self_citation_jcr/ (date last accessed 25 June 2012).
  5. No authors listed. Wilson Mizner personal quotes. http://www.imdb.com/name/nm0594594/bio#quotes (date last accessed 20 June 2012).
  6. Fanelli D. How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS One 2009;4:e5738.
  7. No authors listed. Retraction watch. http://retractionwatch.wordpress.com/ (date last accessed 15 June 2012).
  8. Wilson JD. Peer review and publication. J Clin Invest 1978;61:1697-1701.
  9. Siegelman SS. Assassins and zealots: variations in peer review: special report. Radiology 1991;178:637-642.
