IJS Short Reports
Year : 2016  |  Volume : 1  |  Issue : 1  |  Page : 1-2

The value of publishing negative studies: Introducing IJS Short Reports

Department of Plastic Surgery, Guy's and St. Thomas' NHS Foundation Trust, London and Balliol College, University of Oxford, Oxford, UK

Date of Web Publication: 12-Jul-2016

Correspondence Address:
Riaz A Agha
Department of Plastic Surgery, Guy's and St. Thomas' NHS Foundation Trust, London and Balliol College, University of Oxford, Oxford

Source of Support: None, Conflict of Interest: None

DOI: 10.4103/2468-7332.186170


How to cite this article:
Agha RA. The value of publishing negative studies: Introducing IJS Short Reports. IJS Short Rep 2016;1:1-2


An estimated US$240 billion is spent annually on health research. [1] Given such investment, it is important to get it right, yet an estimated 85% of research resources are wasted. [1] In recent times, transparency has become a major part of the discourse in health-care research. The principle of shedding light on problems and exposing them to the sun is a good one; open access, open science, open data, and reporting guidelines have all been born from it. However, publication bias remains a major problem within health-care research. Dwan et al. conducted a systematic review of publication and outcome reporting bias [2] and found direct empirical evidence for the existence of both study publication bias and outcome reporting bias, with strong evidence that positive or statistically significant results were more likely to be published.

In surgery, Chapman et al. found that one in five surgical randomized controlled trials is discontinued early and one in three remains unpublished two years after its conclusion. [3] They identified negative findings as a barrier to publication and appealed for journals to publish them, concluding that public trust in research will be badly damaged if we continue to mothball negative studies, and that recruitment will become an even greater challenge. [3]

Ross et al. showed that only 42% of trials reported as concluded in 2005 had actually been published more than two years later. [4] They concluded that "The scientific community should be prioritizing the timely and accurate publication and dissemination of clinical trial results, regardless of the strength and direction of the trial results." This is a challenge to the behavior clinicians and researchers exhibit when editors express their bias through their decisions, favoring positive studies. Indeed, while negative studies go unpublished, positive studies sometimes get published twice. [5]

We must also acknowledge the lessons lost, and the learning never disseminated to the wider research community, when negative studies go unpublished. These valuable experiences should be added to the ever-expanding corpus of knowledge in our field. Their loss may help to explain why reproducibility is poor within science in general: Prinz et al. were able to replicate the published results in just a quarter of 67 seminal scientific studies. [6] It is no wonder that some have concluded that "most published research findings are probably false." [7]

The surgical community needs to know what works and what does not in order to provide better patient care, drive research in the correct direction, aid collaboration, and prevent duplication and wasted resources. Other issues our community grapples with include underpowered studies, poor statistical methods, poor reproducibility and external validity, and poor methodology and reporting of studies. Publishing negative results shifts the focus away from the results themselves and toward the research question, the hypothesis, and the robustness of the methodology used to investigate them. Such studies are all too often rejected by journals because of the direction of their results rather than the quality of their methodology and data or the contextual significance of the research questions they answer.

IJS Short Reports is the first peer-reviewed, international, open access journal seeking to publish "negative" studies across the full breadth of the surgical field. We consider all forms of original research, from case series to trials, as well as nonconfirmatory, unexpected, controversial, and provocative results. We do not usually consider surgical case reports; these should be sent to our sister journal IJS Case Reports (www.casereports.com). We aim to provide rapid submission-to-decision times while maintaining a high-quality peer-review process that focuses on the quality of the article and the transparency of the reporting rather than the magnitude and direction of the results.

References

1. Macleod MR, Michie S, Roberts I, Dirnagl U, Chalmers I, Ioannidis JP, et al. Biomedical research: Increasing value, reducing waste. Lancet 2014;383:101-4.
2. Dwan K, Gamble C, Williamson PR, Kirkham JJ; Reporting Bias Group. Systematic review of the empirical evidence of study publication bias and outcome reporting bias - An updated review. PLoS One 2013;8:e66844.
3. Chapman SJ, Shelton B, Mahmood H, Fitzgerald JE, Harrison EM, Bhangu A. Discontinuation and non-publication of surgical randomised controlled trials: Observational study. BMJ 2014;349:g6870.
4. Ross JS, Mulvey GK, Hines EM, Nissen SE, Krumholz HM. Trial publication after registration in ClinicalTrials.Gov: A cross-sectional analysis. PLoS Med 2009;6:e1000144.
5. Tramèr MR, Reynolds DJ, Moore RA, McQuay HJ. Impact of covert duplicate publication on meta-analysis: A case study. BMJ 1997;315:635-40.
6. Prinz F, Schlange T, Asadullah K. Believe it or not: How much can we rely on published data on potential drug targets? Nat Rev Drug Discov 2011;10:712.
7. Ioannidis JP. Why most published research findings are false. PLoS Med 2005;2:e124.
