Impact Factor of Published Clinical Trials in the Field of Pediatric Infectious Diseases
1Department of Pediatrics, Hillel Yaffe Medical Center, Hadera, Israel
2Faculty of Medicine, Technion – Israel Institute of Technology, Haifa, Israel
3Dana Duek Children’s Hospital, Tel-Aviv Medical Center, Tel-Aviv, Israel
4Sackler Faculty of Medicine, Tel-Aviv University, Tel-Aviv, Israel
Impact Factor (IF) is used for evaluating journals; it represents a measure of the average frequency with which an article in a specific journal is cited by other articles in a given period of time. In a previous report we showed that in Neonatology, clinical trials published between 1998 and 2003 were more likely to be published in journals with lower IF when they reported negative results (NR) rather than positive results (PR). This study aimed to determine which biases exist in clinical trials in the field of Pediatric Infectious Diseases; we tested the effect of 5 factors on the likelihood of an article being published in a high vs. low IF journal: NR vs. PR, sample size, study design (prospective, randomized, double-blinded), funding source, and originating region of the report. We selected articles of clinical trials in the field of Pediatric Infectious Diseases registered in MEDLINE from 2007 to 2011. We recorded the aforementioned factors and the IF of each journal, corresponding to the publication year. Trends over time and the differences between studies with NR and PR were examined. IF and sample size were not significantly higher in PR vs. NR studies. Conversely, the aforementioned study design elements produced publications in journals of significantly higher IF. IF increased, in ascending order, across the following funding source categories: (i) no source stated; (ii) pharmaceutical company; (iii) non-US competitive grant; (iv) US national agency (non-NIH); (v) NIH. Pediatric Infectious Diseases articles with NR vs. PR are not more likely to be published in journals with lower IF. Also, no apparent relationship exists between sample size and IF. Factors associated with the quality of the study, namely design and source of funding, may be more related to the IF of the journal than the type of results reported therein.
Keywords: impact factor, negative results, positive results, clinical trials, Pediatric infectious diseases
American Journal of Epidemiology and Infectious Disease, 2014, 2(2), 60-62.
Received December 01, 2013; Revised March 30, 2014; Accepted March 31, 2014. Copyright © 2013 Science and Education Publishing. All Rights Reserved.
Cite this article:
- Klein-Kremer, Adi, Francis B. Mimouni, and Ronnie Stein. "Impact Factor of Published Clinical Trials in the Field of Pediatric Infectious Diseases." American Journal of Epidemiology and Infectious Disease 2.2 (2014): 60-62.
- Klein-Kremer, A. , Mimouni, F. B. , & Stein, R. (2014). Impact Factor of Published Clinical Trials in the Field of Pediatric Infectious Diseases. American Journal of Epidemiology and Infectious Disease, 2(2), 60-62.
- Klein-Kremer, Adi, Francis B. Mimouni, and Ronnie Stein. "Impact Factor of Published Clinical Trials in the Field of Pediatric Infectious Diseases." American Journal of Epidemiology and Infectious Disease 2, no. 2 (2014): 60-62.
A medical journal's Impact Factor (IF) is published in the Journal Citation Report (JCR), a product of Thomson ISI (Institute for Scientific Information). The JCR provides quantitative tools for evaluating journals, of which the IF is considered reliable and consequential. The IF represents a measure of the average frequency with which an article in a specific journal is cited by other articles in a given period of time; accordingly, a high IF is well regarded academically. The IF has obvious limitations, and it has been emphasized that it should not be misused as an automatic quality indicator of published manuscripts. Moreover, the IF may have an especially considerable influence on medical research publications, since authors might submit an article to a given journal chiefly based on its IF.
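As a point of reference, the standard two-year JCR impact factor for a journal in year $Y$ is the ratio of citations received in that year to the items the journal published in the two preceding years:

```latex
\mathrm{IF}_{Y} \;=\; \frac{\text{citations received in } Y \text{ to items published in } Y-1 \text{ and } Y-2}{\text{number of citable items published in } Y-1 \text{ and } Y-2}
```

For example, a journal that published 200 citable items across 2009 and 2010, and whose items from those two years received 500 citations during 2011, would have a 2011 IF of 500/200 = 2.5.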
In a previous report, we showed that in Neonatology, a specific field of Pediatrics, clinical trials published between 1998 and 2003 were more likely to be published in journals with lower IF when they reported negative results (NR) than when they reported positive results (PR).
The aim of this study was to determine whether the aforementioned bias also exists in the recent literature reporting clinical trials in the field of Pediatric Infectious Diseases. We hypothesized that articles with higher IFs are associated with (i) PR rather than NR; (ii) larger sample sizes; (iii) prospective, randomized, controlled clinical trials; and (iv) the study being funded as well as the funding source.
PubMed is a service of the National Library of Medicine that provides access to over 12 million Medline citations back to the mid-1960s, as well as additional life science journals. We studied only the publications reported in PubMed. For this purpose, we used the following Internet address: http://www.ncbi.nlm.nih.gov/entrez to evaluate all PubMed articles registered from January 1st, 2007, to December 31st, 2011. We selected all clinical trials in the field of Pediatric Infectious Diseases. To do so, we used the key words "infectious diseases" and, using PubMed’s own Limits engine, restricted the search to '0-18 years' (all children), to humans only, and to publications written in English. We repeated the search and analyzed its results year by year, according to the total number of clinical trials (CTs) per year, for the 5 years of the specified period (2007-2011). We used PubMed’s own classification of CTs.
We verified that the categorization and tagging offered automatically by PubMed were accurate by examining all retrieved CTs. For each CT, we classified the study as having PR or NR, based upon the presence of a significant difference between groups, while taking into account only the primary outcome (efficacy) and not the secondary outcomes (such as adverse effects). To ensure consistency, only one researcher (RS) reviewed the articles. The IF of each journal was determined for the year of article publication, based upon the Thomson Reuters (ISI) Web of Knowledge.
We recorded the sample size of each study as a potential confounder and determined where the study had been conducted and whether its design was prospective, randomized, or double-blinded. We gave an arbitrary classification of zero as an impact factor when a journal was not included in the citation index. We also recorded, whenever it was stated, the origin of the funding for the study, namely whether it was an NIH fund, another US National Agency, a non-US competitive grant, or a pharmaceutical company.
Minitab version 16 (Minitab Inc., State College, PA, USA) was used for statistical analyses. The non-parametric Spearman rank correlation was used to study trends over time; the Kruskal–Wallis and chi-square tests were used to study differences between studies with PR and studies with NR, as appropriate. Data are reported as mean ± SD, n (%), or, for non-normally distributed variables, as median (95% confidence interval). P < 0.05 was considered statistically significant.
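The analyses above were run in Minitab, which we do not have access to here. As an illustrative sketch only, the same three tests can be expressed with SciPy; all variable names and values below are hypothetical and are not the study's data:

```python
# Sketch of the three statistical tests named in the Methods, on made-up data.
from scipy import stats

# Hypothetical journal IFs by publication year (Spearman rank correlation
# as a non-parametric trend-over-time test)
years = [2007, 2008, 2009, 2010, 2011, 2007, 2009, 2011]
ifs = [1.2, 2.0, 2.5, 3.1, 3.4, 1.8, 2.2, 3.0]
rho, p_trend = stats.spearmanr(years, ifs)

# Hypothetical IFs for positive-result (PR) vs negative-result (NR) studies
# (Kruskal-Wallis for a continuous, non-normal outcome)
if_pr = [2.1, 3.4, 1.8, 2.9, 4.0]
if_nr = [1.9, 3.0, 2.2, 2.7]
h_stat, p_kw = stats.kruskal(if_pr, if_nr)

# Hypothetical 2x2 counts of a design element vs result type
# (chi-square test for categorical comparisons)
table = [[30, 25],   # randomized:     PR, NR
         [45, 71]]   # not randomized: PR, NR
chi2, p_chi, dof, _expected = stats.chi2_contingency(table)

print(rho, p_trend, p_kw, p_chi)
```

The choice of non-parametric tests matches the paper's note that IF was not normally distributed (medians with confidence intervals are reported rather than means).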
We identified 171 CTs that were published during 2007-2011. Over these 5 years, the yearly number of randomized clinical trials (RCTs) varied from a minimum of 28 to a maximum of 42, with no significant consistent linear increase over the years. There was a statistically significant, yet small, change over the years in the average IF (IF = 1.12 × (publication year) − 2235; R² = 0.028, P = 0.029; Figure 1).
The primary outcome was identified in all articles. Table 1 depicts the IF and sample size of studies with PR and NR. The IF was not significantly higher in studies reporting PR compared to NR. Similarly, the sample size was not significantly higher in studies with PR than in studies with NR. The post-hoc power of this analysis was calculated to be 70%.
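The paper does not state how its 70% post-hoc power figure was computed. A minimal sketch of one common approach, a two-sided, two-sample normal-approximation power calculation, is shown below; the effect size and the PR/NR split of the 171 trials are assumptions chosen purely for illustration:

```python
# Sketch of a post-hoc power calculation (normal approximation); the
# effect size (Cohen's d = 0.4) and group sizes are hypothetical.
from math import sqrt
from scipy.stats import norm

def posthoc_power(effect_size, n1, n2, alpha=0.05):
    """Approximate power of a two-sided, two-sample comparison of means."""
    se_factor = sqrt(1 / n1 + 1 / n2)          # SE multiplier for the mean difference
    z_crit = norm.ppf(1 - alpha / 2)           # two-sided critical value
    z_effect = effect_size / se_factor         # standardized detectable shift
    # Probability of rejecting H0 in either tail under the alternative
    return norm.cdf(z_effect - z_crit) + norm.cdf(-z_effect - z_crit)

# Hypothetical split of the 171 trials into PR and NR groups
print(round(posthoc_power(0.4, 100, 71), 2))
```

With these assumed inputs the approximation lands near the 70% the paper reports, but the actual inputs used by the authors are not given in the text.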
Table 1. Impact factor (IF) and sample size of studies with positive results (PR) and negative results (NR)
Table 2 depicts the influence of the study design elements, namely whether the CT was prospective, randomized, and double-blind, on the IF of the journal in which the study was published. All these elements were associated with a significantly higher IF.
Table 3 depicts the median and range of IF depending on the source of funding. IF was highest in studies funded by the NIH, and, in descending order, lower in studies funded by other US national agency grants, non-US competitive grants, and pharmaceutical company grants, and lowest in studies where no source of funding was stated (P < 0.001).
In this study, we found that contrary to our first hypothesis, there was no apparent difference in the IF of studies with PR and those with NR. This lack of significant difference is not likely to be attributed to a major type 2 error, as this study is based on a relatively large sample size, with a calculated post-hoc power of 70%. We therefore conclude that in the field of Pediatric Infectious Diseases, articles reporting NR are no more likely to be published in journals with lower IF than articles reporting PR; thus, in the past few years, the publication bias observed in our previous study does not exist or is not prominent in this particular field of Pediatrics.
Contrary to our second hypothesis, there was no apparent relationship between the sample size of a given CT and the IF of the journal in which the article was published. This is consistent with the fact that sample size did not differ significantly between PR and NR studies, and that the two types of studies were published in journals of comparable IFs. In contrast, an important finding was that the design of a given study, rather than the type of the reported results, was consequential for its acceptance in a journal with higher IF. Articles with a prospective, randomized, and double-blind design were more likely to be published in a journal of a higher IF. Similarly, the source of funding was influential; on average, NIH-funded studies had a higher IF than those funded by other US national agencies, and, among the funded studies, the lowest average IF was found in those funded by (non-competitive) pharmaceutical industry grants. Theoretically, this may be explained in two very different ways. The first possible explanation is that the recommendations and decisions of reviewers and editors are influenced by the source of funding. We speculate that, more likely, the source of funding is correlated with the quality and importance of a given research proposal; if our speculation is correct, the more competitive the origin of the research funding, the higher the quality of the research work.
A limitation of the current study is that its findings apply only to published articles. Whether a given study submitted to a journal has the same chance of being published when it reports negative results as when it reports positive results is a different issue, which cannot be answered by the design of the current study.
We conclude that in the field of Pediatric Infectious Diseases, the presence of PR rather than NR does not hamper the chances of a given study to be published in a high impact factor journal. Rather, it appears that factors associated with the quality of the study, specifically its design or source of funding, may be more influential.
Conflicts of Interest Statement
The authors declare no financial or personal relationships with other people or organizations that could inappropriately influence the current work.
1. Introducing the impact factor: http://thomsonreuters.com/journal-citation-reports/ (Last accessed on July 24, 2013).
2. European Association of Science Editors (EASE) Statement on Inappropriate Use of Impact Factors. http://www.ease.org.uk/publications/impact-factor-statement (Last accessed on July 24, 2013).
3. Littner Y, Mimouni FB, Dollberg S, et al. Negative results and impact factor: a lesson from neonatology. Arch Pediatr Adolesc Med. 2005; 159: 1036-1037.
4. Easterbrook PJ, Berlin JA, Gopalan R, et al. Publication bias in clinical research. Lancet. 1991; 337: 867-872.