46 Literature Review and Statistics


The medical literature industry has grown exponentially in the internet era. It is common for inboxes to be flooded with emails purporting to present ground-breaking data from the latest study, accompanied by an expert commentary or two. Authors, even early in their careers, are regularly invited to contribute to ‘open access’ journals with promises of wide exposure for their work on a global platform, further swelling the volume of literature. There is therefore a clear need for clinicians and researchers, irrespective of their stage on the career pathway, to acquire the skills needed to assess the relevance, impact and applicability of a paper to their practice. The journal in which an article is published, and its origin and provenance, should not be taken as an indication of its trustworthiness and relevance; instead, readers should approach the paper as they would any other, using critical appraisal skills and tools and arriving at their own judgement. Unless a systematic approach is used to evaluate a research paper, the reader can be lulled into a false sense of security by the headline results. This chapter will not only provide such a framework, but also offer a pragmatic view, as many questions in the surgical literature, especially in a small surgical specialty such as ours, will never accrue the highest quality of evidence to guide practice in all areas.


46.1 Peer Review


Almost all journals now use an online platform for article submission and peer review. Once an article has been submitted to a journal, it undergoes a quality check to ensure that it meets the journal style. Based on the editorial board and journal policies, a quick assessment of the content is made, usually by a senior editor, to ensure that the topic will be of interest to the readership and is appropriate for the journal, after which the paper is sent out for peer review. The peer review process requires reviewers to critically evaluate an article submitted to a journal and make recommendations to the editor on its scientific merit and suitability for publication. A critical review takes significant time and effort, especially when starting out, but is a very useful skill to acquire. Unfortunately, many reviewers, often chosen for their prominence in a specialty, either lack the skills or do not have the time to do this properly. Journals with higher impact factors tend to have a more robust peer review process.


Table 46.1 lists journals ranked by impact factor, one of the metrics used for journal citation, as per the Web of Science Core Collection maintained by Clarivate Analytics (formerly the intellectual property of Thomson Reuters), which ranks 12,090 publications across 256 disciplines in science, social science, arts and humanities. The table enumerates the top five journals in the list to give the reader a perspective on the specialties residing at the top, followed by some prominent otolaryngology journals. Different biomedical specialties have different citation patterns, and what counts as a high impact factor in one field is not necessarily high in another. To help place this list in context, here are some facts: Otorhinolaryngology is ranked 170 among 234 journal categories, which range from Acoustics through Linguistics to Zoology. The aggregate impact factor of otolaryngology journals is 1.768; the categories at the top of the ranking have a broader scope and include fields such as nanoscience, cell science and chemistry, with aggregate impact factors greater than 5.5. Within the medical specialties, critical care medicine tops the categories with an aggregate impact factor of 4.483.


46.2 Literature Review


An important component of literature evaluation is the skill to carry out a literature review. To enable a good evaluation, the study should be assessed alongside existing research work to determine its importance. Searching for relevant literature is a skill that is best learnt by attending dedicated courses and requires a period of study that will teach the technical terms, guidelines and methodology needed to perform a comprehensive search.
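For readers who want to experiment before attending a formal course, the sketch below shows one way a structured search might be scripted against PubMed. It is a minimal illustration only, assuming Python with the Biopython package installed; the search string, contact email and result limit are placeholders, not a recommended search strategy.

```python
# Minimal sketch: querying PubMed through the NCBI Entrez API with Biopython.
# The search string and email address are illustrative placeholders only.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # NCBI requires a contact address

# A simple Boolean query combining a population term and an intervention term
query = '("otitis media with effusion"[Title/Abstract]) AND (grommets OR "ventilation tubes")'

handle = Entrez.esearch(db="pubmed", term=query, retmax=20)
record = Entrez.read(handle)
handle.close()

print(f'{record["Count"]} records match; first {len(record["IdList"])} PMIDs:')
for pmid in record["IdList"]:
    print(pmid)
```

A scripted query such as this is no substitute for a properly designed search strategy, but it can help a trainee see how Boolean operators and field tags change the number and relevance of records returned.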


46.3 Critical Appraisal


Critical appraisal (CA) of a study can be boiled down to one question: ‘Are the methodology and results robust enough to persuade me to apply the study recommendations to my patient population?’ The primary focus of a CA should be on the adequacy of the study methods. An inadequate description of the methodology will not allow a comprehensive appraisal of the study’s relevance, reliability and applicability.


Table 46.1 Impact factor for selected journals in 2016, based on the Web of Science maintained by Clarivate Analytics

Rank | Journal | Impact factor
1 | CA: A Cancer Journal for Clinicians | 187.04
2 | NEJM | 72.046
3 | Nature Reviews Drug Discovery | 57
4 | Chemical Reviews | 47.928
5 | Lancet | 47.831
19 | Lancet Oncology | 33.9
22 | Cell | 30.41
23 | Nature Medicine | 29.886
48 | BMJ | 20.785
167 | PLOS Medicine | 11.862
252 | Cancer Research | 9.122
856 | Oral Oncology | 4.794
1829 | Head and Neck | 3.376
2277 | Trends in Hearing | 3.024
2378 | JAMA Otolaryngology–Head and Neck Surgery | 2.951
2454 | Hearing Research | 2.906
2564 | Ear and Hearing | 2.842
3129 | Clinical Otolaryngology | 2.523
3236 | Laryngoscope | 2.471
3509 | Rhinology | 2.35
3709 | Otolaryngology–Head and Neck Surgery | 2.276
4189 | Dysphagia | 2.077
4326 | Otology and Neurotology | 2.024
5408 | European Archives of Otorhinolaryngology | 1.66
8805 | Journal of Laryngology and Otology | 0.844
9368 | Laryngo-Rhino-Otologie | 0.732
9422 | HNO | 0.723
10145 | B-ENT | 0.578


Note: In any given year, the impact factor of a journal is the number of citations received in that year to articles published in that journal during the two preceding years, divided by the total number of articles published in that journal during those two years.
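As a worked illustration of this definition (the figures below are hypothetical, chosen only to show the arithmetic), a journal’s 2016 impact factor would be calculated as:

```latex
% Hypothetical worked example of a 2016 impact factor calculation
\mathrm{IF}_{2016}
  = \frac{\text{citations received in 2016 to items published in 2014--2015}}
         {\text{citable items published in 2014--2015}}
  = \frac{600}{250} = 2.4
```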


Critical appraisal of the literature has come a long way since its inception, thanks to the pioneering work of Sir Muir Gray, who set up the Critical Appraisal Skills Programme (CASP) in 1993. Research methodologists realised early on that using checklists for systematic literature evaluation permits rapid and comprehensive assimilation of critical appraisal skills. Since then, many organisations have set up task groups to generate checklists, scoring systems and visual aids for the purpose (Table 46.2). These include the University of Alberta, University of Adelaide, University of Bristol, Cardiff University, McMaster University, the National Institute for Health and Care Excellence, University of Oxford, University of Sheffield, the Scottish Intercollegiate Guidelines Network, University of South Australia and independent ventures such as the GRACE initiative. The rest of this chapter describes the steps and pitfalls that will allow the reader to appraise a paper systematically, largely following the principles of CASP. Some of these organisations have recorded videos of their critical appraisal teaching programmes, available free online.


Table 46.2 Common checklists used for the various study types

Study type | Checklist
Randomised controlled trials | CASP; Cochrane RoB tool; EPOC RoB tool; GATE CAT; SURE
Nonrandomised controlled trials | EPOC RoB tool; GATE CAT; SURE
Systematic reviews and meta-analyses | CASP; AMSTAR 2; SURE; GATE CAT
Case-control studies | GATE CAT; CASP
Cohort and cross-sectional studies | GATE CAT; Cochrane RoB tool; EPOC RoB tool; Newcastle–Ottawa scale; EPHPP; GRACE; CASP
Economic evaluations | CHEERS; CASP; NICE
Qualitative studies | SURE; CASP
Diagnostic accuracy studies | CASP; QUADAS-2

Abbreviations: AMSTAR, A MeaSurement Tool to Assess systematic Reviews (http://www.amstar.ca/index.php); CASP, Critical Appraisal Skills Programme (http://www.casp-uk.net/casp-tools-checklists); EPHPP, Effective Public Health Practice Project (http://www.ephpp.ca/tools.html); EPOC, Effective Practice and Organisation of Care (http://epoc.cochrane.org/); GATE CAT, Graphic Appraisal Tool for Epidemiology Critically Appraised Topics (https://www.fmhs.auckland.ac.nz/en/soph/about/our-departments/epidemiology-and-biostatistics/research/epiq/evidence-based-practice-and-cats.html); GRACE, Good ReseArch for Comparative Effectiveness initiative (https://www.graceprinciples.org/about-grace.html); NICE Guidelines, The Manual, Appendix H, pp. 7–20; QUADAS, QUality Assessment of Diagnostic Accuracy Studies; RoB, Risk of Bias; SURE, Specialist Unit for Review Evidence (http://www.cardiff.ac.uk/specialist-unit-for-review-evidence/resources/critical-appraisal-checklists); Newcastle–Ottawa scale (http://www.ohri.ca/programs/clinical_epidemiology/oxford.asp).


46.4 Reporting Guidelines


Akin to critical appraisal checklists, reporting guidelines (RGs) have evolved; their purpose is to ensure that accurate and transparent information is provided about study methods and findings. The framework of an RG helps researchers to report all relevant aspects of a study and permits peer reviewers to assess the adequacy of reporting. The best known of all RGs is the CONSORT statement, first published in 1996 and revised in 2001 and 2010. The profusion of RGs led to the creation of the EQUATOR (Enhancing the QUAlity and Transparency Of health Research; http://www.equator-network.org/) network, an ‘umbrella’ organisation that brings together research methodologists, guideline development groups, journal editors, peer reviewers and funders. The EQUATOR site is a one-stop shop that lists RGs for 384 study types, ensuring that one is available for almost any study being written up. Table 46.3 identifies RGs for the main study types. It is common practice for funders and high-impact journals to demand an RG checklist at the time of submission of the report. In addition, specialty-specific guidelines also exist; for instance, the STROCSS statement (STrengthening the Reporting Of Cohort Studies in Surgery) and the PROCESS guidelines (Preferred Reporting Of CasE Series in Surgery) are specifically meant for cohort studies and case series in the surgical specialties. Despite the profusion of RGs, they are not fully enforced by journals, primarily owing to resource constraints.


46.5 Critical Appraisal Checklists versus Reporting Guidelines


It should be noted that RGs and CA checklists are distinct and have different purposes. It has been said that using an RG is like ‘turning the light on before you clean up a room: it doesn’t clean it for you, but does tell you where the problems are’.* To extend this analogy, a critical appraisal helps the visitor to the room to sift through the ‘dirt’ and understand the problems better. It is the author’s practice to carefully study the RG checklist if it is published with the paper; this helps focus the CA on the areas where the RG has identified deficiencies.


* LaCombe MA, Davidoff F. On Being a Doctor 2. Ann Intern Med;132:671–672.


Table 46.3 Reporting guidelines for the main study types. For more information on these reporting guidelines, please visit the EQUATOR website (http://www.equator-network.org/).

Study type | Reporting guideline
Randomised controlled trials | CONSORT
Systematic reviews and meta-analyses | PRISMA; MOOSE
Observational studies | STROBE
Economic evaluations | CHEERS
Qualitative studies | SRQR; COREQ
Diagnostic accuracy studies | STARD; TRIPOD
Study protocols | SPIRIT; PRISMA-P
Clinical practice guidelines | AGREE; RIGHT
Search strategies | PRESS


46.6 Risk of Bias Assessment


It is crucial for the critical appraiser to have the skills to identify the risk of bias across a variety of studies. Bias is a systematic error in the results or inferences of a study; it should not be confused with imprecision. In the presence of bias, replications of the same study would reach the same wrong conclusion, whereas imprecision refers to random error, so replications of the same study will give different answers. It must be recognised that bias is not a dichotomous variable, qualified simply by its presence or absence. Instead, the critical appraiser should consider the degree to which bias was prevented by the study design and implementation; given that bias is nearly always present to some degree, the appraiser must consider how it might influence the results and their applicability. For instance, bias in a systematic review will reduce the strength of any recommendation, even when the included studies report uniform and consistent estimates of an intervention’s effect size.
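The distinction can be illustrated numerically. The sketch below is a toy simulation (the effect sizes, bias and sample sizes are hypothetical, not drawn from any real study): a biased study reproduces roughly the same wrong answer on every replication, whereas an unbiased but imprecise study scatters around the right answer.

```python
# Toy simulation contrasting systematic error (bias) with random error (imprecision).
# All numbers are hypothetical and chosen purely for illustration.
import random

TRUE_EFFECT = 10.0   # the "true" treatment effect we are trying to estimate
BIAS = 5.0           # systematic error, e.g. from confounding
random.seed(1)

def replicate(n, bias, noise_sd):
    """Return the mean estimated effect from one study of n patients."""
    estimates = [TRUE_EFFECT + bias + random.gauss(0, noise_sd) for _ in range(n)]
    return sum(estimates) / n

# Biased but precise: large study, systematic error present
biased = [replicate(n=500, bias=BIAS, noise_sd=2.0) for _ in range(5)]

# Unbiased but imprecise: small study, no systematic error
imprecise = [replicate(n=10, bias=0.0, noise_sd=8.0) for _ in range(5)]

print("True effect:", TRUE_EFFECT)
print("Biased study replications:   ", [round(x, 2) for x in biased])
print("Imprecise study replications:", [round(x, 2) for x in imprecise])
# The biased replications cluster tightly around 15 (consistently wrong);
# the imprecise ones scatter widely around 10 (right on average, unreliable singly).
```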


The Cochrane Collaboration has developed an excellent risk of bias (RoB) tool, as have other organisations. The Cochrane Collaboration also offers webinars on RoB assessment, which are highly recommended. Bias is regularly discussed and taught across the curriculum, so this section will not go into detail on the various types of bias.


46.7 Steps to a Critical Appraisal


1. Identify the category of the article.


The questions asked to determine the quality of a study should be tailored to the type of publication. Common article types that are usually critically evaluated, and that form the focus of journal clubs, include systematic reviews, randomised controlled trials (RCTs) and diagnostic studies; for rarer diseases and in the epidemiological literature, cohort studies will be the focus. Less common designs include case-control studies, economic evaluations, qualitative studies and clinical prediction rules. Fig. 46.1 depicts an algorithm for identifying quantitative (experimental and observational) study designs.


2. Peruse the abstract.


The first question to ask is whether the study addresses a clearly focused question. Asking a focused question is a skill, and most researchers early in their career struggle with this step when proposing a piece of research. There are important lessons in framing the research question that a novice trainee can learn from the publication of a well-planned and executed study. The focused question should include the following elements, best summarised as PICO: the Patient population being studied should be succinctly and clearly described; the Intervention being studied should be described in an appropriate amount of detail, as should the Comparator; and the Outcomes used to measure the study results should be clearly set out as part of this framework. The PICO should be evident in the abstract itself, and where journals opt for a structured abstract, this information is readily identified. If, on perusing the abstract, any element of PICO is unclear, or if the question does not apply to the reader’s practice, the abstract may well be the only component of the paper that is read. The PICO format, evidently, will not apply to all study types.
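As a concrete illustration (the clinical question below is entirely hypothetical and chosen only to show the structure), a PICO question can be jotted down as a small structured record before reading beyond the abstract:

```python
# Hypothetical PICO breakdown of an invented clinical question, for illustration only.
from dataclasses import dataclass

@dataclass
class PICO:
    patient: str       # Population being studied
    intervention: str  # Intervention of interest
    comparator: str    # What the intervention is compared against
    outcome: str       # How the result is measured

question = PICO(
    patient="Adults with chronic rhinosinusitis without nasal polyps",
    intervention="Endoscopic sinus surgery",
    comparator="Continued maximal medical therapy",
    outcome="Disease-specific quality of life (e.g. SNOT-22) at 12 months",
)

# A missing or vague element flags a poorly focused question.
for field, value in vars(question).items():
    print(f"{field.capitalize():13}: {value}")
```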

