TL;DR: A systematic literature review follows a rigorous, reproducible methodology to minimize bias and provide comprehensive evidence synthesis. Unlike traditional narrative reviews, systematic reviews require a registered protocol, comprehensive search across multiple databases, dual independent screening, quality assessment with tools like AMSTAR 2, and transparent reporting following PRISMA 2020 guidelines. This guide provides a step-by-step process with templates you can adapt for your dissertation or research paper.
If you’re writing a dissertation, thesis, or research paper, you’ve likely been told to “review the literature.” But what does that actually mean—and how do you produce a review that stands up to rigorous academic scrutiny?
Traditional literature reviews often follow a catalog approach: summarize studies chronologically or thematically, synthesize findings narratively, and identify gaps based on the author’s expertise. While valuable for exploratory research, this approach has significant limitations: it’s susceptible to selection bias, lacks reproducibility, and cannot reliably inform evidence-based practice or policy decisions.
The systematic approach transforms the literature review from a descriptive summary into a rigorous research method. Systematic reviews follow predefined protocols, use exhaustive search strategies, employ dual independent reviewers, assess study quality, and synthesize results with explicit methods. This methodology originated in healthcare (Cochrane Collaboration) but is now standard across disciplines, with specific guidelines like PRISMA 2020 for health sciences, PRISMA-S for search methodology, and PROSPERO registration for protocol transparency.
Research shows that 40-60% of published systematic reviews fail basic quality standards, often due to incomplete search strategies, single-reviewer screening, inadequate quality assessment, or poor reporting. By following this systematic approach, you’ll produce a literature review that is:
This guide walks you through the complete systematic review process, from defining your research question using the PICO framework to writing up your findings according to PRISMA 2020 standards. While we focus on systematic reviews of quantitative studies, we also cover synthesis methods for qualitative reviews.
Understanding the distinction between traditional and systematic approaches is essential before you begin.
| Aspect | Traditional Literature Review | Systematic Review |
|---|---|---|
| Research Question | Broad, exploratory, evolves during writing | Focused, predefined using PICO (Population, Intervention, Comparison, Outcomes) |
| Protocol | Informal, optional, may change during writing | Registered prospectively (PROSPERO for health, OSF for other fields) |
| Search Strategy | Selective databases, limited keywords, no documentation | Exhaustive: 6+ databases, controlled vocabulary (MeSH/Emtree), grey literature, full search strings reported |
| Study Selection | Single reviewer, implicit criteria | Dual independent reviewers + third arbitrator for conflicts; inclusion/exclusion criteria table |
| Quality Assessment | Implicit or absent; all studies treated equally | Explicit tool (AMSTAR 2, RoB 2, CASP, QUADAS-2) with dual assessment |
| Data Extraction | Notes and summaries, no standardization | Structured extraction form (study design, participants, interventions, outcomes) in duplicate |
| Synthesis | Narrative summary, thematic grouping | Thematic/narrative synthesis OR quantitative meta-analysis with effect sizes, heterogeneity analysis |
| Reporting Standards | Journal-specific, no checklists | PRISMA 2020 checklist (27 items), flow diagram mandatory |
| Bias Mitigation | Limited acknowledgment | Publication bias assessment (funnel plot, Egger’s test), selective outcome reporting checked |
| Updateability | Static publication | Living systematic reviews possible with ongoing surveillance |
| Time Investment | 4-8 weeks | 6-12 months for full systematic review |
Key takeaway: A systematic review is itself a research study. You’re applying the same rigor to synthesizing existing evidence that you would expect in primary research.
The foundation of any systematic review is a precisely defined research question. Vague questions lead to inconsistent inclusion decisions and unclear synthesis. The PICO framework (Population/Problem, Intervention/Exposure, Comparison, Outcomes), often extended to PICOS by adding Study design, brings clarity and focus.
P (Population/Problem): Who or what are you studying?
I (Intervention/Exposure): What treatment, exposure, or phenomenon?
C (Comparison): What is the alternative?
O (Outcomes): What effects matter?
S (Study design): What types of evidence?
In adults with hypertension (P), does dietary nitrate supplementation (I) compared to placebo or usual care (C) reduce systolic and diastolic blood pressure (O) as measured in randomized controlled trials (S)?
| Component | Definition | Inclusion Criteria | Exclusion Criteria |
|-----------|------------|-------------------|-------------------|
| Population | Adults with hypertension | Age ≥18, diagnosed hypertension (BP ≥140/90 mmHg or medication) | Children, adolescents, pre-eclampsia |
| Intervention | Dietary nitrate | Beetroot juice, nitrate capsules, ≥4 weeks duration | Other supplements (vitamins, minerals) |
| Comparison | Control | Placebo, no intervention, usual care | Other supplements as active comparison |
| Outcomes | Blood pressure | Systolic and diastolic mmHg | Patient satisfaction only, quality of life |
| Study Design | RCTs only | Randomized controlled trials (parallel, crossover) | Observational, case reports, reviews |
Common mistake: PICO too broad (“effects of exercise on health”) or too narrow (“DBP response to 500ml beetroot juice in male athletes aged 25-30”). Find the sweet spot that captures the evidence base while remaining focused.
Source: The Cochrane Handbook emphasizes PICO for structuring review questions: https://www.cochrane.org/authors/handbooks-and-manuals/handbook/current/chapter-i
Perhaps the most critical distinction between systematic and traditional reviews is prospective protocol registration. Journals increasingly mandate this, and it protects against outcome reporting bias.
Health/Clinical Topics: PROSPERO (International Prospective Register of Systematic Reviews)
Non-Health Topics: OSF (Open Science Framework)
Why register?
Template: Use PROSPERO’s structured template as a guide even for OSF registrations.
Common mistake: Registering protocol after screening begins. Some journals require registration before any screening; at minimum, it should precede final study selection.
Your eligibility criteria operationalize your PICO question into concrete screening rules. These must be explicit, objective, and applied consistently by all reviewers.
| Criterion | Inclusion | Exclusion |
|-----------|-----------|-----------|
| Population | Adults ≥18 years with diagnosed hypertension (BP ≥140/90 mmHg or antihypertensive medication) | Children, adolescents (<18), gestational hypertension only, secondary hypertension |
| Intervention | Dietary nitrate supplementation (beetroot juice, nitrate capsules) ≥2 weeks duration | Multifactorial lifestyle interventions without isolated nitrate effect; other supplements (vitamins, minerals) |
| Comparison | Placebo, no intervention, usual care | Active supplements as sole comparator |
| Outcomes | Must report systolic and/or diastolic BP change | Patient satisfaction, quality of life only |
| Study Design | Randomized controlled trials (parallel, crossover) | Observational, quasi-experimental, case reports, reviews, protocols |
| Publication Date | 2010-2024 | Before 2010 |
| Language | English, Spanish (if translation available) | No English/Spanish abstract available |
| Publication Status | Peer-reviewed journal articles | Conference abstracts only, theses (unless no journal articles) |
Practical tip: Create this table collaboratively with your review team before screening begins. It becomes your screening handbook.
Source: University library guides emphasize clear, objective criteria to reduce reviewer subjectivity: https://guides.lib.unc.edu/systematic-reviews/
This is where many reviews fall short. The average systematic review searches only 3-4 databases, but standards recommend 6+ to minimize bias. Combined with grey literature and thoughtful keyword selection, your search strategy determines whether you’ll miss critical evidence.
Minimum 6 recommended:
Additional as needed: Business Source Complete, JSTOR, Google Scholar (as supplemental, not primary), ClinicalTrials.gov for unpublished trials.
Why controlled vocabulary matters: Databases index articles using standardized subject headings (MeSH in PubMed, Emtree in Embase). Searching only keywords misses articles where your concept uses different terminology.
Example Topic: Dietary nitrate for hypertension
PubMed example with MeSH:
(hypertension[MeSH Terms] OR "high blood pressure"[Title/Abstract] OR "elevated blood pressure"[Title/Abstract])
AND (dietary nitrate[MeSH Terms] OR "beetroot juice"[Title/Abstract] OR "nitrate supplementation"[Title/Abstract])
AND (randomized controlled trial[Publication Type] OR randomized[Title/Abstract] OR placebo[Title/Abstract])
Filters: English, 2010-2024, Humans
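If you want a quick hit-count check while refining a strategy, the same string can be run against PubMed programmatically. A minimal sketch using Biopython's Entrez wrapper for the NCBI E-utilities (using Biopython here is our assumption; any E-utilities client works):

```python
# Sketch: retrieve the PubMed hit count for a draft search string.
# Assumes the Biopython package is installed (pip install biopython).
from Bio import Entrez

Entrez.email = "your.name@university.edu"  # NCBI asks for a contact address

query = (
    '(hypertension[MeSH Terms] OR "high blood pressure"[Title/Abstract] '
    'OR "elevated blood pressure"[Title/Abstract]) '
    'AND (dietary nitrate[MeSH Terms] OR "beetroot juice"[Title/Abstract] '
    'OR "nitrate supplementation"[Title/Abstract]) '
    'AND (randomized controlled trial[Publication Type] '
    'OR randomized[Title/Abstract] OR placebo[Title/Abstract])'
)
# Date, language, and species filters can be appended to the query
# or applied in the PubMed interface before exporting records.

handle = Entrez.esearch(db="pubmed", term=query, retmax=0)
record = Entrez.read(handle)
handle.close()

print(f"PubMed records matching the draft strategy: {record['Count']}")
```

Compare the count against what you see in the PubMed interface before exporting records for screening.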
Key techniques:
The PRISMA-S extension specifies 16 items for transparent search reporting: https://www.sciencedirect.com/science/article/pii/S0040162524006310. Ensure you report:
Common mistake: Using only keywords without subject headings; reporting incomplete search strings; omitting grey literature searches.
Once your search strategy is finalized, run it across all databases. This produces a massive set of citations that must be managed carefully.
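The first management task is deduplication. Reference managers (EndNote, Zotero) and screening platforms (Covidence, Rayyan) handle this for you; the minimal pandas sketch below just illustrates the idea, assuming a CSV export with `title` and `doi` columns (the column and file names are our assumptions about your export):

```python
# Sketch: crude deduplication of exported citations by DOI, then by normalized title.
import pandas as pd

records = pd.read_csv("combined_search_results.csv")

# Normalize DOIs and titles so trivial formatting differences don't hide duplicates.
records["doi_norm"] = records["doi"].astype("string").str.lower().str.strip()
records["title_norm"] = (
    records["title"]
    .astype("string")
    .str.lower()
    .str.replace(r"[^a-z0-9 ]", "", regex=True)
    .str.strip()
)

before = len(records)

# Deduplicate on DOI where one exists, then on normalized title across everything.
has_doi = records["doi_norm"].notna() & (records["doi_norm"] != "")
deduped = pd.concat(
    [records[has_doi].drop_duplicates(subset="doi_norm"), records[~has_doi]]
).drop_duplicates(subset="title_norm")

# Title matching this crude will miss some duplicates; check borderline cases by hand.
print(f"Total records before deduplication: {before}")
print(f"Records after duplicates removed:   {len(deduped)}")  # report in the PRISMA flow
deduped.to_csv("records_for_screening.csv", index=False)
```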
The PRISMA flow diagram is mandatory reporting. Track these numbers throughout screening:
Records identified through database searching (n = )
Additional records identified through other sources (n = ) [handsearching, grey literature, etc.]
⬇
Total records before deduplication (n = )
⬇
Records after duplicates removed (n = )
⬇
Records screened (title/abstract) (n = )
Records excluded (n = ) with reasons documented
⬇
Full-text articles assessed for eligibility (n = )
Full-text articles excluded, with reasons (n = ) [provide reasons in table or text]
⬇
Studies included in qualitative synthesis (n = )
Studies included in quantitative synthesis (meta-analysis) (n = )
Documentation: Maintain a screening log with reasons for exclusion at full-text stage. Journal reviewers will expect this.
Common mistake: Failing to document exclusion reasons (PRISMA requirement); not reporting grey literature sources in flow diagram.
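Because editors and peer reviewers routinely check that the flow numbers reconcile, it helps to keep the counts in one place and verify the arithmetic before drawing the diagram. A minimal sketch with placeholder counts (not real data):

```python
# Sketch: sanity-check PRISMA flow counts. All numbers below are placeholders.
flow = {
    "identified_databases": 1480,
    "identified_other_sources": 35,
    "after_deduplication": 1102,
    "screened_title_abstract": 1102,
    "excluded_title_abstract": 1010,
    "fulltext_assessed": 92,
    "fulltext_excluded": 70,
    "included_qualitative": 22,
    "included_meta_analysis": 18,
}

duplicates_removed = (
    flow["identified_databases"] + flow["identified_other_sources"] - flow["after_deduplication"]
)

# Each stage should reconcile exactly; a mismatch usually means a screening-log error.
assert flow["screened_title_abstract"] == flow["after_deduplication"]
assert flow["fulltext_assessed"] == flow["screened_title_abstract"] - flow["excluded_title_abstract"]
assert flow["included_qualitative"] == flow["fulltext_assessed"] - flow["fulltext_excluded"]
assert flow["included_meta_analysis"] <= flow["included_qualitative"]

print(f"Duplicates removed: {duplicates_removed}")
```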
Screening titles and abstracts determines initial eligibility, but this must be done in duplicate by independent reviewers.
Common mistake: Relying on a single reviewer for screening, which substantially increases the risk of missed or wrongly excluded studies; not calculating Cohen's kappa to check agreement between reviewers.
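Agreement between the two screeners is usually summarized with Cohen's kappa. A minimal sketch using scikit-learn, assuming each reviewer's include/exclude decisions are exported as parallel lists (the decisions shown are illustrative):

```python
# Sketch: Cohen's kappa for two screeners' title/abstract decisions.
# The decision lists are illustrative; in practice, export them from your screening tool.
from sklearn.metrics import cohen_kappa_score

reviewer_a = ["include", "exclude", "exclude", "include", "exclude", "include", "exclude"]
reviewer_b = ["include", "exclude", "include", "include", "exclude", "exclude", "exclude"]

kappa = cohen_kappa_score(reviewer_a, reviewer_b)
print(f"Cohen's kappa = {kappa:.2f}")
# Rough convention: values above 0.60 are often treated as substantial agreement;
# lower values suggest the eligibility criteria need clarification before proceeding.
```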
Once eligible studies are identified, extract data systematically using a structured form. Do this in duplicate—two reviewers extract independently, then compare and reconcile differences.
| Category | Fields |
|---|---|
| Study identification | Author, year, title, journal, country |
| Study design | RCT, cohort, case-control; specific design details |
| Participants | Sample size, age, gender, diagnostic criteria, setting |
| Intervention | Description, duration, dosage, delivery, adherence |
| Comparison | Placebo, usual care, alternative intervention |
| Outcomes | Primary & secondary outcomes with definitions, measurement tools, time points |
| Results | Means, SDs, n for continuous; events/n for dichotomous; effect estimates if provided |
| Funding | Source, conflicts of interest declared |
| Additional | Follow-up duration, attrition rates, subgroup analyses |
Download a customizable Excel/CSV template with built-in data validation (see our resource library). Include separate tabs for:
Common mistake: Single-reviewer extraction (increases errors); failing to contact authors for missing data (document all attempts).
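If you would rather build the form yourself, a spreadsheet with one row per study and one column per field is enough to start. The sketch below writes a blank CSV template whose field names mirror the extraction categories above (the exact names are illustrative and should be adapted to your PICO):

```python
# Sketch: generate a blank data extraction spreadsheet (one row per included study).
# Field names mirror the extraction table above; adapt them to your review.
import csv

fields = [
    "study_id", "author", "year", "title", "journal", "country",
    "study_design", "sample_size", "age_mean", "gender_split", "setting",
    "intervention_description", "duration_weeks", "dosage", "adherence",
    "comparator", "primary_outcome", "measurement_tool", "time_points",
    "mean_intervention", "sd_intervention", "n_intervention",
    "mean_control", "sd_control", "n_control",
    "funding_source", "conflicts_declared", "attrition_rate", "notes",
]

with open("extraction_form_reviewer_A.csv", "w", newline="") as f:
    csv.writer(f).writerow(fields)

print("Blank extraction form written; give each reviewer an identical copy.")
```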
Not all studies are equally reliable. Quality assessment evaluates methodological rigor to determine how much confidence you can place in each study’s findings. This directly influences your synthesis and certainty ratings.
| Study Design | Recommended Tool | Items |
|---|---|---|
| Systematic reviews of RCTs/observational | AMSTAR 2 | 16 |
| Randomized trials | RoB 2 (Cochrane) | 5 domains × signaling questions |
| Non-randomized studies | ROBINS-I | 7 domains |
| Diagnostic accuracy studies | QUADAS-2 | 4 domains |
| Qualitative studies | CASP qualitative | 10 |
| Mixed methods | MMAT | 25 criteria (5 per study-design category) + 2 screening questions |
AMSTAR 2 (A MeaSurement Tool to Assess Systematic Reviews) is most commonly used for systematic reviews themselves. It evaluates:
Scoring: Each item is rated Yes, Partial Yes, or No (a few items can be Not Applicable, e.g., when no meta-analysis was performed). Note that AMSTAR 2's developers advise against summing items into a numeric score; instead, you rate overall confidence based on weaknesses in critical domains. Overall confidence rating:
Source: Official AMSTAR 2 checklist and documentation at https://amstar.ca/Amstar_Checklist.php
Like extraction, perform quality assessment in duplicate:
Common mistake: Using AMSTAR 2 without understanding that items 11, 12, and 15 only apply if a meta-analysis was conducted (rate them N/A otherwise).
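As a rough illustration of how the confidence rating is derived, here is a minimal sketch that tallies critical flaws and non-critical weaknesses, assuming the commonly cited set of critical items (2, 4, 7, 9, 11, 13, 15); verify this mapping against the current AMSTAR 2 guidance before relying on it:

```python
# Sketch: derive an AMSTAR 2 overall confidence rating from item responses.
# CRITICAL_ITEMS follows the commonly cited list; verify against the official guidance.
CRITICAL_ITEMS = {2, 4, 7, 9, 11, 13, 15}

def amstar2_confidence(responses: dict[int, str]) -> str:
    """responses maps item number (1-16) to 'yes', 'partial yes', 'no', or 'n/a'."""
    critical_flaws = sum(
        1 for item, answer in responses.items()
        if item in CRITICAL_ITEMS and answer == "no"
    )
    noncritical_weaknesses = sum(
        1 for item, answer in responses.items()
        if item not in CRITICAL_ITEMS and answer in {"no", "partial yes"}
    )
    if critical_flaws > 1:
        return "Critically low"
    if critical_flaws == 1:
        return "Low"
    if noncritical_weaknesses > 1:
        return "Moderate"
    return "High"

# Illustrative responses for a single review under appraisal.
example = {i: "yes" for i in range(1, 17)}
example[7] = "no"   # no list of excluded studies -> one critical flaw
example[10] = "no"  # funding of included studies not reported
print(amstar2_confidence(example))  # -> "Low"
```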
Synthesis integrates findings from included studies. Choose the appropriate method based on your outcome data homogeneity and research question.
Meta-analysis statistically pools effect sizes across studies. Prerequisites:
Statistical considerations:
Effect Size Metrics:
| Outcome Type | Effect Size | Interpretation |
|---|---|---|
| Continuous (same scale) | Mean Difference (MD) | Raw difference in units |
| Continuous (different scales) | Standardized Mean Difference (SMD, Cohen’s d) | Pooled SD units; 0.2=small, 0.5=medium, 0.8=large |
| Dichotomous | Odds Ratio (OR), Risk Ratio (RR) | OR<1 or RR<1 favors intervention when the outcome is undesirable (e.g., mortality, adverse events) |
| Time-to-event | Hazard Ratio (HR) | Instantaneous risk over time |
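To make the SMD row concrete, here is a short worked sketch computing Cohen's d from the summary statistics you would pull off an extraction form (the numbers are illustrative, not from a real trial):

```python
# Sketch: standardized mean difference (Cohen's d) from extracted summary data.
# Values are illustrative, not from a real trial.
from math import sqrt

# Intervention arm                    # Control arm
mean_i, sd_i, n_i = -7.2, 6.5, 34     # e.g., change in systolic BP (mmHg)
mean_c, sd_c, n_c = -2.1, 6.8, 33

# Pooled standard deviation across the two arms.
sd_pooled = sqrt(((n_i - 1) * sd_i**2 + (n_c - 1) * sd_c**2) / (n_i + n_c - 2))

d = (mean_i - mean_c) / sd_pooled
print(f"Mean difference: {mean_i - mean_c:.1f} mmHg, Cohen's d = {d:.2f}")
```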
Heterogeneity is the nemesis of meta-analysis. You must assess:
Model selection:
Action when I² high:
Forest plot: Visual representation showing each study’s effect size and confidence interval, plus pooled estimate. Square size reflects weight; diamond shows pooled CI.
Source: Cochrane Handbook statistical methods chapter at https://www.cochrane.org/authors/handbooks-and-manuals/handbook/current/chapter-10
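The pooling itself is normally done in dedicated software (RevMan, R's metafor, Stata), but the core DerSimonian-Laird random-effects calculation is compact enough to sketch, which also shows where Q, tau², and I² come from. The effect sizes and variances below are placeholders, not real trial data:

```python
# Sketch: DerSimonian-Laird random-effects pooling with Q and I² heterogeneity statistics.
import numpy as np

yi = np.array([-4.5, -6.2, -1.8, -5.0, -3.3])  # e.g., mean difference in systolic BP (mmHg)
vi = np.array([1.8, 2.4, 1.1, 3.0, 1.5])       # within-study variances

# Fixed-effect weights and Cochran's Q.
w = 1.0 / vi
pooled_fixed = np.sum(w * yi) / np.sum(w)
Q = np.sum(w * (yi - pooled_fixed) ** 2)
df = len(yi) - 1

# Between-study variance (tau²) and I².
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

# Random-effects weights, pooled estimate, and 95% confidence interval.
w_re = 1.0 / (vi + tau2)
pooled = np.sum(w_re * yi) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
ci_low, ci_high = pooled - 1.96 * se, pooled + 1.96 * se

print(f"Pooled MD = {pooled:.2f} mmHg (95% CI {ci_low:.2f} to {ci_high:.2f})")
print(f"Q = {Q:.2f} (df = {df}), tau² = {tau2:.2f}, I² = {I2:.0f}%")
```

The same effect-size and variance arrays are exactly what a forest-plot routine consumes, with study weights proportional to 1/(vi + tau²).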
If studies are too heterogeneous for statistical pooling, organize findings thematically:
Example structure: Instead of “Study 1 found X; Study 2 found Y; Study 3 found Z,” write: “Theme 1: Adherence challenges. Three studies reported poor adherence rates (15-30%)…” Then synthesize patterns across those studies.
The PRISMA 2020 statement provides a 27-item checklist for transparent reporting (BMJ 2021) https://www.prisma-statement.org/, with an extensive explanation and elaboration article (PMC) https://pmc.ncbi.nlm.nih.gov/articles/PMC8007028/. Journals increasingly require PRISMA adherence.
Title: Identify as systematic review (include “systematic review” or “systematic literature review”)
Abstract: Structured (background, objectives, methods, results, conclusions, registration)
Introduction:
Methods:
Results:
Discussion:
Other: Registration, protocol availability, funding, conflicts of interest
Source: PRISMA 2020 checklist and explanation: https://www.prisma-statement.org/prisma-2020-checklist
Based on analysis of published systematic reviews, here are high-impact errors that compromise quality:
You may also find these resources helpful as you work on your academic writing:
Writing a systematic literature review is a substantial undertaking—typically 6-12 months from protocol to submission. But the rigor pays off in credibility, reproducibility, and academic contribution.
Here’s your action plan:
Need expert assistance? QualityCustomEssays.com provides systematic review support including protocol development, search strategy design, data extraction, quality assessment, statistical analysis, and full writing services. Our team includes PhD-level researchers trained in Cochrane methodology. Contact us for a consultation or order a custom systematic review.
Use this abbreviated checklist during manuscript preparation: