TL;DR: A systematic literature review follows a rigorous, reproducible methodology to minimize bias and provide comprehensive evidence synthesis. Unlike traditional narrative reviews, systematic reviews require a registered protocol, comprehensive search across multiple databases, dual independent screening, quality assessment with tools like AMSTAR 2, and transparent reporting following PRISMA 2020 guidelines. This guide provides a step-by-step process with templates you can adapt for your dissertation or research paper.


Introduction: Why Move Beyond Traditional Literature Reviews?

If you’re writing a dissertation, thesis, or research paper, you’ve likely been told to “review the literature.” But what does that actually mean—and how do you produce a review that stands up to rigorous academic scrutiny?

Traditional literature reviews often follow a catalog approach: summarize studies chronologically or thematically, synthesize findings narratively, and identify gaps based on the author’s expertise. While valuable for exploratory research, this approach has significant limitations: it’s susceptible to selection bias, lacks reproducibility, and cannot reliably inform evidence-based practice or policy decisions.

The systematic approach transforms the literature review from a descriptive summary into a rigorous research method. Systematic reviews follow predefined protocols, use exhaustive search strategies, employ dual independent reviewers, assess study quality, and synthesize results with explicit methods. This methodology originated in healthcare (Cochrane Collaboration) but is now standard across disciplines, with specific guidelines like PRISMA 2020 for health sciences, PRISMA-S for search methodology, and PROSPERO registration for protocol transparency.

Research shows that 40-60% of published systematic reviews fail basic quality standards, often due to incomplete search strategies, single-reviewer screening, inadequate quality assessment, or poor reporting. By following this systematic approach, you’ll produce a literature review that is:

  • Comprehensive: Captures all relevant evidence, not just convenient studies
  • Reproducible: Others could replicate your process
  • Transparent: Your methods and decisions are clearly documented
  • Critical: You evaluate study quality rather than accepting all evidence equally
  • Synthesized: You integrate findings thematically or statistically rather than summarizing sequentially

This guide walks you through the complete systematic review process, from defining your research question using the PICO framework to writing up your findings according to PRISMA 2020 standards. While we focus on systematic reviews of quantitative studies, we also cover synthesis methods for qualitative reviews.


What Makes a Literature Review “Systematic”? Key Differences

Understanding the distinction between traditional and systematic approaches is essential before you begin.

| Aspect | Traditional Literature Review | Systematic Review |
|--------|-------------------------------|-------------------|
| Research Question | Broad, exploratory, evolves during writing | Focused, predefined using PICO (Population, Intervention, Comparison, Outcomes) |
| Protocol | Informal, optional, may change during writing | Registered prospectively (PROSPERO for health, OSF for other fields) |
| Search Strategy | Selective databases, limited keywords, no documentation | Exhaustive: 6+ databases, controlled vocabulary (MeSH/Emtree), grey literature, full search strings reported |
| Study Selection | Single reviewer, implicit criteria | Dual independent reviewers + third arbitrator for conflicts; inclusion/exclusion criteria table |
| Quality Assessment | Implicit or absent; all studies treated equally | Explicit tool (AMSTAR 2, RoB 2, CASP, QUADAS-2) with dual assessment |
| Data Extraction | Notes and summaries, no standardization | Structured extraction form (study design, participants, interventions, outcomes) in duplicate |
| Synthesis | Narrative summary, thematic grouping | Thematic/narrative synthesis OR quantitative meta-analysis with effect sizes, heterogeneity analysis |
| Reporting Standards | Journal-specific, no checklists | PRISMA 2020 checklist (27 items), flow diagram mandatory |
| Bias Mitigation | Limited acknowledgment | Publication bias assessment (funnel plot, Egger's test), selective outcome reporting checked |
| Updateability | Static publication | Living systematic reviews possible with ongoing surveillance |
| Time Investment | 4-8 weeks | 6-12 months for full systematic review |

Key takeaway: A systematic review is itself a research study. You’re applying the same rigor to synthesizing existing evidence that you would expect in primary research.


Phase 1: Define Your Research Question Using PICO

The foundation of any systematic review is a precisely defined research question. Vague questions lead to inconsistent inclusion decisions and unclear synthesis. The PICO framework (Population/Problem, Intervention/Exposure, Comparison, Outcomes), often extended to PICOS by adding Study design, brings clarity and focus.

Step 1.1: Identify Each PICOS Component

P (Population/Problem): Who or what are you studying?

  • Example: “Adults ≥18 years with hypertension”
  • Include: age, gender, condition, setting, specific characteristics

I (Intervention/Exposure): What treatment, exposure, or phenomenon?

  • Example: “Dietary nitrate supplementation (beetroot juice)”
  • Be specific: dosage, frequency, duration, delivery method

C (Comparison): What is the alternative?

  • Example: “Placebo, no intervention, or usual care”
  • Can be “none” if no comparator exists

O (Outcomes): What effects matter?

  • Example: “Systolic and diastolic blood pressure (mmHg)”
  • Primary vs secondary outcomes; include measurement instruments

S (Study design): What types of evidence?

  • Example: “Randomized controlled trials (RCTs) only”
  • Or: “RCTs and quasi-RCTs”; sometimes left open to include observational studies

Example PICO Question

In adults with hypertension (P), does dietary nitrate supplementation (I) compared to placebo or usual care (C) reduce systolic and diastolic blood pressure (O) as measured in randomized controlled trials (S)?

Template: PICO Definition Table

| Component | Definition | Inclusion Criteria | Exclusion Criteria |
|-----------|------------|-------------------|-------------------|
| Population | Adults with hypertension | Age ≥18, diagnosed hypertension (BP ≥140/90 mmHg or medication) | Children, adolescents, pre-eclampsia |
| Intervention | Dietary nitrate | Beetroot juice, nitrate capsules, ≥4 weeks duration | Other supplements (vitamins, minerals) |
| Comparison | Control | Placebo, no intervention, usual care | Other supplements as active comparison |
| Outcomes | Blood pressure | Systolic and diastolic mmHg | Patient satisfaction only, quality of life |
| Study Design | RCTs only | Randomized controlled trials (parallel, crossover) | Observational, case reports, reviews |

Common mistake: PICO too broad (“effects of exercise on health”) or too narrow (“DBP response to 500ml beetroot juice in male athletes aged 25-30”). Find the sweet spot that captures the evidence base while remaining focused.

Source: The Cochrane Handbook emphasizes PICO for structuring review questions: https://www.cochrane.org/authors/handbooks-and-manuals/handbook/current/chapter-i


Phase 2: Register Your Protocol Before You Start

Perhaps the most critical distinction between systematic and traditional reviews is prospective protocol registration. Journals increasingly mandate this, and it protects against outcome reporting bias.

Where to Register

Health/Clinical Topics: PROSPERO (International Prospective Register of Systematic Reviews)

Non-Health Topics: OSF (Open Science Framework)

  • URL: https://osf.io/
  • Free, flexible; not discipline-specific
  • Pre-registration with timestamped protocol

Why register?

  1. Transparency: Your methods are public and dated
  2. Accountability: You’re committed to your planned methods (though amendments allowed)
  3. Credibility: Reviewers and readers trust your process
  4. Avoid duplication: Others can see your review is underway
  5. Mandatory for publication: Most journals require PROSPERO/OSF number

What to Include in Your Protocol

  • Title (structured if possible)
  • Authors and affiliations
  • Registration date
  • Expected completion date
  • PICO question and PICOS details
  • Eligibility criteria (inclusion/exclusion with rationale)
  • Information sources (databases, dates, additional sources)
  • Search strategy (example search strings for at least one database)
  • Study selection process (number of reviewers, conflict resolution)
  • Data extraction plan (form fields, dual extraction?)
  • Risk of bias assessment (tools to be used)
  • Synthesis methods (narrative vs meta-analysis; effect measures; models)
  • Sensitivity analyses planned
  • Certainty assessment (GRADE if applicable)
  • Funding sources and conflicts of interest

Template: Use PROSPERO’s structured template as a guide even for OSF registrations.

Common mistake: Registering protocol after screening begins. Some journals require registration before any screening; at minimum, it should precede final study selection.


Phase 3: Develop Inclusion and Exclusion Criteria

Your eligibility criteria operationalize your PICO question into concrete screening rules. These must be explicit, objective, and applied consistently by all reviewers.

What to Include in Your Criteria Table

  1. Population characteristics (age ranges, diagnosis, setting)
  2. Intervention specifics (type, dosage, duration, comparator)
  3. Outcome measures (required primary outcomes; acceptable secondary outcomes)
  4. Study designs (RCTs only? Include observational? Publication types?)
  5. Publication date limits (last 10 years? Since landmark study?)
  6. Language restrictions (English only? All languages?)
  7. Geographic limits (if relevant)
  8. Publication status (peer-reviewed only? Include grey literature?)

Example: Inclusion/Exclusion Criteria Table

| Criterion | Inclusion | Exclusion |
|-----------|-----------|-----------|
| Population | Adults ≥18 years with diagnosed hypertension (BP ≥140/90 mmHg or antihypertensive medication) | Children, adolescents (<18), gestational hypertension only, secondary hypertension |
| Intervention | Dietary nitrate supplementation (beetroot juice, nitrate capsules) ≥2 weeks duration | Multifactorial lifestyle interventions without isolated nitrate effect; other supplements (vitamins, minerals) |
| Comparison | Placebo, no intervention, usual care | Active supplements as sole comparator |
| Outcomes | Must report systolic and/or diastolic BP change | Patient satisfaction, quality of life only |
| Study Design | Randomized controlled trials (parallel, crossover) | Observational, quasi-experimental, case reports, reviews, protocols |
| Publication Date | 2010-2024 | Before 2010 |
| Language | English, Spanish (if translation available) | No English/Spanish abstract available |
| Publication Status | Peer-reviewed journal articles | Conference abstracts only, theses (unless no journal articles) |

Practical tip: Create this table collaboratively with your review team before screening begins. It becomes your screening handbook.

Source: University library guides emphasize clear, objective criteria to reduce reviewer subjectivity: https://guides.lib.unc.edu/systematic-reviews/
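
If it helps your team apply the rules consistently, the same criteria table can be captured in machine-readable form and reused by screening scripts or imported into screening software. The sketch below is illustrative only; the field names and helper function are hypothetical, not part of any standard tool.

```python
# Illustrative only: the eligibility criteria from the table above, encoded so that
# every reviewer (or a screening script) works from the same explicit rules.
ELIGIBILITY = {
    "population": {
        "include": ["adults >=18 years", "diagnosed hypertension (BP >=140/90 mmHg or medication)"],
        "exclude": ["children/adolescents (<18)", "gestational hypertension only", "secondary hypertension"],
    },
    "intervention": {
        "include": ["dietary nitrate (beetroot juice, nitrate capsules)", "duration >=2 weeks"],
        "exclude": ["multifactorial lifestyle interventions", "other supplements (vitamins, minerals)"],
    },
    "comparison": {
        "include": ["placebo", "no intervention", "usual care"],
        "exclude": ["active supplements as sole comparator"],
    },
    "study_design": {
        "include": ["randomized controlled trial (parallel or crossover)"],
        "exclude": ["observational", "case report", "review", "protocol"],
    },
    "publication": {"years": (2010, 2024), "languages": ["en", "es"]},
}

def screening_checklist(record_id: str) -> dict:
    """Return a blank per-record checklist for a reviewer to fill in during screening."""
    return {"record_id": record_id, **{domain: None for domain in ELIGIBILITY}}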


Phase 4: Build a Comprehensive Search Strategy

This is where many reviews fall short. The average systematic review searches only 3-4 databases, but standards recommend 6+ to minimize bias. Combined with grey literature and thoughtful keyword selection, your search strategy determines whether you’ll miss critical evidence.

4.1 Choose Your Databases

Minimum 6 recommended:

  1. PubMed/MEDLINE (biomedical, life sciences)
  2. Embase (biomedical, pharmacology, European literature)
    • Elsevier; often institutional subscription
    • Emtree controlled vocabulary; covers more European journals than MEDLINE
  3. Cochrane Central Register of Controlled Trials (CENTRAL)
    • Free via Cochrane Library
    • Gold standard for RCTs
  4. CINAHL (nursing, allied health)
    • EBSCO; important for non-medical healthcare
  5. Web of Science or Scopus (multidisciplinary)
    • Citation tracking, broader coverage beyond biomedicine
  6. PsycINFO or ERIC (psychology or education, respectively)
    • Discipline-specific coverage

Additional as needed: Business Source Complete, JSTOR, Google Scholar (as supplemental, not primary), ClinicalTrials.gov for unpublished trials.

4.2 Develop Search Strings Using Controlled Vocabulary

Why controlled vocabulary matters: Databases index articles using standardized subject headings (MeSH in PubMed, Emtree in Embase). Searching only keywords misses articles where your concept uses different terminology.

Example Topic: Dietary nitrate for hypertension

PubMed example with MeSH:

(hypertension[MeSH Terms] OR "high blood pressure"[Title/Abstract] OR "elevated blood pressure"[Title/Abstract])
AND (dietary nitrate[MeSH Terms] OR "beetroot juice"[Title/Abstract] OR "nitrate supplementation"[Title/Abstract])
AND (randomized controlled trial[Publication Type] OR randomized[Title/Abstract] OR placebo[Title/Abstract])
Filters: English, 2010-2024, Humans

Key techniques:

  • Use OR within concepts (synonyms, spelling variations)
  • Use AND between concepts
  • Truncation (*) captures word variants: "randomiz*" finds randomize, randomized, randomization
  • Wildcards (?) for single character: “behavio?r” finds behavior/behaviour
  • Phrase searching with quotes: “blood pressure”
  • Field tags: [MeSH], [Title/Abstract], [Publication Type]
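
These techniques can also be scripted so that the search is easy to rerun and the record counts are logged for the PRISMA flow diagram. A minimal sketch using Biopython's Entrez wrapper (assumed to be installed; the email address is a placeholder and the query mirrors the PubMed string above):

```python
from Bio import Entrez

Entrez.email = "you@example.org"  # NCBI requires a contact email for E-utilities

# Build the query concept by concept: OR within a concept, AND between concepts.
population = '(hypertension[MeSH Terms] OR "high blood pressure"[Title/Abstract])'
intervention = '("beetroot juice"[Title/Abstract] OR "nitrate supplementation"[Title/Abstract])'
design = '(randomized controlled trial[Publication Type] OR randomi*[Title/Abstract])'
query = f"{population} AND {intervention} AND {design}"

# retmax=0 returns only the hit count; date limits correspond to the 2010-2024 filter.
handle = Entrez.esearch(db="pubmed", term=query, retmax=0,
                        mindate="2010", maxdate="2024", datetype="pdat")
result = Entrez.read(handle)
handle.close()
print(f"Records identified in PubMed: {result['Count']}")  # log this for the flow diagram
```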

4.3 Document PRISMA-S Extension Requirements

The PRISMA-S extension specifies 16 items for transparent search reporting (https://www.sciencedirect.com/science/article/pii/S0040162524006310). Ensure you report:

  • All databases with exact search dates
  • Full electronic search strategy for at least one database (copy the entire search string)
  • Any limits/filters applied (language, date, study type)
  • Grey literature search methods (ClinicalTrials.gov, OpenGrey, ProQuest)
  • Additional sources (handsearching reference lists, contacting experts)

Common mistake: Using only keywords without subject headings; reporting incomplete search strings; omitting grey literature searches.


Phase 5: Execute the Search and Track Results with PRISMA Flow Diagram

Once your search strategy is finalized, run it across all databases. This produces a massive set of citations that must be managed carefully.

Step 5.1: Import and Deduplicate

  • Export results from each database (RIS, BibTeX, or text format)
  • Import into reference manager (Zotero, EndNote, Mendeley)
  • Use deduplication function (but review duplicates manually—false matches occur)
  • Export deduplicated set to systematic review software
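
If you prefer to script the deduplication pass (or to double-check your reference manager), a minimal sketch with pandas is shown below. The file and column names are assumptions; adjust them to whatever your exports contain, and always spot-check the removed duplicates.

```python
import pandas as pd

# Merged citation export (assumed columns: title, year, doi).
records = pd.read_csv("all_database_exports.csv")

records["doi_norm"] = records["doi"].str.lower().str.strip()
records["title_norm"] = (records["title"].str.lower()
                         .str.replace(r"[^a-z0-9 ]", "", regex=True).str.strip())

before = len(records)

# 1) Drop exact DOI duplicates, only among records that actually have a DOI.
has_doi = records["doi_norm"].notna() & (records["doi_norm"] != "")
doi_dupes = records.loc[has_doi].duplicated(subset="doi_norm", keep="first")
records = records.drop(index=doi_dupes[doi_dupes].index)

# 2) Drop near-duplicates by normalized title + publication year.
records = records.drop_duplicates(subset=["title_norm", "year"], keep="first")

print(f"{before} records before deduplication, {len(records)} after")  # PRISMA flow numbers
records.to_csv("records_for_screening.csv", index=False)
```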

Step 5.2: PRISMA Flow Diagram Template

The PRISMA flow diagram is mandatory reporting. Track these numbers throughout screening:

Records identified through database searching (n = )
Additional records identified through other sources (n = ) [handsearching, grey literature, etc.]
⬇
Total records before deduplication (n = )
⬇
Records after duplicates removed (n = )
⬇
Records screened (title/abstract) (n = )
Records excluded (n = ) with reasons documented
⬇
Full-text articles assessed for eligibility (n = )
Full-text articles excluded, with reasons (n = ) [provide reasons in table or text]
⬇
Studies included in qualitative synthesis (n = )
Studies included in quantitative synthesis (meta-analysis) (n = )

Documentation: Maintain a screening log with reasons for exclusion at full-text stage. Journal reviewers will expect this.
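
A screening log kept in a simple spreadsheet also lets you tally the flow-diagram numbers automatically. The sketch below assumes a hypothetical CSV with record_id, stage, decision, and exclusion_reason columns; it is illustrative, not a required format.

```python
import pandas as pd

# Hypothetical screening log: one row per record per screening stage.
log = pd.read_csv("screening_log.csv")  # columns: record_id, stage, decision, exclusion_reason

for stage in ["title_abstract", "full_text"]:
    stage_rows = log[log["stage"] == stage]
    n_excluded = (stage_rows["decision"] == "exclude").sum()
    print(f"{stage}: {len(stage_rows)} screened, {n_excluded} excluded")
    # PRISMA requires exclusion reasons at the full-text stage; tally them for the diagram.
    if stage == "full_text":
        reasons = stage_rows.loc[stage_rows["decision"] == "exclude", "exclusion_reason"]
        print(reasons.value_counts().to_string())
```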

Common mistake: Failing to document exclusion reasons (PRISMA requirement); not reporting grey literature sources in flow diagram.


Phase 6: Study Selection Process

Screening titles and abstracts determines initial eligibility, but this must be done in duplicate by independent reviewers.

Dual Independent Screening

  1. Two reviewers screen each record independently
  2. Use systematic review software (Covidence, Rayyan, JBI Sumari) to mask each other’s decisions
  3. Resolve conflicts through discussion; if unresolved, third reviewer arbitrates
  4. Record inter-rater reliability (Cohen’s Kappa; >0.6 acceptable)

Inclusion/exclusion decision rules

  • Definitive inclusions: Clearly meets all criteria → include
  • Definitive exclusions: Clearly violates any criterion → exclude
  • Uncertain: Requires full-text review → proceed to full-text
  • Conflict: Reviewers disagree → discuss, then arbitrate

Common mistake: Relying on single-reviewer screening, which has been associated with roughly 15-20% higher error rates; not calculating Kappa to assess agreement.
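
Most screening platforms report Kappa for you, but it is easy to verify. A minimal sketch using scikit-learn (assumed installed); the two decision lists are made-up examples standing in for each reviewer's exported title/abstract decisions on the same ordered set of records:

```python
from sklearn.metrics import cohen_kappa_score

# Illustrative decisions from two independent reviewers on the same six records.
reviewer_a = ["include", "exclude", "exclude", "include", "exclude", "include"]
reviewer_b = ["include", "exclude", "include", "include", "exclude", "include"]

kappa = cohen_kappa_score(reviewer_a, reviewer_b)
print(f"Cohen's kappa = {kappa:.2f}")  # aim for > 0.6 before proceeding to full-text screening
```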


Phase 7: Data Extraction

Once eligible studies are identified, extract data systematically using a structured form. Do this in duplicate—two reviewers extract independently, then compare and reconcile differences.

Essential Data Extraction Fields

| Category | Fields |
|----------|--------|
| Study identification | Author, year, title, journal, country |
| Study design | RCT, cohort, case-control; specific design details |
| Participants | Sample size, age, gender, diagnostic criteria, setting |
| Intervention | Description, duration, dosage, delivery, adherence |
| Comparison | Placebo, usual care, alternative intervention |
| Outcomes | Primary & secondary outcomes with definitions, measurement tools, time points |
| Results | Means, SDs, n for continuous; events/n for dichotomous; effect estimates if provided |
| Funding | Source, conflicts of interest declared |
| Additional | Follow-up duration, attrition rates, subgroup analyses |

Template: Data Extraction Form

Download a customizable Excel/CSV template with built-in data validation (see our resource library). Include separate tabs for:

  • Study characteristics table (to be included in report)
  • Detailed outcomes data (for meta-analysis)
  • Risk of bias assessment (next phase)
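
If you would rather generate your own blank workbook, here is a minimal sketch with pandas (sheet and column names are illustrative assumptions, not a prescribed schema; writing .xlsx requires openpyxl to be installed):

```python
import pandas as pd

# Empty frames define the extraction fields; reviewers fill them in during extraction.
study_characteristics = pd.DataFrame(columns=[
    "study_id", "author", "year", "country", "design", "n_randomized",
    "population", "intervention", "comparison", "followup_weeks"])

outcomes = pd.DataFrame(columns=[
    "study_id", "outcome", "timepoint", "mean_intervention", "sd_intervention",
    "n_intervention", "mean_control", "sd_control", "n_control"])

risk_of_bias = pd.DataFrame(columns=["study_id", "tool", "domain", "judgement", "support"])

with pd.ExcelWriter("data_extraction_form.xlsx") as writer:
    study_characteristics.to_excel(writer, sheet_name="Study characteristics", index=False)
    outcomes.to_excel(writer, sheet_name="Outcomes", index=False)
    risk_of_bias.to_excel(writer, sheet_name="Risk of bias", index=False)
```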

Common mistake: Single-reviewer extraction (increases errors); failing to contact authors for missing data (document all attempts).


Phase 8: Quality Assessment (Critical Appraisal)

Not all studies are equally reliable. Quality assessment evaluates methodological rigor to determine how much confidence you can place in each study’s findings. This directly influences your synthesis and certainty ratings.

8.1 Choose the Right Tool

| Study Design | Recommended Tool | Items |
|--------------|------------------|-------|
| Systematic reviews of RCTs/observational studies | AMSTAR 2 | 16 |
| Randomized trials | RoB 2 (Cochrane) | 5 domains × signaling questions |
| Non-randomized studies | ROBINS-I | 7 domains |
| Diagnostic accuracy studies | QUADAS-2 | 4 domains |
| Qualitative studies | CASP qualitative | 10 |
| Mixed methods | MMAT | 21 (5 categories × 4 study designs) |

AMSTAR 2 (A MeaSurement Tool to Assess Systematic Reviews) is the most widely used tool for appraising systematic reviews themselves (for example, when the studies you include are reviews, as in an umbrella review). It evaluates:

  1. Research question with PICO
  2. Protocol registered before review commencement
  3. Literature search comprehensive (≥3 sources + multiple strategies)
  4. Study selection in duplicate
  5. Data extraction in duplicate
  6. Excluded studies listed with reasons
  7. Included studies described adequately
  8. Risk of bias assessed critically (RoB tool used)
  9. Risk of bias assessment used appropriately in synthesis
  10. Appropriate statistical methods for meta-analysis
  11. Effect of risk of bias on meta-analysis assessed (sensitivity)
  12. Publication bias assessed
  13. Heterogeneity explained/explored
  14. Effect sizes and precision reported
  15. Interpretation discusses bias, heterogeneity, strengths/limitations
  16. Competing interests declared

Scoring: Yes=1, Partially=0.5, No=0 (some items N/A=0.5). A commonly used total-score interpretation:

  • High: 15-16
  • Medium: 11-14
  • Low: 8-10.5
  • Critically low: ≤7.5

Note that the AMSTAR 2 developers discourage relying on a single summed score; the official guidance rates overall confidence (high, moderate, low, critically low) according to weaknesses in critical domains, so report the item-level judgements alongside your overall rating.

Source: Official AMSTAR 2 checklist and documentation at https://amstar.ca/Amstar_Checklist.php

8.2 Dual Independent Assessment

Like extraction, perform quality assessment in duplicate:

  1. Two reviewers score each study independently using the tool
  2. Calculate Kappa for inter-rater agreement (>0.6 target)
  3. Resolve discrepancies through discussion or third reviewer

Common mistake: Using AMSTAR 2 without recognizing that the meta-analysis-specific items (appropriate statistical methods, sensitivity of results to risk of bias, publication bias) apply only if a meta-analysis was conducted; mark them N/A otherwise.


Phase 9: Data Synthesis

Synthesis integrates findings from included studies. Choose the appropriate method based on your outcome data homogeneity and research question.

9.1 When Meta-Analysis Is Appropriate

Meta-analysis statistically pools effect sizes across studies. Prerequisites:

  • Studies sufficiently similar (PICO, design, outcomes)
  • Outcome data available (means/SDs or dichotomous 2×2 tables)
  • Typically 3+ studies (some tools work with 2)

Statistical considerations:

Effect Size Metrics:

| Outcome Type | Effect Size | Interpretation |
|--------------|-------------|----------------|
| Continuous (same scale) | Mean Difference (MD) | Raw difference in units |
| Continuous (different scales) | Standardized Mean Difference (SMD, Cohen's d) | Pooled SD units; 0.2=small, 0.5=medium, 0.8=large |
| Dichotomous | Odds Ratio (OR), Risk Ratio (RR) | OR<1 or RR<1 favors intervention |
| Time-to-event | Hazard Ratio (HR) | Instantaneous risk over time |

Heterogeneity is the nemesis of meta-analysis. You must assess:

  • I² statistic: Percentage of variability due to between-study heterogeneity rather than chance
    • 0-40%: might not be important
    • 30-60%: moderate heterogeneity
    • 50-90%: substantial
    • 75-100%: considerable
  • Q-test: P < 0.10 suggests heterogeneity
  • Tau²: between-study variance estimate

Model selection:

  • Fixed-effect model: assumes one true effect size (homogeneous studies)
  • Random-effects model: assumes distribution of true effect sizes (heterogeneous studies); more conservative
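
To make the statistics above concrete, here is a minimal DerSimonian-Laird random-effects sketch using numpy. The effect sizes and standard errors are invented purely for illustration; real reviews typically use dedicated software such as RevMan, Stata, or R's metafor package.

```python
import numpy as np

# Invented per-study mean differences (mmHg) and standard errors, for illustration only.
y  = np.array([-4.2, -2.8, -5.1, -1.0, -3.6])
se = np.array([ 1.1,  1.5,  2.0,  1.3,  0.9])
v  = se ** 2

# Fixed-effect weights and pooled estimate (needed to compute Cochran's Q).
w_fixed = 1 / v
pooled_fixed = np.sum(w_fixed * y) / np.sum(w_fixed)

# Heterogeneity: Q, I^2, and tau^2 via the DerSimonian-Laird estimator.
Q  = np.sum(w_fixed * (y - pooled_fixed) ** 2)
df = len(y) - 1
C  = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - df) / C)
I2   = max(0.0, (Q - df) / Q) * 100

# Random-effects weights incorporate tau^2, giving smaller studies relatively more weight.
w_re = 1 / (v + tau2)
pooled_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
ci = (pooled_re - 1.96 * se_re, pooled_re + 1.96 * se_re)

print(f"Q = {Q:.2f} (df = {df}), I^2 = {I2:.0f}%, tau^2 = {tau2:.2f}")
print(f"Random-effects MD = {pooled_re:.2f} mmHg (95% CI {ci[0]:.2f} to {ci[1]:.2f})")
```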

Action when I² high:

  1. Explore prespecified subgroups (e.g., age groups, intervention dosage)
  2. Meta-regression if ≥10 studies
  3. Sensitivity analysis excluding high-risk-of-bias studies
  4. Consider narrative synthesis if heterogeneity too high

Forest plot: Visual representation showing each study’s effect size and confidence interval, plus pooled estimate. Square size reflects weight; diamond shows pooled CI.
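
Dedicated software produces publication-quality forest plots, but a bare-bones version can be sketched with matplotlib, as below (same invented data as the pooling sketch above; this simple version does not scale marker size by study weight).

```python
import matplotlib.pyplot as plt
import numpy as np

# Illustrative per-study mean differences (mmHg) and standard errors.
studies = ["Study A", "Study B", "Study C", "Study D", "Study E"]
y  = np.array([-4.2, -2.8, -5.1, -1.0, -3.6])
se = np.array([ 1.1,  1.5,  2.0,  1.3,  0.9])
pos = np.arange(len(studies))[::-1]  # plot the first study at the top

plt.errorbar(y, pos, xerr=1.96 * se, fmt="s", color="black", capsize=3)  # point estimate + 95% CI
plt.axvline(0, linestyle="--", color="grey")  # line of no effect
plt.yticks(pos, studies)
plt.xlabel("Mean difference in systolic BP (mmHg)")
plt.title("Forest plot (illustrative)")
plt.tight_layout()
plt.show()
```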

Source: Cochrane Handbook statistical methods chapter at https://www.cochrane.org/authors/handbooks-and-manuals/handbook/current/chapter-10

9.2 When Narrative or Thematic Synthesis Is Appropriate

If studies are too heterogeneous for statistical pooling, organize findings thematically:

  • Group studies by conceptual theme (not by individual study!)
  • Compare/contrast findings across studies within each theme
  • Identify patterns, contradictions, gaps
  • Use tables to present study characteristics and results

Example structure: Instead of “Study 1 found X; Study 2 found Y; Study 3 found Z,” write: “Theme 1: Adherence challenges. Three studies reported poor adherence rates (15-30%)…” Then synthesize patterns across those studies.


Phase 10: Writing and Reporting According to PRISMA 2020

The PRISMA 2020 statement provides a 27-item checklist for transparent reporting (BMJ 2021) https://www.prisma-statement.org/, with an extensive explanation and elaboration article (PMC) https://pmc.ncbi.nlm.nih.gov/articles/PMC8007028/. Journals increasingly require PRISMA adherence.

PRISMA 2020 Structure

Title: Identify as systematic review (include “systematic review” or “systematic literature review”)

Abstract: Structured (background, objectives, methods, results, conclusions, registration)

Introduction:

  • Rationale: Why is this review needed?
  • Objectives: PICO question clearly stated

Methods:

  • Protocol registration (PROSPERO/OSF number)
  • Eligibility criteria (PICOS, inclusion/exclusion)
  • Information sources (databases, dates, additional sources)
  • Search strategy (full search string for at least one database; report PRISMA-S extension if available)
  • Study selection process (number of reviewers, conflict resolution)
  • Data collection process (dual extraction?)
  • Data items extracted
  • Risk of bias assessment (tools, dual assessment?)
  • Effect measures (effect sizes, models)
  • Synthesis methods (how synthesized)
  • Risk of bias across studies (addressed?)
  • Certainty assessment (GRADE if used)

Results:

  • Study selection (PRISMA flow diagram + narrative numbers)
  • Study characteristics (table summarizing included studies)
  • Risk of bias within studies (graph and table)
  • Results of individual studies (effect size table)
  • Results of syntheses (pooled estimates, heterogeneity measures, forest plot)
  • Risk of bias across studies (funnel plot if ≥10 studies, Egger’s test)
  • Certainty of evidence (GRADE summary of findings table)

Discussion:

  • Summary of evidence in context of existing knowledge
  • Limitations (at review level and individual studies)
  • Conclusions (aligned with objectives, evidence certainty)

Other: Registration, protocol availability, funding, conflicts of interest

Source: PRISMA 2020 checklist and explanation: https://www.prisma-statement.org/prisma-2020-checklist


15 Common Mistakes to Avoid

Based on analysis of published systematic reviews, here are high-impact errors that compromise quality:

Methodological Errors

  1. ❌ Unregistered Protocol
    • Problem: Starting review without published protocol (PROSPERO/OSF)
    • Impact: Questions about methodological rigor; many journals now require registration number
    • Fix: Register before screening begins
  2. ❌ Inadequate Search Strategy
    • Problem: <6 databases, missing controlled vocabulary, no grey literature, incomplete search strings
    • Impact: Selection bias from missing relevant studies
    • Statistic: Average review searches 3-4 databases vs recommended 6+
    • Fix: Include PubMed, Embase, Cochrane, Web of Science, Scopus, 1-2 discipline-specific; search grey literature (ClinicalTrials.gov, OpenGrey)
  3. ❌ Single Reviewer Screening/Extraction
    • Problem: One person makes all inclusion/exclusion decisions and extracts data
    • Impact: 15-20% higher error rate
    • Fix: Dual independent reviewers for screening and extraction; calculate Kappa
  4. ❌ No Quality Assessment
    • Problem: Including all studies without evaluating methodological rigor
    • Impact: Low-quality studies distort findings; misleading conclusions
    • Fix: Use AMSTAR 2, RoB 2, or other appropriate tool; include quality in synthesis
  5. ❌ Post-hoc Eligibility Criteria
    • Problem: Changing inclusion/exclusion after seeing search results
    • Impact: Introduces bias, invalidates review
    • Fix: Define criteria in protocol and adhere (document legitimate exceptions)

Statistical Errors

  6. ❌ Ignoring Heterogeneity
    • Problem: Pooling studies without I² or Q-test
    • Impact: Combining dissimilar studies produces meaningless average
    • Fix: Always assess heterogeneity; if I² >50-60%, explore subgroups or use random-effects model
  7. ❌ Funnel Plot with <10 Studies
    • Problem: Creating funnel plots to assess publication bias with too few studies
    • Impact: Funnel plots unreliable with n<10 (Cochrane Handbook recommendation)
    • Fix: Acknowledge limitation or use alternative methods (a small Egger-style sketch follows this list)
  8. ❌ Incorrect Effect Size Mixing
    • Problem: Using OR in some studies, RR in others without conversion or rationalization
    • Impact: Pooled estimate invalid
    • Fix: Convert to a consistent effect size; justify the choice
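
Regarding mistake 7: when you do have enough studies (roughly 10 or more), an Egger-style regression test for funnel-plot asymmetry can be sketched as follows. The data are made up for illustration, and the code assumes scipy ≥ 1.6 (for intercept_stderr on the regression result).

```python
import numpy as np
from scipy import stats

# Illustrative effect estimates and standard errors for ten hypothetical studies.
y  = np.array([-4.2, -2.8, -5.1, -1.0, -3.6, -2.2, -4.8, -0.6, -3.1, -2.9])
se = np.array([ 1.1,  1.5,  2.0,  1.3,  0.9,  1.8,  2.2,  1.6,  1.0,  1.4])

# Egger's approach: regress the standardized effect (effect / SE) on precision (1 / SE);
# an intercept far from zero suggests small-study effects / funnel-plot asymmetry.
res = stats.linregress(1 / se, y / se)
t_stat = res.intercept / res.intercept_stderr
p_intercept = 2 * stats.t.sf(abs(t_stat), df=len(y) - 2)
print(f"Egger intercept = {res.intercept:.2f}, p = {p_intercept:.3f}")
```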

Reporting Errors

  9. ❌ Missing PRISMA Flow Diagram
    • Problem: No flow diagram showing study selection process
    • Impact: Fails journal and PRISMA requirements; readers can’t assess completeness
    • Fix: Create flow diagram with all screening stages and exclusion reasons
  10. ❌ Mosaic Plagiarism (Patchwriting)
    • Problem: Copying source text and changing a few words without attribution
    • Impact: Academic misconduct even if unintentional
    • Fix: Read source, close it, write from memory in own words; always cite paraphrased ideas
  11. ❌ Sequential Summary Instead of Synthesis
    • Problem: “Study A found X. Study B found Y. Study C found Z…” (catalog style)
    • Impact: No analytical interpretation; low-value content
    • Fix: Group thematically; compare/contrast; identify patterns and contradictions across studies
  12. ❌ Selective Outcome Reporting
    • Problem: Only including outcomes with statistically significant results
    • Impact: Publication bias at review level; distorted conclusions
    • Fix: Report all pre-specified outcomes; include non-significant results

Data Management Errors

  13. ❌ Inadequate Documentation
    • Problem: Cannot reproduce review or justify inclusion/exclusion decisions
    • Impact: Questions validity; impossible to update as living review
    • Fix: Maintain research log: screening decisions, conflicts, issues, resolutions with dates
  14. ❌ Not Updating Search Before Submission
    • Problem: Search completed 6+ months before submission
    • Impact: Missing recent studies; review outdated at publication
    • Fix: Run updated search 1-2 weeks before submission; document new studies
  15. ❌ Ignoring Contradictory Evidence
    • Problem: Only highlighting studies supporting your hypothesis
    • Impact: Biased interpretation, misleading conclusions
    • Fix: Discuss contradictory findings; explore reasons (subgroup differences, study quality)


Conclusion: Your Next Steps

Writing a systematic literature review is a substantial undertaking—typically 6-12 months from protocol to submission. But the rigor pays off in credibility, reproducibility, and academic contribution.

Here’s your action plan:

  1. Week 1-2: Finalize PICO question and eligibility criteria with your team. Draft protocol.
  2. Week 3-4: Register protocol on PROSPERO (health) or OSF (other fields). Develop comprehensive search strings with librarian consultation.
  3. Week 5: Execute searches across 6+ databases; import and deduplicate citations.
  4. Week 6-7: Title/abstract screening in duplicate using systematic review software; document PRISMA flow.
  5. Week 8-9: Full-text retrieval and screening; document exclusion reasons; update PRISMA flow.
  6. Week 10-12: Data extraction in duplicate; reconcile discrepancies; contact authors for missing data.
  7. Week 13-14: Quality assessment with AMSTAR 2 or appropriate tool; calculate inter-rater reliability.
  8. Week 15-18: Data synthesis (thematic or meta-analysis); create tables, forest plots, GRADE profiles.
  9. Week 19-20: Write manuscript following PRISMA 2020 structure; ensure all 27 checklist items addressed.
  10. Week 21-22: Internal peer review; incorporate feedback; finalize references; submit to journal.

Need expert assistance? QualityCustomEssays.com provides systematic review support including protocol development, search strategy design, data extraction, quality assessment, statistical analysis, and full writing services. Our team includes PhD-level researchers trained in Cochrane methodology. Contact us for a consultation or order a custom systematic review.


Quick Reference: PRISMA 2020 Checklist

Use this abbreviated checklist during manuscript preparation:

  • Title identifies as systematic review
  • Structured abstract includes registration
  • Introduction states PICO objectives
  • Methods section includes protocol registration number
  • Eligibility criteria (PICOS) clearly defined
  • All databases with dates searched reported
  • Full search strategy for ≥1 database included
  • Study selection process described (duplicate reviewers?)
  • Data extraction in duplicate stated
  • Risk of bias tool named and used appropriately
  • Synthesis methods (narrative/meta-analysis) specified
  • PRISMA flow diagram provided
  • Study characteristics table included
  • Risk of bias assessment presented
  • Results of individual studies (effect sizes)
  • Syntheses results with heterogeneity (I², τ²)
  • Publication bias assessed (funnel, Egger) if ≥10 studies
  • GRADE/Certainty of evidence reported
  • Limitations of evidence and review discussed
  • Conclusions justified by results
