Xu O, De Lima B, DeVane K, et al. Peer-to-Peer Case Review as a Strategy to Improve Sepsis Education in Graduate Medical Education. SMRJ. 2026;10(3):51-58. doi:10.51894/001c.159860

Abstract

Introduction

Local quality audits in our academic emergency department (ED) identified poor compliance with bundle expectations in adult cases of severe sepsis and septic shock, suggesting an educational gap. We therefore evaluated whether a facilitated, peer-to-peer case review process improved emergency medicine (EM) resident sepsis knowledge and diagnostic confidence.

Methods

We conducted a single-site, pre-test/post-test educational QI evaluation among senior EM residents. Using routinely audited severe sepsis and septic shock cases that did not meet bundle expectations, residents completed brief, asynchronous peer case reviews with a standardized checklist and provided structured written feedback to the original care teams. Educational impact was evaluated at Kirkpatrick Level 2 (learning) using paired pre–post assessments: a 12-item single-best-answer knowledge test and a 4-point self-reported diagnostic confidence survey. Paired pre/post knowledge scores were compared with the Wilcoxon signed rank test; confidence was summarized descriptively. The institutional review board determined that this project was not human subjects research.

Results

Mean knowledge accuracy improved from 69.7% (8.36/12) to 84.5% (10.14/12), an increase of 14.9 percentage points (p = 0.003). The proportion of residents scoring in the 76–100% band increased by 29 percentage points, and the proportion scoring at least 90% increased by 53 percentage points. Confidence increased for sepsis (71.4% to 78.6%), remained constant for severe sepsis (78.6%), and decreased for septic shock (92.9% to 85.7%).

Conclusion

A resident-led, asynchronous peer case review process was associated with significant improvement in sepsis knowledge, a modest gain in diagnostic confidence for sepsis, and stable confidence for severe sepsis. This low-resource approach is feasible for unit-level implementation and may be adaptable to other time-sensitive conditions with complex bundle requirements.

INTRODUCTION

Early recognition and timely management of sepsis remain challenging in the emergency department (ED), particularly for trainees who must interpret evolving diagnostic criteria while making rapid clinical decisions. Despite ongoing educational efforts, residents often struggle to identify subtle or atypical presentations and to apply guideline-concordant therapies under significant time pressure. These challenges can delay the early recognition and initiation of evidence-based treatments that are essential to improving patient outcomes.1–5

In the United States, hospitals participate in the Centers for Medicare & Medicaid Services (CMS) Hospital Inpatient Quality Reporting Program, which includes the SEP-1 sepsis measure used for national benchmarking and public reporting.6–8 Deviations from these bundle elements are frequently identified in local quality audits, suggesting targeted educational gaps that affect both care compliance and resident learning needs.

Local Problem

Using internal quality improvement (QI) audit abstraction with CMS-aligned criteria, we found frequent deviations from expected sepsis bundle care in our academic ED, affecting nine of 22 severe sepsis or septic shock cases (40.9%). The department uses a ≥ 75% compliance threshold as an institutional quality target for CMS-aligned sepsis bundle elements, a locally defined performance goal informed by CMS reporting expectations and historical departmental performance. These deviations appeared to reflect, in part, gaps in sepsis recognition and clinical knowledge. Because emergency medicine (EM) residents provide much of the frontline ED care and commonly initiate early sepsis evaluation and treatment, we targeted this group for an educational intervention reinforcing diagnostic definitions, bundle elements, and time-sensitive management.1,7

Aim

The aim of this project was to improve senior emergency medicine residents’ sepsis knowledge and diagnostic confidence through a facilitated peer-to-peer case review of audit-identified sepsis cases, measured using pre-post assessments.

METHODS

Study Design

This project was conducted as an educational QI evaluation using a single-site, pre-post design to assess changes in resident sepsis knowledge and self-reported diagnostic confidence following a peer-to-peer case review intervention.

Setting and Context

This study was conducted in a single academic, adult-only ED with approximately 49,000 annual patient visits. The ED is staffed by 68 full-time and 20 adjunct faculty and serves a three-year EM residency program with approximately 33 residents across post-graduate year (PGY)-1 through PGY-3 levels. Residents practice under attending supervision with progressive autonomy across training years. The study site uses the Epic electronic health record (Epic Systems, Verona, WI) for all clinical documentation and order entry. The department maintains an established sepsis quality audit process, supported by a CMS abstractor, to identify cases with deviations from CMS-aligned bundle expectations.[1]

Participants

Fifteen of 21 senior residents participated during the study period; 14 completed paired pre- and post-intervention assessments (93.3%). Eligible participants included 10 PGY-3 residents and 5 PGY-2 residents. Because the intervention was designed for all senior trainees and the overall sample size was small, PGY-2 and PGY-3 participants were analyzed together as a single cohort, and no subgroup analysis was performed. Residents who were not eligible (n = 6) were on rotations that precluded QI review activities or were on vacation during the intervention window. All participants had received the same standard institutional sepsis training provided during residency orientation and ongoing clinical practice, and none had completed any additional structured or formal sepsis-focused education in the months preceding the intervention, resulting in generally comparable baseline exposure across PGY-2 and PGY-3 residents.

Intervention

Using internal QI audits, we identified adult ED cases involving severe sepsis or septic shock with missed or delayed recognition and/or treatment relative to CMS-aligned bundle expectations. Cases were drawn from routinely audited encounters (rather than resident-selected) and assigned to senior residents sequentially on a rotational basis to distribute workload across participants. Between April and July 2024, resident reviewers completed structured, asynchronous written case reviews using a standardized checklist aligned with local CMS-based sepsis definitions and bundle components, provided in full in Appendix 1.1,7 The checklist addressed sepsis time-zero identification and key bundle elements (e.g., lactate measurement, blood cultures, antibiotic timing, fluid resuscitation, and reassessment and/or vasopressor initiation when indicated).1,7 Reviewers could consult faculty as needed, although faculty did not routinely review or moderate resident-generated feedback before dissemination. Each review required approximately 15 to 30 minutes to complete. Seventeen audited cases were reviewed, with each resident reviewing 1–2 cases. After completing each review, the resident reviewer provided structured written feedback to the original care team, highlighting diagnostic decision points, missed opportunities relative to bundle expectations, and concrete learning points. No additional sepsis training occurred during this time.

Measures

Knowledge assessment: A locally developed, 12-item, single-best-answer questionnaire was administered. Each item included four response options with one correct answer. Knowledge outcomes were calculated as raw scores (0-12 correct) and converted to percentages correct; percentages were used as the primary reporting metric throughout the manuscript to facilitate interpretability, with raw scores reported secondarily when describing score distributions. The instrument assessed five domains based on CMS bundle requirements: (1) systemic inflammatory response syndrome (SIRS) criteria; (2) differentiation between sepsis and severe sepsis; (3) diagnostic criteria and organ dysfunction thresholds for severe sepsis; (4) diagnostic criteria for septic shock; and (5) initial management priorities, including recommended fluid resuscitation volumes, empiric broad-spectrum antibiotic therapy, and organism coverage targets.1,7

Confidence survey: Resident self-reported confidence in diagnosing sepsis, severe sepsis, and septic shock was assessed using a 4-point Likert scale (1 = not at all confident to 4 = extremely confident). For analytic purposes, responses were dichotomized: ratings of 3 or 4 were classified as “positive” and ratings of 1 or 2 were classified as “negative.” Dichotomization was used to facilitate interpretation of clinically meaningful confidence versus lack thereof in the context of a small sample size, though we acknowledge that collapsing response categories reduces granularity.
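The dichotomization above can be sketched as follows. The ratings here are hypothetical, chosen only so that the pre/post proportions happen to match the reported sepsis-confidence values (71.4% and 78.6%); they are not the study data.

```python
# Illustrative sketch of the confidence dichotomization described above.
# Ratings use the 4-point scale (1 = not at all confident, 4 = extremely
# confident); these vectors are hypothetical, not actual study data.

def percent_positive(ratings):
    """Classify ratings of 3 or 4 as 'positive'; return percent positive."""
    positive = sum(1 for r in ratings if r >= 3)
    return round(100 * positive / len(ratings), 1)

# Hypothetical pre/post ratings for 14 residents on one diagnosis item.
pre  = [3, 2, 3, 4, 3, 2, 3, 3, 2, 4, 3, 2, 3, 3]   # 10/14 positive
post = [3, 3, 4, 4, 3, 3, 3, 3, 2, 4, 3, 3, 2, 2]   # 11/14 positive

print(percent_positive(pre), percent_positive(post))  # 71.4 78.6
```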

Validity review: Experts in EM and professionals with experience in survey methodology reviewed the instruments for face validity and content coverage during development.

Analysis

Knowledge outcomes were analyzed in R (version 4.1.3; R Core Team, 2022) using a Wilcoxon signed rank test comparing matched pre- and post-intervention raw scores from the same individuals. As an educational QI evaluation, analyses focused on learning outcomes rather than hypothesis testing for generalizable effects. Complete case analysis was used to handle missing data. Confidence outcomes were summarized descriptively as proportions before and after the intervention. No subgroup analyses were conducted by PGY. No adjustment for multiple comparisons was performed.
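For illustration, the paired comparison can be sketched with SciPy's implementation of the Wilcoxon signed rank test (the study's analysis was run in R). The 12-item scores below are hypothetical stand-ins for 14 paired assessments, not the study data.

```python
# Sketch of the paired knowledge-score comparison described above, using a
# Wilcoxon signed rank test on hypothetical 12-item scores for 14 residents.
import numpy as np
from scipy.stats import wilcoxon

pre  = np.array([7, 8, 9, 6, 8, 10, 9, 7, 8, 9, 11, 8, 9, 9])
post = np.array([10, 10, 11, 9, 10, 11, 10, 9, 10, 11, 12, 10, 10, 11])

# Paired, nonparametric comparison of matched pre/post scores.
stat, p = wilcoxon(pre, post)
print(f"mean pre = {pre.mean():.2f}, mean post = {post.mean():.2f}, p = {p:.4f}")
```

Because every hypothetical resident improved, the test rejects the null of no paired difference; with real data, ties and zero differences are handled by the test's zero_method and tie-correction defaults.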

Ethical Considerations

The Oregon Health and Science University Institutional Review Board determined that this educational quality improvement activity did not constitute human subjects research. Although designated as a quality improvement initiative, the project was conducted with the intent to disseminate findings for scholarly and educational purposes.

RESULTS

Fourteen residents completed both surveys. Overall knowledge improved from 69.7% correct pre-intervention to 84.5% post-intervention (p = 0.003; Figure 1). On the 12-item assessment, mean scores increased from 8.4 (SD 1.74) to 10.1 (SD 1.41), an average gain of 1.79 items. Post-intervention score distributions showed a higher concentration of scores in the upper accuracy ranges: the proportion of residents scoring in the 76–100% accuracy band increased by 29 percentage points, and the proportion scoring at least 90% increased by 53 percentage points. Item-level changes showed increases in questions assessing diagnostic criteria and organ dysfunction parameters for severe sepsis. Scores for the white blood cell (WBC) differential item related to SIRS criteria remained lower than other items at both time points.

Figure 1
Figure 1.Pre-Post Sepsis Confidence and Knowledge Scores. Percent positive self-reported confidence on diagnosing sepsis and percent correct (out of 12 items) on the sepsis knowledge assessment before and after the intervention.

Abbreviations: Conf, confidence; KQ, knowledge question; SIRS, systemic inflammatory response syndrome; WBC, white blood cell.

Self-reported confidence proportions before and after the intervention are presented in Figure 1. The proportion of residents reporting “positive” confidence increased from 71.4% to 78.6% for sepsis diagnosis and remained at 78.6% for severe sepsis. For septic shock, the proportion reporting “positive” confidence decreased from 92.9% to 85.7%.

DISCUSSION

In this single-site educational initiative, an asynchronous, resident-led peer case review process was associated with a substantial improvement in sepsis knowledge. The percentage correct increased from 69.7% to 84.5%, an absolute gain of 14.9 percentage points (p = 0.003). The shift toward higher scores suggests that the intervention improved both overall knowledge and mastery of core concepts for a subset of learners. This study employed a case review framework using QI audits as an educational strategy in graduate medical education, integrating established QI processes with clinical teaching and achieving a Kirkpatrick Level 2 (learning) outcome.

Some of the observed improvement in knowledge scores may also reflect a testing effect related to repeated exposure to similar assessment items, although the magnitude of change and item-level patterns suggest learning beyond simple test familiarity.

This intervention incorporated several established educational principles. The use of locally audited cases provided authentic, context-specific learning consistent with case-based learning approaches shown to improve engagement and knowledge acquisition.9–13 The standardized checklist clarified diagnostic and management expectations and functioned as a cognitive aid to reduce omission and ambiguity during review.14,15 Structured written feedback supported reflection on clinical decision-making and reinforced key learning points.16–19 Together, these elements reflect core principles of experiential learning and feedback-informed practice commonly applied in medical education.20 Future work is needed to improve knowledge related to WBC differential findings.

Self-reported confidence improved for diagnosing sepsis and remained constant for severe sepsis but declined modestly for septic shock. This modest decrease in septic shock confidence (from 92.9% to 85.7%) may reflect improved diagnostic calibration after reviewing complex or borderline cases, as exposure to nuanced presentations can heighten awareness of uncertainty rather than indicating diminished competence. Future iterations could intentionally oversample shock-focused cases and add explicit prompts to the review checklist regarding reassessment timing, dynamic response to fluids, and escalation decisions (e.g., vasopressor initiation when indicated).

The intervention was brief, required, conducted asynchronously, and integrated into an established audit pipeline, which minimized scheduling burden and additional resource needs while leveraging existing QI infrastructure.21,22 Rotational assignment distributed workload and reduced the potential for selective case choice. Notably, this type of intervention may have the potential to support a learning culture oriented towards improvement. Peer-to-peer review may help frame missed opportunities as shared learning rather than individual failure, which could promote psychological safety and encourage discussion of diagnostic uncertainty.23–26 These cultural effects were not directly measured and should be considered hypotheses for future study rather than demonstrated outcomes.

LIMITATIONS

This work is limited by its single-site design with a small sample size, limiting generalizability. It relied on proximal outcomes (knowledge and self-reported confidence) that may have been impacted by retesting effects and response bias in self-reported confidence measures. Although the magnitude of knowledge gains exceeded what is typically expected from test–retest effects in brief assessments, the absence of a control group means that retesting effects cannot be fully excluded. We did not assess downstream clinical outcomes (e.g., bundle adherence, time to antibiotics) or patient outcomes. The knowledge and confidence instruments were locally developed; although reviewed for face and content validity, they were not formally psychometrically validated. Longer-term durability of learning was not assessed.

Despite these limitations, the intervention provides a low-resource, scalable approach to addressing time-sensitive clinical care education. By combining audit-identified cases, a shared checklist, and structured feedback, peer case review may help align resident mental models with institutional expectations and reduce variability in early sepsis care. Similar approaches may be applicable to other ED priorities requiring rapid recognition and standardized initial management (e.g., stroke, acute coronary syndrome, and trauma).

CONCLUSIONS

Facilitated, resident peer case review of audit-identified adult ED sepsis cases was associated with significant improvements in sepsis knowledge among senior residents. This low-resource, asynchronous educational model appears feasible for unit-level implementation. Future work should evaluate whether this educational approach translates into improvements in objective clinical metrics such as sepsis bundle compliance, time-to-antibiotics, and other patient-centered outcomes.


ACKNOWLEDGMENTS

We thank the Oregon Health and Science University Department of Emergency Medicine. We are particularly grateful to the Education Section, Continuous Quality Improvement Program, and to our CMS abstractor, without whom this project would not have been possible. We also acknowledge the use of Microsoft Copilot to improve the clarity and conciseness of previously authored content. No new material was generated; all AI-assisted edits were reviewed and approved by the authors, who affirm the accuracy and integrity of the final manuscript.

Accepted: March 19, 2026 EDT

References

1. Evans L, Rhodes A, Alhazzani W, et al. Surviving sepsis campaign: international guidelines for management of sepsis and septic shock 2021. Crit Care Med. 2021;49(11):e1063-e1143. doi:10.1097/CCM.0000000000005337
2. Centers for Disease Control and Prevention. Hospital Sepsis Program Core Elements. US Department of Health and Human Services; 2025. Accessed December 20, 2025. https://www.cdc.gov/sepsis/hcp/core-elements/index.html
3. King J, Chenoweth CE, England PC, et al. Early Recognition and Initial Management of Sepsis in Adult Patients. Michigan Medicine, University of Michigan; 2023. Accessed December 20, 2025. https://www.ncbi.nlm.nih.gov/books/NBK598311/
4. National Institute of General Medical Sciences. Sepsis. National Institute of General Medical Sciences. Accessed December 20, 2025. https://www.nigms.nih.gov/education/fact-sheets/Pages/sepsis
5. De Waele JJ. Importance of timely and adequate source control in sepsis and septic shock. J Intensive Med. 2024;4(3):281-286. doi:10.1016/j.jointm.2024.01.002
6. Centers for Medicare & Medicaid Services. Hospital Inpatient Quality Reporting (IQR) Program. Centers for Medicare & Medicaid Services; 2025. Accessed December 20, 2025. https://www.cms.gov/medicare/quality/hospital-inpatient-quality-reporting-iqr-program
7. Centers for Medicare & Medicaid Services. BPCI Advanced Alternate Quality Measure Fact Sheet: Severe Sepsis and Septic Shock Management Bundle (MY4). Centers for Medicare & Medicaid Services. Accessed December 20, 2025. https://www.cms.gov/priorities/innovation/media/document/bpci-advanced-alt-fs-my4-sepsis
8. Centers for Medicare & Medicaid Services. Hospital Value-Based Purchasing (VBP) Program. Centers for Medicare & Medicaid Services. Accessed December 20, 2025. https://www.cms.gov/medicare/quality/hospital-value-based-purchasing-vbp-program
9. Cassara M, Schertzer K, Falk MJ, et al. Applying educational theory and best practices to solve common challenges of simulation-based procedural training in emergency medicine. AEM Educ Train. 2019;4(Suppl 1):S22-S39. doi:10.1002/aet2.10418
10. Sartania N, Sneddon S, Boyle JG, McQuarrie E, de Koning HP. Increasing collaborative discussion in case-based learning improves student engagement and knowledge acquisition. Med Sci Educ. 2022;32(5):1055-1064. doi:10.1007/s40670-022-01614-w
11. Varma B, Karuveettil V, Fernandez R, et al. Effectiveness of case-based learning in comparison to alternate learning methods on learning competencies and student satisfaction among healthcare professional students: A systematic review. J Educ Health Promot. 2025;14:76. doi:10.4103/jehp.jehp_510_24
12. McLean SF. Case-based learning and its application in medical and health-care fields: a review of worldwide literature. J Med Educ Curric Dev. 2016;3:JMECD.S20377. doi:10.4137/JMECD.S20377
13. Kaur G, Rehncy J, Kahal KS, et al. Case-based learning as an effective tool in teaching pharmacology to undergraduate medical students in a large group setting. J Med Educ Curric Dev. 2020;7:2382120520920640. doi:10.1177/2382120520920640
14. Xu O, Elman MR, DeVane K, Henderson K, Gonzalez M, Manella H. Psychiatric Care Checklist: A Novel Aid to Improve Psychiatric Care in the Emergency Department. J Acad Consult Liaison Psychiatry. 2026;67(1):4-13. doi:10.1016/j.jaclp.2025.08.013. PMID:40902729
15. Dubosh NM, Carney D, Fisher J, Tibbles CD. Implementation of an emergency department sign-out checklist improves transfer of information at shift change. J Emerg Med. 2014;47(5):580-585. doi:10.1016/j.jemermed.2014.06.017
16. Croskerry P. Cognitive forcing strategies in clinical decision making. Ann Emerg Med. 2003;41(1):110-120. doi:10.1067/mem.2003.22
17. Veloski J, Boex JR, Grasberger MJ, Evans A, Wolfson DB. Systematic review of the literature on assessment, feedback and physicians’ clinical performance: BEME guide no. 7. Med Teach. 2006;28(2):117-128. doi:10.1080/01421590600622665
18. Smith SR, Bakshi R. Promoting resident involvement in quality improvement initiatives through faculty involvement and curriculum. J Grad Med Educ. 2015;7(1):119-120. doi:10.4300/JGME-D-14-00508.1
19. Wright R, Howell E, Landis R, Wright S, Kisuule F, Minter Jordan M. A case-based teaching module combined with audit and feedback to improve the quality of consultations. J Hosp Med. 2009;4(8):486-489. doi:10.1002/jhm.532
20. Aronson L. Twelve tips for teaching reflection at all levels of medical education. Med Teach. 2011;33(3):200-205. doi:10.3109/0142159X.2010.507714
21. Brady AK, Pradhan D. Learning without borders: asynchronous and distance learning in the age of COVID-19 and beyond. ATS Sch. 2020;1(3):233-242. doi:10.34197/ats-scholar.2020-0046PS
22. Koller JP, Cochran KA, Headrick LA. Practical strategies to enhance resident engagement in clinical quality improvement. BMC Med Educ. 2022;22(1):96. doi:10.1186/s12909-022-03134-y
23. Pronovost PJ, Hudson DW. Improving healthcare quality through organisational peer-to-peer assessment: lessons from the nuclear power industry. BMJ Qual Saf. 2012;21(10):872-875. doi:10.1136/bmjqs-2011-000470
24. Rosenthal MA, Sharpe BA, Haber LA. Using peer feedback to promote clinical excellence in hospital medicine. J Gen Intern Med. 2020;35(12):3644-3649. doi:10.1007/s11606-020-06235-w
25. Akins RB, Handal GA. Utilizing quality improvement methods to improve patient care outcomes in a pediatric residency program. J Grad Med Educ. 2009;1(2):299-303. doi:10.4300/JGME-D-09-00043.1
26. Goldstein S, des Bordes JKA, Neher S, Rianon N. Improving patient safety through mandatory quality improvement education in a family medicine residency programme. BMJ Open Qual. 2024;13(1):e002623. doi:10.1136/bmjoq-2023-002623
27. Frank HE, Evans L, Phillips G, et al. Assessment of implementation methods in sepsis: study protocol for a cluster-randomized hybrid type 2 trial. Trials. 2023;24(1):620. doi:10.1186/s13063-023-07644-y

Appendix 1. Emergency Department Sepsis Case Review Checklist

Section 1. Recognition and Clinical Assessment in the Emergency Department

  1. Did the patient meet SIRS criteria in the Emergency Department?

    a. Comments:

    b. Options: Yes / No / Not applicable

  2. Did the patient meet sepsis criteria in the Emergency Department?

    a. Comments:

    b. Options: Yes / No / Not applicable

  3. Did the patient meet severe sepsis (sepsis with acute organ dysfunction) criteria in the Emergency Department?

    a. Comments:

    b. Options: Yes / No / Not applicable

  4. Did the patient meet septic shock criteria in the Emergency Department?

    a. Comments:

    b. Options: Yes / No / Not applicable

Section 2. Evaluation of Clinical Management of Severe Sepsis and Septic Shock

  1. Was an initial lactate level obtained?

    a. Comments:

    b. Options: Yes / No / Not applicable

  2. Were blood cultures obtained within 3 hours of meeting severe sepsis or septic shock criteria?

    a. Comments:

    b. Options: Yes / No / Not applicable

  3. Did the patient receive appropriate broad-spectrum antibiotics within 3 hours of meeting criteria?

    a. Comments:

    b. Options: Yes / No / Not applicable

  4. Did the patient receive 30 mL/kg crystalloid fluids or have a documented exemption?

    a. Comments:

    b. Options: Yes / No / Not applicable

  5. Did the patient receive vasopressors after fluid resuscitation for septic shock?

    a. Comments:

    b. Options: Yes / No / Not applicable

  6. Was a repeat lactate obtained within 6 hours if initial lactate > 2 mmol/L?

    a. Comments:

    b. Options: Yes / No / Not applicable

Section 3. Overall Care Team Assessment

  1. Based on the review, did the patient receive guideline-concordant care in the Emergency Department?

    a. Comments:

    b. Options: Yes / No / Not applicable


  1. The department also participates in broader multicenter quality-improvement initiatives, including the Assessment of Implementation of Methods in Sepsis and Respiratory Failure (AIM-SRF) trial.27