International Journal of Healthcare Simulation
Defining low- and high-fidelity simulation in systematic reviews: the case of heart auscultation simulators

DOI:10.54531/fpkb4904, Pages: 1-2
Article Type: Letter


Halfwerk, Duinmeijer, Haumann, and Arens: Defining low- and high-fidelity simulation in systematic reviews: the case of heart auscultation simulators

Dear Editor-in-Chief,

With great interest we read the systematic review and meta-analysis by Osborne et al. on the effectiveness of high- and low-fidelity simulation-based medical education in teaching cardiac auscultation [1]. We congratulate the authors on their efforts to provide a systematic review of simulation-based education. While the authors conclude that high-fidelity simulation has no benefit in improving cardiac auscultation knowledge or skills compared with low-fidelity simulation, we believe that this conclusion cannot be supported by their work.

Randomized controlled trials (RCTs) are scarce in simulation-based education. Therefore, allocating an RCT to the correct fidelity group in a meta-analysis should be performed as objectively as possible, that is, with a thorough definition of low- and high-fidelity. Unfortunately, the definitions of low and high fidelity stated in the Healthcare Simulation Dictionary [2] or the International Nursing Association for Clinical Simulation and Learning (INACSL) standards [3] were not used by the authors. High-fidelity simulation can be defined as ‘Simulation experiences that are extremely realistic and provide a high level of interactivity and realism for the learner [3]. It can apply to any mode or method of simulation; for example: human, manikin, task trainer, or virtual reality’ [2], and low-fidelity simulation as ‘Not needing to be controlled or programmed externally for the learner to participate; examples include case studies, role playing, or task trainers used to support students or professionals in learning a clinical situation or practice’ [2]. We were curious as to why the authors did not adopt these dictionary definitions. Had the authors adopted them, or used another objective classification method, the allocation of RCTs to the correct fidelity group might have been more appropriate.

The authors report a high level of heterogeneity (I² > 85%) between the selected studies. This heterogeneity can be explained by the inclusion of multiple professional groups (first- to last-year medical students, residents, nurse practitioners), a wide range of skill sets, and multiple assessment tools and simulators (audio only, Objective Structured Clinical Examination, volunteers, real cardiac patients). It is also unclear whether the assessors were trained in objective assessment of skills, which affects the reliability of the selected studies. Plotting these studies in a funnel plot (Figure 1) indeed shows asymmetry, with large studies with small standard errors being absent, making publication bias probable. Furthermore, all studies included in the meta-analysis of high- versus low-fidelity simulation are heavily underpowered.

Figure 1:

Funnel plot showing high heterogeneity of the selected studies comparing high- to low-fidelity cardiac auscultation simulators. The Y-axis represents the standard error of the mean differences found in the individual studies; the X-axis represents the effect size measured as mean difference. Yellow dots represent the individual skill studies, vermillion dots the individual knowledge studies. Significance contours at the 0.05 (blue), 0.025 (sky blue) and 0.01 (green) levels are indicated, as well as the random- and fixed-effects model estimates (dotted lines). (Data from Osborne et al. [1].)
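
For readers who wish to reproduce such a contour-enhanced funnel plot, the sketch below shows one possible approach in Python, assuming the study-level mean differences and standard errors have already been extracted. The numeric values, variable names and colour choices are illustrative placeholders and are not the data reported by Osborne et al. [1].

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.stats import norm

    # Placeholder study-level estimates (mean difference and standard error);
    # substitute the values extracted from the included RCTs.
    mean_diff = np.array([0.8, -0.3, 1.5, 0.2, -1.1])
    std_err = np.array([0.9, 0.6, 1.2, 0.5, 1.0])

    fig, ax = plt.subplots()

    # Significance contours: an effect is significant at two-sided level alpha
    # when |MD| > z_(1 - alpha/2) * SE, so each contour is a pair of straight
    # lines through the origin of the (mean difference, standard error) plane.
    se_grid = np.linspace(0.01, std_err.max() * 1.2, 200)
    for alpha, colour in [(0.05, "tab:blue"), (0.025, "skyblue"), (0.01, "green")]:
        z = norm.ppf(1 - alpha / 2)
        ax.plot(z * se_grid, se_grid, color=colour, linewidth=1, label=f"p = {alpha}")
        ax.plot(-z * se_grid, se_grid, color=colour, linewidth=1)

    # Individual studies and a simple inverse-variance (fixed-effect) pooled
    # estimate; a random-effects estimate could be added in the same way.
    ax.scatter(mean_diff, std_err, color="gold", zorder=3, label="studies")
    pooled = np.average(mean_diff, weights=1.0 / std_err**2)
    ax.axvline(pooled, linestyle=":", color="grey")

    ax.set_xlabel("Mean difference")
    ax.set_ylabel("Standard error")
    ax.invert_yaxis()  # most precise studies appear at the top of the funnel
    ax.legend()
    plt.show()

Asymmetry is then read directly from the plot: if small, imprecise studies cluster on one side of the pooled estimate while comparably imprecise studies on the other side are missing, publication bias becomes a plausible explanation.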

Two questions remain to be answered with regard to the authors’ work:

  1. Can we allocate studies more objectively into a low- or high-fidelity category?
    • Recently, our group classified extracorporeal membrane oxygenation simulators and simulations (ECMO sims) [4]. This classification, based on overall ECMO sim fidelity, was established by taking the median of definition-based fidelity, component fidelity and customization fidelity, as determined by expert opinion; a minimal code sketch of such a median-based classification follows this list. Selecting and combining these fidelity metrics often changed the classification of the respective ECMO sims. We therefore anticipate that a more objective classification method would likely reclassify the studies selected in this systematic review on heart auscultation simulators and possibly support a different conclusion. We recommend using a more objective classification method in future systematic reviews of simulation-based training.
  2. Is a high-fidelity simulator necessary for training basic skills such as cardiac auscultation?
    • A low-fidelity task trainer is likely to be sufficient for novices acquiring cardiac auscultation skills, corresponding to a zone 0 to 1 simulation (SimZone 0 to 1) as defined by Roussin and Weinstock [5]. Training that adds complexity, such as team training, distraction, interrupted actions and real-life training with debriefing (SimZones 3 to 4), is more likely to benefit from high-fidelity medical simulators.
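
To make the classification approach mentioned under point 1 concrete, the minimal sketch below takes the median of three expert-scored fidelity metrics on an assumed three-point ordinal scale (1 = low, 2 = mid, 3 = high). The scale, function name and example scores are hypothetical illustrations and do not reproduce the exact scoring scheme of the ECMO scoping review [4].

    from statistics import median

    # Assumed ordinal scale for illustration: 1 = low, 2 = mid, 3 = high fidelity.
    LABELS = {1: "low", 2: "mid", 3: "high"}

    def classify_simulator(definition_fidelity: int,
                           component_fidelity: int,
                           customization_fidelity: int) -> str:
        """Overall fidelity class as the median of three expert-scored metrics."""
        overall = median([definition_fidelity, component_fidelity, customization_fidelity])
        return LABELS[round(overall)]

    # A simulator rated high on definition-based fidelity but low on component
    # and customization fidelity is classified as low fidelity overall.
    print(classify_simulator(3, 1, 1))  # -> low

Combining the metrics in this way makes explicit how a simulator's overall class can shift depending on which fidelity dimensions are scored, which is the point made under question 1 above.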

To conclude, we compliment the authors on their efforts to increase the level of evidence for the effectiveness of simulation-based medical training. However, future work should allocate studies as objectively as possible to low-, mid- or high-fidelity categories. Furthermore, comparisons should be limited to studies with similar skill entry levels and simulation complexity.

Declarations

Authors’ contributions

Conception and design: FRH; analysis: RH and FRH; interpretation of data: FRH, WCD, RH, JA; drafting and revising: FRH, WCD, RH, JA; final approval: FRH, WCD, RH, JA. All authors are accountable for all aspects of the work, and ensure that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Funding

This letter did not receive funding support.

Availability of data and materials

Data were obtained from Osborne et al. [1] and the original underlying studies. Data underlying the funnel plot are available on request from the corresponding author.

Ethics approval and consent to participate

No ethics approval is required for this Letter.

Competing interests

All authors declare no conflict of interest.

References

1. Osborne C, Brown C, Mostafa A. Effectiveness of high- and low-fidelity simulation-based medical education in teaching cardiac auscultation: a systematic review and meta-analysis. International Journal of Healthcare Simulation. 2022;1(3):75–84.

2. Lioce L, Lopreiato J, Downing D, et al. Healthcare Simulation Dictionary. Rockville, MD: Agency for Healthcare Research and Quality (AHRQ); 2020.

3. INACSL Standards Committee. INACSL standards of best practice: simulation glossary. Clinical Simulation in Nursing. 2016;12:S39–S47.

4. Duinmeijer WC, Fresiello L, Swol J, et al. Simulators and simulations for extracorporeal membrane oxygenation: an ECMO scoping review. Journal of Clinical Medicine. 2023;12(5):1765.

5. Roussin CJ, Weinstock PS. SimZones: an organizational innovation for simulation programs and centers. Academic Medicine. 2017;92(8):1114–1120.