Facilitator-led debriefing is commonplace in simulation-based education and has been extensively researched. In contrast, self-led debriefing is an emerging field that may yet provide an effective alternative to well-established debriefing practices. The term ‘self-led debriefing’, however, is applied to a variety of heterogeneous practices across a range of contexts, making it difficult to expand the evidence base for this practice. Evidence specifically exploring in-person group self-led debriefings in the context of immersive simulation-based education is yet to be appropriately synthesized. This protocol explains the rationale for conducting an integrative review of this topic whilst summarizing and critiquing the key steps of the process.
The aim of this integrative review is to systematically search, analyse and synthesize relevant literature to answer the following research question: With comparison to facilitator-led debriefings, how and why do in-person self-led debriefings influence debriefing outcomes for groups of learners in immersive simulation-based education?
This is a protocol to conduct an integrative review aligned with Whittemore and Knafl’s established five-step framework. The protocol fully addresses the first two steps of this framework, namely the problem identification and literature search stages. Seven databases (PubMed, Cochrane, EMBASE, ERIC, SCOPUS, CINAHL Plus and PsycINFO) will be searched comprehensively to optimize both the sensitivity and precision of the search in order to effectively answer the research question. It also outlines and appraises the various procedures that will be undertaken in the data evaluation, analysis and presentation stages of the process.
This review will attempt to address a gap in the literature concerning self-led debriefing in immersive simulation-based education, as well as identify areas for future research. Integrative reviews aim to provide a deeper understanding of complex phenomena and we detail a comprehensive explanation and justification of the rigorous processes involved in conducting such a review. Finally, this protocol highlights the applicability and relevance of integrative reviews for simulation-based education scholarship in a wider context.
Simulation-based education (SBE) has become a widely adopted educational technique across healthcare disciplines. Many experts now broadly agree that the discourse surrounding research in SBE has moved on from ‘does SBE work?’ to an assessment of the contexts and conditions under which it is best employed and most effective, and why this may be the case [1,2]. SBE has wide-ranging applications, from learning complex technical skills to practising and consolidating behavioural skills as part of a multidisciplinary team. Immersive SBE deeply engages learners’ senses and emotions through an environment that offers physical and psychological fidelity, thus allowing learners to conceptualize their perception of realism. The concept of immersion relates specifically to the subjective impression of learners that they are participating in an activity as if it were a real-world experience. Many simulated learning events (SLEs) achieve this by having learners interact with either computerized manikins or simulated patients to work through scenario-based simulations. This is commonly followed by structured facilitated debriefings, during which scenarios are reviewed and learner experiences are reflected upon, explored and analysed, such that meaningful learning can be derived and consolidated.
Debriefing has been defined as a ‘discussion between two or more individuals in which aspects of performance are explored and analysed’ (p. 658) and is commonly cited as one of the most important aspects of learning in immersive SBE [7,8]. Debriefings should provide a psychologically safe environment for learners to actively reflect on actions and assimilate new information with previously constructed knowledge, thus resulting in developing strategies for future improvement within their own real-world context [5–7]. They are typically led by facilitators guiding conversations to ensure content relevance and successful achievement of intended learning outcomes. Evidence suggests that the quality of debriefing is highly dependent on the skills and expertise of the facilitator and the establishment of a safe psychological space for learning [10–12]. Indeed, some commentators claim that facilitator skill is the strongest independent predictor of successful learning. However, this interpretation has been challenged, with self-led debriefings (SLDs) being presented as potentially effective alternatives [13,14].
Several literature reviews have reported on SLDs in comparison to facilitator-led debriefings (FLDs) within the umbrella of debriefing effectiveness more generally [6–9,11,14–18]. The consensus is that there is limited evidence of superiority of one approach over the other. However, the broad scope of these reviews limits the authors’ abilities to critically appraise the evidence, with sufficient detail relating to SLDs lacking. To our knowledge, only one published review has specifically investigated SLDs’ impact on debriefing effectiveness. Two questions were asked in this review: (1) what are the characteristics of self-debriefs used in healthcare simulation? and (2) to what extent do self-debriefs found in the literature align with the standards of best practice for debriefing? Whilst that review concluded that well-designed SLDs and FLDs produce equivalent outcomes, its findings included virtual settings and were limited to debriefings of individual learners only. The value and place of in-person SLDs for groups of learners post-immersive SLEs, either in isolation or in comparison with FLDs, therefore warrants dedicated exploration.
SLDs are a relatively new concept offering a potential alternative to well-established FLDs. Their utility has important implications for SBE due to the resources required to support faculty development programmes [7,10]. Evidence is emerging regarding how and why in-person SLDs influence debriefing outcomes for groups of learners in immersive SBE, but that evidence is yet to be appropriately synthesized. This integrative review (IR) aims to address this current gap in the evidence base, thereby informing simulation-based educators of best practices moving forward in immersive SBE, whilst highlighting gaps for further research.
There is currently no consensus definition for SLDs within the literature, and as such the term encompasses a wide variety of heterogeneous practices, leading to a confusing narrative for commentators to navigate as they report on debriefing practices. To ensure clarity for the purposes of this study, we have refined the definition of ‘self-led debriefing’ to describe debriefings that occur without the immediate presence of a trained faculty member, such that the debriefing is conducted by the learners themselves.
Alignment between the research paradigm, research question and methodology is vital for conducting high-quality research and is often only invoked at a superficial level in health professions education (HPE) research. To ensure such alignment, we have chosen to undertake an IR to answer the research question. This approach will fulfil the need for new insights and innovative approaches to SBE, such that the nature of science with which we engage and the subsequent knowledge formation it generates are not constricted.
Whilst systematic reviews may have traditionally been viewed as a gold-standard review type, this perspective is now regularly challenged, especially within HPE research. An IR is one that integrates findings from studies with diverse and differing designs, in which both quantitative and qualitative, and in some cases theoretical, data sets are examined in a rigorous systematic manner, thus aiming to provide a more comprehensive understanding of a particular phenomenon [22–25]. Such an approach is particularly pertinent in SBE, with researchers employing wide-ranging study designs to examine complex phenomena from a variety of differing perspectives and paradigms. Legitimate concerns note that the inherent complexity of integrating findings from diverse sources and study designs can lead to inaccuracies, biases and a lack of rigour. Such concerns have led to alternative attempts to develop frameworks with which to conduct IRs in a manner that enhances the methodological rigour of the work [22–25,27,28].
This IR protocol is aligned with Whittemore and Knafl’s framework (Figure 1). To ensure methodological rigour, their five-step process is explicitly framed, structured, protocolized and documented. Multiple publications describe various modifications of this framework [23–25,27,28]. However, the original version allows flexibility to sub-categorize different elements of the methods and processes that suit specific elements of this IR.
This protocol fully addresses the first two steps of Whittemore and Knafl’s framework. It then outlines the various processes that will be undertaken in steps three through five, detailing several facets of the evaluation, analysis and presentation stages. By documenting this process in detail, we aim to inform and inspire other SBE researchers to utilize this hitherto underemployed review method in their own work.
This study is rooted in constructivism and constructionism, with important elements originating from both perspectives. Constructivism is a paradigm in which individuals socially construct concepts, models and schemas to develop personal meanings whilst making sense of the world and reality from their subjective experiences. Constructionism espouses the deep impact that culture and society have on shaping how such subjective experiences impact an individual’s formulation of meaning within the world, or context, they reside in, thereby shaping one’s ongoing thoughts and behaviours. Therefore, in the context of immersive SBE, we reject the notion of ‘one objective reality’, instead believing in the presence of multiple subjective realities that are constructed by individuals or groups. Participant experiences influence their view of reality, and therefore different meanings may be derived from the same nominal experience. Furthermore, construction of meaning may be influenced and shaped by the presence or absence of facilitators within debriefings. In this IR, by applying theory to interpret data already collected and analysed by other parties, we will use theory inductively to formulate a new understanding and interpretation of the evidence already available. Several theories, such as experiential learning, transformative learning theory, situated learning theory and social constructivist theory, inform learning in immersive SBE contexts, and will be drawn upon in the analysis of the findings in this IR to explore how and why in-person SLDs influence debriefing outcomes for groups of learners.
The aim of this IR is to systematically search, analyse and synthesize relevant literature to explore in-person SLDs in immersive SBE for groups of learners. Emerging from this aim, the overarching research question is: With comparison to facilitator-led debriefings, how and why do in-person self-led debriefings influence debriefing outcomes for groups of learners in immersive simulation-based education?
To formulate the research question, we deconstructed its various parts, initially comparing both the PICOS (Population, Intervention, Comparison, Outcome, Study design) and SPIDER (Sample, Phenomenon of Interest, Design, Evaluation, Research type) frameworks (Table 1). We chose the PICOS framework as the inclusion of a comparator suits this study, whilst simultaneously safeguarding the maintenance of the integrative study design to ensure a range of diverse quantitative and qualitative studies can be identified. Whilst such frameworks may derive from positivist disciplines, their use can extend to other paradigms. However, there remains dubiety as to whether these frameworks are too simplistic to aid in formulating integrative questions, which require searching across a cross-section of study designs to adequately harness the complexity of contexts specifically involved in HPE research.
| PICOS framework | Methley et al. | SPIDER framework | Cooke et al. |
|---|---|---|---|
| Population | In-person immersive SBE debriefing participants | Sample | In-person immersive SBE debriefing participants |
| Intervention/interest | Self-led debriefings | Phenomenon of interest | Self-led debriefings |
| Comparison/context | Facilitator- or instructor-led debriefings | Design | Integrative |
| Outcome | Any outcomes | Evaluation | Any outcomes |
| Study design | Integrative: both quantitative and qualitative studies included | Research type | Integrative: both quantitative and qualitative studies included |
The search strategy in any knowledge synthesis is critical to ensure that relevant literature within the scope of the study is identified, thus minimizing bias and enhancing methodological rigour [22,39]. It should be clearly documented and transparent such that readers are able to reproduce the search themselves with the same results [28,40]. We sought the expertise of a librarian to help ensure a focused, appropriate and rigorous search strategy [20,28,40,41].
We iteratively designed an extensive and broad search strategy that optimizes both the sensitivity and precision of the search, thereby ensuring that relevant literature pertaining to the review is identified whilst avoiding non-relevant studies. By framing the keywords articulated within the research question, along with their associated potential synonyms, into the PICOS framework, we constructed a logic grid to document the key search terms (Table 2). To ensure these terms are comprehensive, we have analysed and incorporated free-text words, keywords and index terms of relevant articles identified during a preliminary scoping literature search. These terms were then employed in pilot iterative searches of the PubMed database, and based upon preliminary results, have been subsequently refined to formulate the finalized list of relevant, inclusive and precise search terms (Table 2). Lefebvre et al. assert that whilst research questions often articulate specific comparators and outcomes, this is not always replicated in the titles or abstracts of articles and hence not well indexed. They therefore recommend that a search strategy typically encompass terms from three categories of the PICOS framework as opposed to all five:
| PICOS framework category | Key search terms |
|---|---|
| Population/problem/setting | Simulation training [MeSH], Simulation-based, Simulation-enhanced, Simulation training, Simulation teaching, Simulation event, Immersion, Simulation, Simul*, Debrief*, Conversation* |
| Intervention | Self-led, Peer-led, Group-led, Participant-led, Student-led, Self-directed, Student-directed, Self-guided, Self-facilitated, Peer-facilitated, Group-facilitated, Student-facilitated, Self-debrief*, Peer-debrief*, Group-debrief*, Self debrief*, Peer debrief*, Group debrief*, Within-team |
| Comparison | Facilitator-led, Instructor-led, Faculty-led, Instructor debrief*, Facilitated |
Due to the well-established practice of FLDs within SBE, we have modified this guidance by choosing to include FLDs as a comparator term in the search strategy. Furthermore, in omitting outcome terms, we have chosen to forgo specifying types of study design, as by definition, IRs encourage the incorporation of diverse study methodologies. The final search strategy used in the PubMed search can be found in Table 3. The search strategy will be customized to accommodate the characteristics of each specific database.
| Set | Search terms | Results |
|---|---|---|
| #1 | Simulation training [MeSH] OR simulation-based OR simulation-enhanced OR “simulation training” OR “simulation teaching” OR “simulation event” OR (immersion AND simulation) | 26,658 |
| #2 | (Facilitator-led OR Instructor-led OR Faculty-led OR “Instructor debrief*” OR Facilitated) OR (Self-led OR Peer-led OR Group-led OR Participant-led OR Student-led OR Self-directed OR Student-directed OR Self-guided OR Self-facilitated OR Peer-facilitated OR Group-facilitated OR Student-facilitated OR Self-debrief* OR Peer-debrief* OR Group-debrief* OR “Self debrief*” OR “Peer debrief*” OR “Group debrief*” OR Within-team) | 659,522 |
| #3 | Debrief* OR Conversation* | 31,513 |
| #4 | #2 AND #3 | 3,795 |
| #5 | #1 AND #4 | 381 |
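The nesting of the boolean sets above can be sketched programmatically. The snippet below is an illustrative sketch only: the term lists are abridged, the helper names (`or_group`, the set variables) are our own, and the syntax (quoting, `[MeSH]` tags, truncation) would need adapting for each database interface.

```python
def or_group(terms):
    """Join a list of search terms into a single OR'd group."""
    return "(" + " OR ".join(terms) + ")"

# Abridged term lists from the Table 2 logic grid
population = ['simulation training[MeSH]', 'simulation-based',
              '"simulation training"', '(immersion AND simulation)']
intervention = ['Self-led', 'Peer-led', 'Self-debrief*', '"Group debrief*"']
comparison = ['Facilitator-led', 'Instructor-led', 'Facilitated']
debrief = ['Debrief*', 'Conversation*']

set1 = or_group(population)                 # Table 3, set #1
set2 = or_group(comparison + intervention)  # set #2
set3 = or_group(debrief)                    # set #3
set4 = "(" + set2 + " AND " + set3 + ")"    # set #4 = #2 AND #3
set5 = "(" + set4 + " AND " + set1 + ")"    # set #5 = #1 AND #4
```

Composing the final string from named groups in this way makes it straightforward to regenerate the strategy when a term is added or a database requires different syntax.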
Searching a single database is inadequate and may lead to identifying a small and unrepresentative selection of relevant studies [28,41]. Therefore, guided by the research question, researchers should select several electronic bibliographic databases that are aligned with specific scientific and academic fields. Whilst there is no consensus as to what constitutes an acceptable number of databases, it is not necessarily the number of databases searched that should be questioned, but rather which databases were searched and why. Taking these factors into consideration, to ensure our search strategy best suits our research topic, we will search seven key electronic bibliographic databases: PubMed, CENTRAL (Cochrane Central Register of Controlled Trials), EMBASE, ERIC, SCOPUS, CINAHL Plus and PsycINFO. Furthermore, we will conduct supplementary manual searches of reference lists from relevant studies identified via the search strategy and a manual search of relevant SBE internet resources, such as healthysimulation.com, ResearchGate and Google Scholar, to avoid missing key studies.
We will use the bibliographical software package EndNote™ 20 to organize search results due to its functionality, familiarity and ability to directly communicate with and retrieve references from the databases being searched. Storage of the search strategies and methods will ensure a transparent and auditable process that will be available for external review.
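One routine task when merging exports from seven databases is deduplication. The sketch below is not EndNote itself but an illustration of the underlying logic, under the assumption that each record carries a DOI where available, falling back to a normalized title when it does not; the function names are our own.

```python
import re

def norm_title(title):
    """Lower-case a title and strip punctuation so near-identical titles match."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def dedupe(records):
    """Keep the first occurrence of each record, keyed on DOI or normalized title."""
    seen, unique = set(), []
    for rec in records:
        key = rec.get("doi") or norm_title(rec["title"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique
```

Reference managers apply more sophisticated matching (authors, year, journal), but the principle of a stable key per record is the same.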
Inclusion and exclusion criteria allow researchers to refine their searches to locate specific data that explicitly answers the research question, manage the volume of research generated and focus their reviews more reliably. However, such criteria may introduce implicit and explicit biases into the search results. For example, by only including peer-reviewed published studies, this IR will potentially be confounded by publication bias, whereby its findings may be skewed by the types of studies or associated reported findings that are more likely to be published compared with types of studies or results that are deemed to be less important or valid. Furthermore, non-peer-reviewed grey literature such as essays, commentaries, editorials, letters, blogs, conference abstracts, theses and course evaluations will all be omitted, despite being potentially relevant to the results of the review. Importantly, whilst the peer-review process is being increasingly questioned by academics across scientific disciplines, it provides quality assurance and scrutiny for academic work and remains a cornerstone in scientific publishing. It thus remains part of the inclusion criteria. Finally, we have excluded studies examining non-immersive SLEs and studies examining debriefings that were either virtual, related to clinical events or included only one learner, as their findings may not be applicable to the contexts described in our research question. Having carefully considered the advantages, disadvantages, logistics and practicalities of these matters in relation to our research question, the criteria presented in Table 4 will be applied to the search strategy.
| Inclusion criteria | Exclusion criteria |
|---|---|
| Original empirical research | Non-empirical research |
| Live in-person SLEs and debriefing studies | Virtual/online/tele-simulation and debriefing studies |
| Immersive SLE debriefings | Non-immersive SLE debriefings (e.g. mastery learning workshops and procedural skills SLEs) |
| Debriefings including more than one learner/participant (excluding faculty) | Debriefings involving only one learner/participant (excluding faculty) |
| Peer-reviewed studies | Non-peer-reviewed studies |
| Studies reported in English | Studies reported in a language other than English |
| Studies describing SLDs with or without inclusion of or comparison to FLDs | Studies describing FLDs exclusively or comparing FLDs to no debriefing |
| Healthcare professional or student participants | Non-healthcare professional or student participants |
| Any date | Grey literature (including doctoral theses or dissertations, conference or poster abstracts, opinion or commentary pieces, letters, websites, blogs, instruction manuals and policy documents) |
| | Clinical event debriefing |
Documenting the search process ensures transparency such that the strategy should be reproducible by any reader. It allows readers to gauge how extensive the search was or, conversely, whether there were gaps in the process. Whilst guidelines differ, the consensus remains that the databases and interfaces used to conduct the searches, the explicit search strategies and the use of limits or inclusion and exclusion criteria should be documented in some format. To adhere to such standards, many authors visually present these practices via the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) reporting tool. Due to its simple style and familiarity to fellow SBE researchers, we will use the PRISMA flow chart to document and present the search process in this review.
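The counts in a PRISMA flow chart are related by simple arithmetic, which can be sanity-checked programmatically. In the sketch below only the PubMed figure of 381 comes from Table 3; every other number is a placeholder, and the function name is our own.

```python
def prisma_flow(identified, duplicates, excluded_screening, excluded_full_text):
    """Derive the stage-by-stage counts of a PRISMA flow chart.

    identified: dict of records retrieved per database.
    The remaining arguments are counts removed at each stage.
    """
    records = sum(identified.values())
    screened = records - duplicates               # after duplicate removal
    full_text = screened - excluded_screening     # after title/abstract screening
    included = full_text - excluded_full_text     # after full-text assessment
    return {"identified": records, "screened": screened,
            "full_text_assessed": full_text, "included": included}

flow = prisma_flow(
    identified={"PubMed": 381, "EMBASE": 250},  # EMBASE figure hypothetical
    duplicates=120,
    excluded_screening=400,
    excluded_full_text=80,
)
```

Recording the exclusion counts explicitly at each stage makes the final flow chart auditable against the screening logs.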
Studies identified through the search strategy should undergo a quality assessment (QA) process. This identifies the methodological qualities and risk of bias within studies, and informs their individual contribution, weighting and interpretation in the data analysis stage of a review [19,22,28]. Bias within the study design, conduct and analysis may impact the validity of any results. There are several established QA tools, such as the Critical Appraisal Skills Programme checklists, the Joanna Briggs Institute critical appraisal checklists and the Mixed Methods Appraisal Tool (MMAT).
Within IRs, the QA process is complex due to the potential for a diverse range of empirical study designs being assessed, with each type of design generally necessitating differing criteria to demonstrate quality. Furthermore, there is no established standard ‘quality measure’, and the sampling frame of the study dictates how quality is judged. We chose the MMAT because, in the context of this complexity, it aligns well with IRs, detailing distinct criteria specifically tailored to five study designs: qualitative, quantitative randomized controlled trials, quantitative non-randomized studies, quantitative descriptive and mixed methods.
The hierarchy of evidence is an established concept in evidence-based medicine and HPE scholarship and should be considered when reviewing and appraising literature. It proposes that certain types of study design, and by extension the results they present, are deemed ‘better’ or ‘stronger’ than others. The ranking system, often presented as a pyramid, is mainly based upon the probability of bias, with studies having the least risk of systematic errors ranked highest. The concept has been adapted in relation to SBE scholarship, splitting different study designs into filtered and unfiltered information, with evidence syntheses and clinical guidelines being highlighted as a separate category ranked just below systematic reviews. However, these hierarchical models can be overly simplistic and are not necessarily the most appropriate choice for evaluating evidence and best practices in HPE. First, the notion of a hierarchy of evidence assumes that all studies are conducted, within their design parameters, to the same high standard. This is not always the case. Second, especially in HPE research, the best available evidence to answer specific research questions may indeed come from alternative study designs. Pilcher and Bedford propose a more integrated and contemporary model which recognizes the value of differing study designs and places the goal of evidence-based education at its centre. Recognizing the value of multiple sources and study designs is the core ethos of an IR, and whilst not specific to HPE scholarship, Pilcher and Bedford’s model usefully integrates this notion with the convention of hierarchy of evidence levels. Using their model in conjunction with the MMAT quality assessment tool will allow for the identification and interpretation of the best available evidence to answer our specific research question.
The data analysis stage of an IR includes the processing, ordering, categorizing and summarizing of data from primary sources, with the overall aim of synthesizing information such that a unified and integrated conclusion can be made. Whittemore and Knafl emphasize the importance of a systematic analytic approach to this endeavour to minimize bias and the chance of error. They advocate using a four-phase constant comparison method originally described for qualitative data analysis. With this method, data are compared item by item and categorized and grouped together, before further comparison between different groups allows for an analytical synthesis of the varied data. These phases will be applied to this IR and are detailed below.
Data reduction involves creating a classification system to manage data from diverse methodologies. In this IR, the studies will be classified according to their study design, as per the MMAT, to allow logical analysis, ease of review and comparison. A standardized data extraction tool ensures consistency, minimizes bias and allows easy comparison of primary sources on specific variables, demographics, data, interpretations and key findings [22,28]. Li et al. propose a four-stage approach to data reduction: develop outlines of tables and figures expected to appear in the review, assemble and group data elements, identify optimal ways to frame the data points, and pilot and review the forms to ensure data are presented and structured correctly. We followed this approach to formulate a data extraction tool which will be applied consistently across multiple study designs (Table 5).
| Data extraction fields |
|---|
| Study title and citation |
| Study aim and objectives |
| SLE activity description |
| SLD activity description |
| Comparator description (if applicable) |
| Conclusions/comments |
| Limitations and weaknesses |
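To show how a standardized form enforces consistency across study designs, the fields of Table 5 can be modelled as a fixed-schema record. This is an illustrative sketch only; the class and field names are our own and any real tool would add further fields (e.g. participants, outcomes).

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class ExtractionRecord:
    """One row of the standardized data extraction tool (Table 5)."""
    study_title_citation: str
    aim_and_objectives: str
    sle_activity: str           # simulated learning event description
    sld_activity: str           # self-led debriefing description
    comparator: Optional[str] = None  # e.g. FLD arm, if the study has one
    conclusions_comments: str = ""
    limitations_weaknesses: str = ""
```

Because every study, whatever its design, is captured against the same fields, the resulting records can be tabulated and compared directly in the data display phase.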
Extracted data need to be assembled and displayed such that relationships across primary sources can be visualized and analysed. In the case of this IR, the extracted data will be displayed in tabular form.
This stage involves an iterative process of scrutinizing data displays of primary sources such that patterns, themes or relationships can be identified. This requires researchers to move beyond simply summarizing study findings, instead forming new perspectives and understandings of a topic or phenomenon whilst formulating questions that can guide further research into gaps in the literature.
Whilst most commonly used in qualitative data analysis, thematic analysis is a collection of techniques also regularly used in integrative research to achieve such aims [22,58,59]. It is defined as a flexible and practical method to identify, analyse and report emerging themes within data that can be rich, powerful and complex. Reflexive thematic analysis (RTA) refers to an approach in which thematic analysis is fully conceptualized and underpinned by qualitative paradigms, whereby the researcher, through their reflexive interpretative analysis of the patterns of data and their meanings, has an active and central role in knowledge formulation. In this approach, it is accepted, even expected, that two researchers may derive differing interpretations from the same nominal data set, and as such this approach shuns any notion of positivistic data interpretation. To this end, researchers should embrace subjectivity, creativity and reflexivity in the research process [22,60]. To demonstrate reflexivity, we will be transparent in how we critically interrogate our engagement with the research process such that readers can assess how our assumptions and perspectives as simulation educators who engage primarily in FLDs may influence the analysis and interpretation of data. We will build reflexivity into our process using strategies such as reflexive journaling, peer debriefing and reflection on positionality. Contrary to codebook approaches, where themes are often predefined prior to coding, in RTA themes should be produced organically and be thought of as the final outcome of data coding that the researcher has interpreted from the data [60,61]. Braun and Clarke’s framework provides a sound methodological, practical and theoretically flexible method with which to analyse the data that will be extracted in this IR. The framework includes the following six phases:

1. Familiarization with the data
2. Generating initial codes
3. Searching for themes
4. Reviewing themes
5. Defining and naming themes
6. Producing the report
Whilst these phases are ordered sequentially, analysis can occur as a recursive process in which researchers move back and forth between the different stages as insights deepen. Themes do not simply emerge from the data but are constructed by the researcher as they analyse, compare and map the codes, and are best conceptualized as an output of the analytic process that researchers undertake. In this IR, we will follow this six-phase process to develop the final themes.
This final phase in an IR is the synthesis of important elements from each subgroup analysis to form an integrated summation of the topic or phenomenon. This involves moving from the interpretative phase of the patterns, themes and relationships to higher levels of abstract and conceptual processing [22,56]. Conclusions and conceptual models can then be developed and modified to ensure that they are inclusive of as much primary data as possible, but do not exceed the evidence from which they are drawn [22,56]. The review conclusions should be explained within the context of the review parameters and limitations, but may be difficult to delineate and explain if there is conflicting evidence from the primary studies.
The final step in the review process is the presentation of data which, depending on the researcher’s preferences and the type of data, conclusions and interpretations formed, can occur in a variety of formats. The results of this IR will be presented in a combination of tables, thematic maps and prose, thus ensuring the breadth and depth of the topic are captured. An accompanying narrative summary will ensure that the results, interpretations and conclusions are aligned with the research question. Finally, the limitations of the study will be acknowledged and gaps for further research will be identified and documented.
SLDs are a relatively new concept that may offer an effective alternative debriefing experience to established FLD practices. The evidence regarding how and why they influence debriefing outcomes for groups of learners in immersive SBE is yet to be appropriately synthesized. This IR will attempt to address this gap as well as identify areas for future research. The purpose of this protocol is to detail, explain and justify the underlying rationale for performing an IR, and report and critically appraise the specific elements of the process in relation to our research question. Finally, through this protocol, we hope to have highlighted the applicability and relevance of IRs for SBE scholarship in a wider context.
We would like to thank Scott McGregor, University of Dundee librarian, for his invaluable help in formulating and refining the search strategy employed in this review. We would also like to thank Drs Kathleen Collins, Kathryn Sharp and Michael Basler for their critical eyes when reviewing this work and manuscript.
PK led the conception and design of this protocol as part of his Masters in Medical Education (University of Dundee) research project. SS supervised the development of the protocol. Both authors contributed to the writing and editing of this article and have reviewed and approved the final manuscript.
No funding declared.
Data supporting findings in this protocol are available within the article or on special request from the lead author, Dr Prashant Kumar.
All authors give consent for this manuscript to be published.
No conflicts of interest declared.
PK currently works as an anaesthetic registrar in NHS Greater Glasgow & Clyde and has previously completed a 2-year clinical simulation fellowship. He has extensive experience of debriefing within immersive simulation in both undergraduate and postgraduate settings. He has a specialist interest in debriefing practices, interprofessional simulation-based education and faculty development.
SS currently works as a Senior Lecturer in Simulation for Health Professions Education, Dundee Institute for Healthcare Simulation, School of Medicine, University of Dundee. She is interested in flexible and blended learning in postgraduate health professions education. She has over 20 years’ teaching experience in undergraduate clinical skills education, postgraduate medical education and simulation-based faculty development, which includes many international collaborations.