Evaluation of a Mobile Telesimulation Unit to Train Rural and Remote Practitioners on High-Acuity Low-Occurrence Procedures: Pilot Randomized Controlled Trial

Background: The provision of acute medical care in rural and remote areas presents unique challenges for practitioners. Therefore, a tailored approach to training providers would prove beneficial. Although simulation-based medical education (SBME) has been shown to be effective, access to such training can be difficult and costly in rural and remote areas.

Objective: The aim of this study was to evaluate the educational efficacy of simulation-based training of an acute care procedure delivered remotely, using a portable, self-contained unit outfitted with off-the-shelf and low-cost telecommunications equipment (mobile telesimulation unit, MTU), versus the traditional face-to-face approach. A conceptual framework based on a combination of Kirkpatrick's Learning Evaluation Model and Miller's Clinical Assessment Framework was used.

Methods: A written procedural skills test was used to assess Miller's learning level, knows, at 3 points in time: preinstruction, immediately postinstruction, and 1 week later. To assess procedural performance (shows how), participants were video recorded performing chest tube insertion before and after hands-on supervised training. A modified Objective Structured Assessment of Technical Skills (OSATS) checklist and a Global Rating Scale (GRS) of operative performance were used by a blinded rater to assess participants' performance. Kirkpatrick's reaction was measured through subject completion of a survey on satisfaction with the learning experiences and an evaluation of training.

Results: A total of 69 medical students participated in the study. Students were randomly assigned to 1 of the following 3 groups: comparison (25/69, 36%), intervention (23/69, 33%), or control (21/69, 31%). For knows, as expected, no significant differences were found between the groups on written knowledge (posttest, P=.13). For shows how, no significant differences were found between the comparison and intervention groups on the procedural skills learning outcomes immediately after the training (OSATS checklist and GRS, P=1.00). However, significant differences were found for the control versus comparison groups (OSATS checklist, P<.001; GRS, P=.02) and the control versus intervention groups (OSATS checklist, P<.001; GRS, P=.01) on the pre- and postprocedural performance. For reaction, there were no statistically significant differences between the intervention and comparison groups on the satisfaction with learning items (P=.65 and P=.79) or the evaluation of the training (P=.79, P=.45, and P=.31).

Conclusions: Our results demonstrate that simulation-based training delivered remotely, applying our MTU concept, can be an effective way to teach procedural skills. Participants trained remotely in the MTU had comparable learning outcomes (shows how) to those trained face-to-face. Both groups received statistically significantly higher procedural performance scores than those in the control group. Participants in both instruction groups were equally satisfied with their learning and training (reaction). We believe that mobile telesimulation could be an effective way of providing expert mentorship and overcoming a number of barriers to delivering SBME in rural and remote locations.
(J Med Internet Res 2019;21(8):e14587) doi: 10.2196/14587

KEYWORDS
medical education; distributed medical education; simulation training; emergency medicine; rural health; remote-facilitation; assessment; chest tubes

Introduction

Challenges Accessing Simulation-Based Medical Education

The provision of acute care in rural and remote areas presents unique challenges. Skills related to high-acuity low-occurrence procedures and clinical encounters are particularly susceptible to degradation over time and are inadequately served through on-the-job experience alone [1]. Therefore, a systematic approach to training personnel for these procedures is required. In recent years, an increasing proportion of this training has made use of simulation-based modalities. Simulation-based medical education (SBME) has been shown to be an effective training approach because it can provide opportunities to practice infrequently encountered procedures [2-5] without compromising patient safety [6]. However, SBME often takes place in urban centers, and it can be difficult for rural and remote acute care practitioners to access these centers because of geographic, cost, and time constraints [7,8].

SBME delivered through technologies such as telesimulation and mobile simulation has been shown to be an effective means of training medical practitioners and has helped to address some of the above constraints [4,7-17]. However, these technologies bring challenges of their own. Telesimulation involves delivering SBME over the internet, but effective delivery of telesimulation training can be limited if the trainees are unable to access simulation equipment or an efficient training setup. Mobile simulation can address this constraint by delivering an immersive simulation environment in a purposefully designed unit. However, mobile simulation often involves bringing an expert to rural and remote sites to facilitate the session, which can prove expensive and prohibitive because of time constraints.

Through an iterative design process, our multidisciplinary group has developed an MTU that addresses many of the challenges to the delivery of SBME to rural and remote acute care practitioners. The intention is to deploy the MTU at a rural or remote location, where it would house the skills training session and support communication with an off-site, skilled mentor. Such a deployment would provide trainees with the appropriate simulation equipment, a standardized training environment, and access to an experienced mentor to guide the training. To our knowledge, this is one of the few units that combines telecommunication and mobile simulation to deliver such training.

A rigorous, theory-based, iterative approach was followed to develop the MTU and to evaluate the acceptability and feasibility of delivering training remotely using the unit. Details on the development of the MTU and training materials have been published elsewhere [18-23].

The objective of this study was to compare the educational efficacy of face-to-face versus remote delivery of educational content with respect to learners' perceptions and objective assessment of procedural performance.

Framework for Learning Assessment

This study uses a conceptual framework based on a combination of Kirkpatrick's Learning Evaluation Model [24] and Miller's Clinical Assessment Framework [25] to guide the assessment of the MTU. This model (Figure 1; adapted from Dubrowski et al [26]) is based on the work of Moore et al [27], who developed a framework "of an ideal approach to planning and assessing continuing medical education that is focused on achieving desired outcomes" (pg 3).
The new model incorporates Kirkpatrick's 4 levels, which represent a sequence of ways to evaluate a program, with Miller's assessment tools for each level of competence.

Figure 1. Framework for Learning Assessment, based on Kirkpatrick (left) and Miller (right). Adapted from Dubrowski et al [26].

The base of Kirkpatrick's model relates to subject reaction, measuring how participants react to or perceive program content. There is no direct correlation of this feature to a level on Miller's framework. The second level of the Kirkpatrick model, learning, corresponds to the bottom 3 levels of Miller's framework (knows, knows how, and shows how), whereas the third level of Kirkpatrick's framework, behavior, is closely related to the top of Miller's framework, does. Finally, the top level of Kirkpatrick's model, results, does not relate to Miller's framework. This study examines Kirkpatrick's reaction and learning, consisting of knows and shows how. We do not examine knows how because of anticipated challenges of subject retention and expected loss to follow-up during the study. Rather, we decided to measure the higher level, shows how, because we could evaluate the participants' performance of the procedure during the study. We do not examine Kirkpatrick's behavior and, consequently, do not examine Miller's does. We also do not examine Kirkpatrick's results, as these are assessments of practice in a clinical setting, and this study is limited to an experimental setting. This paper discusses the findings in relation to Kirkpatrick's reaction and learning (consisting of Miller's knows and shows how) levels.

Methods

Research Setting

This study was conducted at Memorial University of Newfoundland. Training of rural and remote acute care practitioners is of particular interest in the province, as 40% of the population lives in rural areas, and the province has a relatively small population (525,000) distributed across a large geographic area (405,000 km²). Acute care is delivered at a variety of health centers and hospitals across the province. These sites are staffed by physicians, nurses, and nurse practitioners with varying levels of experience. Access to SBME opportunities is often limited. The Health Research Ethics Board of Memorial University of Newfoundland approved this study.

The MTU consists of an inflatable rapid deployment tent (Figure 2), which is outfitted with the portable technology necessary to allow for 2-way communication between the trainees and the mentor: a laptop with communications software, a monitor, camera, speaker, and microphone, and a portable wireless internet hub. The mentor uses comparable software, a camera, speaker, and microphone to communicate with the trainees. Off-the-shelf and low-cost equipment was used to keep the design of the MTU accessible and practical. Both the trainees and the mentor have similar simulation supplies and setup to enable efficient demonstration and instruction (Figures 3 and 4). Studies by Jewer et al provide more information on the MTU [18,21]. The eventual goal was to deliver simulation-based training remotely through the use of a self-contained vehicle outfitted with the simulation equipment necessary for delivery of a number of scenarios.
However, for the purpose of our test-of-concept approach, a portable, rapid deployment tent was used.

Figure 2. The mobile telesimulation unit rapid deployment tent.

Figure 3. Overview of the setup for the mentor and the trainees in the mobile telesimulation unit.

Figure 4. The interior of the mobile telesimulation unit demonstrating the setup for procedural training.

Study Design

A randomized controlled trial design was followed. A total of 3 sessions were held to compare the learning outcomes of participants who received training remotely in the MTU with those of participants who received the same training face-to-face. To minimize variables affecting study outcomes, face-to-face training sessions also took place in the MTU space. A control group (ie, one that received no training) was included to show that the intervention group (ie, remote) was not inferior to the comparison group (ie, face-to-face), and that both instructional approaches are actually effective [28].

The sessions focused on teaching an important high-acuity low-occurrence procedure, chest tube insertion, using a low-fidelity setup: 3D-printed ribs, secured to a plexiglass stand, covered with low-cost simulated skin and subcutaneous tissue (Figure 4). Chest tube insertion was selected as a representative procedure because it is an essential skill in acute care settings requiring precision [29], and it is a multistep procedure amenable to objective scoring. The training sessions were 20 min long and consisted of simulation-based training, with deliberate hands-on practice and mentor feedback. Participants were randomly assigned to 1 of 3 groups: intervention, comparison, and control (a minimal sketch of such an assignment follows below).
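To make the allocation step concrete, here is a brief Python sketch of simple randomization into the 3 study arms. It is illustrative only: the paper does not describe the exact randomization mechanism used, the participant identifiers are hypothetical, and the realized group sizes (25, 23, and 21) show that allocation was not forced to be perfectly balanced as it is here.

```python
import random

ARMS = ("intervention", "comparison", "control")

def assign_groups(participant_ids, seed=2019):
    """Shuffle the participants, then deal them round-robin into the 3 arms."""
    rng = random.Random(seed)  # fixed seed only to make the sketch reproducible
    shuffled = list(participant_ids)
    rng.shuffle(shuffled)
    arms = {arm: [] for arm in ARMS}
    for i, pid in enumerate(shuffled):
        arms[ARMS[i % len(ARMS)]].append(pid)
    return arms

groups = assign_groups(f"P{i:02d}" for i in range(1, 70))  # 69 participants
print({arm: len(ids) for arm, ids in groups.items()})
# -> {'intervention': 23, 'comparison': 23, 'control': 23}
```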
Testing procedures were conducted before the training (pretest), after the training (posttest), and 1 week later (retention test). Figure 5 depicts the flow of the study procedure. A week before the procedural session, participants were emailed presession information consisting of a Web-based New England Journal of Medicine video demonstrating proper performance of the procedure, along with important details about chest tube insertion, including indications, contraindications, complications, and necessary equipment [30]. This was to help ensure that participants started with a similar base level of knowledge.

During the pretest, participants completed a questionnaire on demographic information, the number of times they had performed or witnessed a chest tube insertion before this session, their previous experience with SBME, and their previous experience with telemedicine. Next, participants completed a written procedural skills knowledge test comprising a number of chest tube procedure-specific questions. The demographic questionnaire and the procedural skills knowledge test were written components used to assess whether there were differences in baseline knowledge about the chest tube procedure within or between the groups at the start of the study. The procedural skills knowledge test was also used to measure learning after the session; this corresponds to the knows level of learning. These materials were reviewed by an experienced emergency medicine physician to determine if differences existed.

To measure shows how, during the pretest, participants were video recorded performing a chest tube insertion on a low-fidelity simulated model (Figure 6). A modified Objective Structured Assessment of Technical Skills (OSATS) checklist and a Global Rating Scale (GRS) of operative performance were used to assess procedural performance [31].

After the training session, during the posttest, participants in the intervention and comparison groups were asked to evaluate their satisfaction with learning and to evaluate the training; this corresponds to Kirkpatrick's reaction level of the learning framework. Participants also completed the written procedural skills knowledge test again (ie, knows). All participants were then once again video recorded performing a chest tube insertion (ie, shows how).

Furthermore, 1 week after the training session (retention test), the participants completed a questionnaire on their experiences with the procedure in the past week. They also completed the written procedural skills knowledge test again (ie, knows), and they were video recorded for the third time performing a chest tube insertion (ie, shows how).

An emergency medicine physician with 11 years of clinical emergency room experience used the modified OSATS checklist and GRS to assess the participants' performance on the video recordings. The reviewing physician was blinded to participants' identity and was unaware of the phase of the study (pretest, posttest, or retention test). Overall, 12% of the videos were randomly selected for review by a second experienced emergency medicine physician. The modified OSATS checklist and GRS scores were used as the primary indicators of learning outcomes (ie, shows how).

Figure 5. Study design.

Figure 6. Setup used in the video recording of the chest tube procedure (A) and an example of a completed chest tube insertion (B).

Participants

Participants provided informed consent before enrollment in the study. Medical students in their first and second year of training (approximately 80 students per cohort) were invited to participate. Participation was voluntary and was limited by the number of slots available at a scheduled data collection time (Multimedia Appendix 1). These medical students were novices in the chest tube procedure, and using such subjects with similar background knowledge and skills enabled us to more clearly measure learning.

Measures

Learning—Knows

To measure the knows dimension of learning, participants were asked a set of chest tube procedure-specific questions (Textbox 1); a brief illustrative scoring sketch follows the textbox.

Textbox 1. Procedural skills knowledge test questions (possible score: 15).
- Name 3 indications for chest tube placement.
- Name 3 contraindications to chest tube placement.
- Name 4 potential complications of chest tube placement.
- Name 5 essential pieces of equipment for chest tube placement.
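The maximum score of 15 in Textbox 1 is simply the sum of the item counts requested (3 + 3 + 4 + 5). As a sketch only (the paper does not publish its marking scheme, and the field names below are hypothetical), scoring one point per correctly named item could look like this:

```python
# Caps per question mirror Textbox 1 and sum to the stated maximum of 15.
MAX_ITEMS = {
    "indications": 3,
    "contraindications": 3,
    "complications": 4,
    "equipment": 5,
}
assert sum(MAX_ITEMS.values()) == 15

def score_knowledge_test(correct_counts):
    """correct_counts maps each question to the number of correct items named."""
    return sum(min(correct_counts.get(q, 0), cap) for q, cap in MAX_ITEMS.items())

# A participant who misses one contraindication scores 14 of 15.
print(score_knowledge_test(
    {"indications": 3, "contraindications": 2, "complications": 4, "equipment": 5}))
```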
Learning—Shows How

Participants' performance of the chest tube procedure was evaluated using a modified OSATS checklist to measure the shows how dimension of learning. The OSATS checklist was originally developed and validated to assess the performance of multiple surgical procedures at different stations [31]. It has since been used to assess the performance of a single surgical procedure [32]. Research has demonstrated that the OSATS has high reliability and construct validity for measuring technical abilities outside of the operating room [31].

This study used a modified OSATS checklist and a GRS of operative performance. The checklist consists of 10 items that are scored as done correctly or not (Textbox 2). For the purposes of this study, 1 item of the scale was removed because it was not relevant to our training scenario (ie, item #9; a Pleur-evac setup was not available to participants). The GRS is composed of 9 items, each measuring a different aspect of operative performance. Each item was graded on a 5-point Likert scale from 1, poor performance, to 5, good performance (Figure 7). Again, for the purposes of this study, 2 items of the scale were removed because they were not appropriate for the training scenario (the removed items were use of assistants and knowledge of instruments: there was no assistant in the study design, and knowledge of instruments implied that participants were asking for the right things or naming instruments correctly, which was not part of the study design). Thus, the maximum attainable GRS score is 35 points, and the minimum is 7 points. A brief tallying sketch appears after Figure 7.

Textbox 2. Checklist for chest tube insertion (not done, incorrect=0; done, correct=1).
- Injects local anesthetic
- Cuts skin with scalpel to subcutaneous tissue plane (no scything)
- Uses blunt dissection to enter chest cavity
- Enters pleural space above rib
- Checks position with digit before inserting chest tube
- Inserts chest tube safely using Kelly at the tip of the tube
- Inserts correct length of chest tube into chest
- Secures chest tube to chest wall with silk or nylon
- Connects tube and secures to drainage system with tape
- Applies airtight dressing

Figure 7. Global Rating Scale of operative performance.
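To make the scoring arithmetic explicit, the following is a small illustrative sketch of how the two instruments could be tallied. The checklist item keys are shorthand for Textbox 2 rather than the instrument's exact wording, and the GRS check reflects the 7 retained items described above.

```python
# Modified OSATS checklist: 10 binary items, so totals range from 0 to 10.
CHECKLIST_ITEMS = [
    "injects_local_anesthetic", "cuts_skin_to_subcutaneous_plane",
    "blunt_dissection_into_chest", "enters_pleural_space_above_rib",
    "digital_check_before_insertion", "kelly_at_tube_tip",
    "correct_tube_length", "secures_tube_to_chest_wall",
    "connects_and_tapes_drainage", "applies_airtight_dressing",
]

def checklist_total(done):
    """done maps each item to True (done, correct) or False (not done/incorrect)."""
    return sum(1 for item in CHECKLIST_ITEMS if done.get(item, False))

def grs_total(ratings):
    """ratings: the 7 retained GRS items, each rated 1 (poor) to 5 (good)."""
    if len(ratings) != 7 or not all(1 <= r <= 5 for r in ratings):
        raise ValueError("expected 7 Likert ratings between 1 and 5")
    return sum(ratings)  # bounded below by 7 and above by 35, as stated

print(checklist_total({item: True for item in CHECKLIST_ITEMS}))  # -> 10
print(grs_total([5, 4, 4, 5, 3, 4, 5]))                           # -> 30
```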
Reaction

To measure participants' reactions to the training, participants in the remote and comparison groups were asked to evaluate the training by indicating whether they thought the MTU could play an important role in rural and remote medical training, how satisfied they were with their overall experience in the MTU, and whether they would recommend the MTU approach to their colleagues. Participants were also asked to indicate their satisfaction with the learning experiences. These measures were adapted from the National League of Nursing (NLN) Student Satisfaction and Self-Confidence in Learning scales [33]. These NLN scales have been widely used and have been found to have sufficient reliability and validity to be used in education research [33,34].

Data Analysis

Participants were assigned a unique identifier, and this was used to anonymize the data with respect to training group before analysis. Data analysis was completed using SPSS version 25. Descriptive statistics were computed for the demographic variables.

Learning—Knows

Because our data did not enable us to use a parametric repeated measures analysis of variance to analyze the pretest, posttest, and retention written procedural skills tests, we created 2 new variables (pretest minus posttest score and posttest minus retention test score). The Kruskal-Wallis test (the nonparametric equivalent) was then used to compare participants' performance on the procedural skills test between the groups.

Learning—Shows How

There was acceptable interrater reliability between the 2 raters who evaluated the performance of the chest tube procedure: an excellent intraclass correlation coefficient (ICC) of 0.909 was found for the GRS, and a good ICC of 0.757 was found for the checklist. Again, limited to nonparametric techniques, we created 2 new variables: 1 to calculate the difference between the pre- and posttraining checklist and GRS scores, and the second to calculate the difference between the posttraining and retention checklist and GRS scores. A Kruskal-Wallis test was then used to compare pretest, posttest, and retention test scores for the 3 groups (ie, intervention, comparison, and control) on the modified OSATS checklist and GRS.

Reaction

The Mann-Whitney U test (the nonparametric equivalent) was used to compare the intervention and comparison groups on satisfaction with learning and their evaluation of the training.

For all tests, a P value less than .05 was considered statistically significant.
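The analyses above were run in SPSS; purely as an illustration of the same logic, the following Python sketch builds the difference variables and applies the Kruskal-Wallis and Mann-Whitney U tests using SciPy, with synthetic score arrays standing in for the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)  # synthetic stand-in for the study data
sizes = {"intervention": 25, "comparison": 23, "control": 21}

# Checklist-style scores (0-10) at two time points per group.
pre = {g: rng.integers(0, 7, size=n) for g, n in sizes.items()}
post = {g: np.clip(s + rng.integers(0, 5, size=s.size), 0, 10)
        for g, s in pre.items()}

# Difference variable (pretest minus posttest), compared across the 3 groups.
diffs = [pre[g] - post[g] for g in sizes]
h_stat, p_val = stats.kruskal(*diffs)
print(f"Kruskal-Wallis on difference scores: H={h_stat:.2f}, P={p_val:.3f}")

# Reaction items (1-5 Likert), intervention versus comparison only.
satisfaction_int = rng.integers(3, 6, size=sizes["intervention"])
satisfaction_cmp = rng.integers(3, 6, size=sizes["comparison"])
u_stat, p_val = stats.mannwhitneyu(satisfaction_int, satisfaction_cmp)
print(f"Mann-Whitney U on a reaction item: U={u_stat:.1f}, P={p_val:.3f}")
```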
Results

In total, 69 medical students participated in the study across the 3 different sessions (Table 1). Participants were randomly assigned to their study group: intervention, comparison, or control.

Participants' Experience

The groups were very similar—mean age in the low to mid-20s and relatively equally mixed between the first and second year of medical school. If being in the first or second year of medical school had any impact on the results, it would probably have negatively influenced the intervention group, because a slightly higher percentage of participants in this group were in their first year. However, training on chest tube insertion is not part of the standard curriculum in the first 2 years of medical school, and most participants indicated that they had never performed or even witnessed a chest tube placement before; therefore, the presession materials and this training were the first exposures to the skill for most participants. The majority had participated in low-fidelity SBME using task trainers before, between 1 and 10 times, and the majority had never received training using telemedicine.

Table 1. Participants' experience.

Characteristics | Intervention group (n=25) | Comparison group (n=23) | Control group (n=21)
Age (years), mean | 25 | 23 | 21
Level of medical training, n (%)
  1st year | 16 (64) | 6 (26) | 9 (43)
  2nd year | 9 (36) | 17 (74) | 12 (57)
Performed a chest tube insertion before, n (%)
  Never | 24 (96) | 22 (96) | 20 (95)
  Yes | 1 (4) | 1 (4) | 2 (5)
Witnessed a chest tube insertion before, n (%)
  Never | 22 (88) | 20 (87) | 15 (71)
  Yes | 3 (12) | 3 (13) | 6 (29)
Participated in simulation-based medical education(a), n (%)
  Never | 2 (8) | 5 (22) | 4 (19)
  1-10 times | 21 (84) | 18 (78) | 15 (71)
  >10 times | 2 (8) | 0 (0) | 2 (9.5)
Past exposure to telemedicine, n (%)
  Never | 25 (100) | 18 (78) | 19 (91)
  At least quarterly | 0 (0) | 5 (22) | 2 (10)

(a) Low-fidelity task trainers (eg, suturing pads, airway models, and chest tube placement).

Similarly, the retention test survey, assessing exposure to chest tube insertions in the week since the training, showed no real differences between the groups. Most had not performed a chest tube since the training, witnessed a chest tube insertion, or received any training or done any further reading on chest tube insertions (Table 2).

Table 2. Questionnaire responses at the time of the retention test (1 person from the comparison group and 2 from the control group did not complete the retention test).

Characteristics | Intervention group (n=25): No, n (%) | Yes, n (%) | Comparison group (n=22): No, n (%) | Yes, n (%) | Control group (n=19): No, n (%) | Yes, n (%)
Performed a chest tube in the past week | 23 (92) | 2 (8) | 22 (100) | 0 (0) | 19 (100) | 0 (0)
Witnessed a chest tube in the past week | 25 (100) | 0 (0) | 22 (100) | 0 (0) | 19 (100) | 0 (0)
Received any training or done further reading on chest tube insertions in the past week | 24 (96) | 1 (4) | 21 (96) | 1 (4) | 17 (90) | 2 (11)

Learning—Knows

A Kruskal-Wallis test was used to compare the results of the procedural skills knowledge test. This was a brief written test completed after receiving the presession materials but before the training session. The mean test scores (out of a possible score of 15) and SDs were 11.52 (2.07) for the intervention group, 10.91 (2.02) for the comparison group, and 10.76 (2.56) for the control group. There was no significant difference between the groups before starting the session (χ²=1.9; P=.39). This indicates that the participants in the 3 groups had similar levels of written knowledge about chest tube insertion before the training. Subsequent Kruskal-Wallis tests revealed that there were no significant differences between the groups from the pretest to the posttest (χ²=4.1; P=.13) or from the posttest to the retention test (χ²=1.6; P=.46; Multimedia Appendix 2).

Learning—Shows How

A total of 204 videos of procedural performance were included in the analysis, with 3 videos per participant (3 participants did not complete the retention test). Results of the modified OSATS checklist and GRS assessments for the 3 groups (pretraining, posttraining, and 1 week after the training) are shown in Multimedia Appendix 3. Box plots of the scores are shown in Figure 8.

A Kruskal-Wallis test revealed that there were statistically significant differences between the groups on the pre- and post-OSATS checklist and GRS scores (Multimedia Appendix 4). Pairwise comparisons were performed using the Dunn procedure [35] with a Bonferroni correction for multiple comparisons. This post hoc analysis revealed statistically significant differences in median OSATS checklist and GRS score differences between the control and comparison groups and between the control and intervention groups, but not between the comparison and intervention groups. There was no difference between the posttest and retention scores.
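The post hoc step was likewise run in SPSS. A hedged Python equivalent, assuming the third-party scikit-posthocs package and toy difference scores in place of the study data, would be:

```python
import pandas as pd
import scikit_posthocs as sp  # third-party package implementing Dunn's test

# Toy pre-post difference scores standing in for the study's data.
df = pd.DataFrame({
    "score_diff": [3.5, 3.7, 0.2, 3.3, 3.1, 0.4, 3.6, 3.2, 0.1],
    "group": ["intervention", "comparison", "control"] * 3,
})

# Pairwise Dunn tests with Bonferroni-adjusted P values, as described above.
pairwise_p = sp.posthoc_dunn(df, val_col="score_diff",
                             group_col="group", p_adjust="bonferroni")
print(pairwise_p)  # symmetric matrix of adjusted pairwise P values
```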
Figure 8. Box plots of the modified Objective Structured Assessment of Technical Skills checklist and GRS scores. GRS: Global Rating Scale.

Reaction

Satisfaction with learning and evaluation of the training measures were used to examine participants' reaction to the training.

Satisfaction with Learning

The results of the satisfaction with learning questions (adapted from the NLN scales) that were asked in the posttest for the intervention and comparison groups are shown in Table 3. On average, participants rated the teaching methods as helpful and effective in both the intervention and comparison groups, with scores of 4.52 and 4.65 out of 5, respectively. Averaged responses also indicated that they enjoyed how the teacher taught the session in the intervention and comparison groups, with scores of 4.40 and 4.52 out of 5, respectively. A Mann-Whitney U test revealed that there were no statistically significant differences between the intervention and comparison groups on these items.

Table 3. Self-reported learning—scale of 1 (strongly disagree) to 5 (strongly agree).

Measurement item (satisfaction with learning) | Intervention group (n=25), mean (SD) | Comparison group (n=23), mean (SD) | Mann-Whitney U test: U | z | P value
The teaching methods used were helpful and effective. | 4.52 (0.71) | 4.65 (0.49) | 306.5 | 0.47 | .65
I enjoyed how the teacher taught the session. | 4.40 (0.82) | 4.52 (0.59) | 299.0 | 0.27 | .79

Participant Evaluation of Training

Participants in the intervention and comparison groups were asked to evaluate their experiences with the training session, which took place physically in the MTU space. Participants indicated that the MTU could play an important role in rural medical training (4.32 and 4.48 out of 5 for the intervention and comparison groups, respectively), that they were satisfied with their overall experience in the MTU (4.32 and 4.43 out of 5, respectively), and that they would recommend the MTU to their colleagues for SBME (4.32 and 4.43 out of 5, respectively). A Mann-Whitney U test revealed that there were no statistically significant differences between the intervention and comparison groups on any of these questions (Table 4).

Table 4. Participants' evaluation of training modality—scale of 1 (strongly disagree) to 5 (strongly agree).

Evaluation of training modality | Intervention group (n=25), mean (SD) | Comparison group (n=23), mean (SD) | Mann-Whitney U test or t test: U | z | P value
Do you think the MTU(a) could play an important role in rural medical training? | 4.32 (1.11) | 4.48 (0.51) | 276.0 | −0.27 | .79
How satisfied are you with your overall experience in the MTU? | 4.32 (0.56) | 4.43 (0.59) | 319.5 | 0.38 | .45
Would you recommend the MTU to your colleagues for simulation-based medical training? | 4.32 (0.56) | 4.43 (0.73) | 331.0 | 1.02 | .31

(a) MTU: mobile telesimulation unit.

Discussion
Principal Findings

Using a conceptual framework based on Kirkpatrick's and Miller's works [24,25], we examined learning at the knows and shows how levels and also studied the reactions of the participants to the training. We found this framework useful in helping to ensure a thorough evaluation of the training delivered using an MTU. The results from this study indicate comparable learning (knows and shows how) and reactions between participants who received the procedural skills training remotely and those who received the training face-to-face.

Consistent with the literature, we found that subjects' knowledge level (knows) remained unchanged after the training. This was expected, as there are 2 distinct key areas of knowledge with respect to competent procedural skills performance: one related to factual background information (knows) and the second being the ability to complete all necessary steps (shows how). Our study focused on shows how, as the ability to physically and capably complete a procedural skill relies on deliberate practice of that skill [36]. Nevertheless, it was important to measure procedural skill knowledge (knows), as it enabled us to ensure there was a consistent knowledge level across all groups. This is particularly important, as procedural skills training sessions aim to enable participants' performance of the procedure (ie, shows how), which is a higher level than knows.

With respect to the shows how learning, our study supports previous findings related to telesimulation and mobile simulation [7,8,37,38]. We found that the learning outcomes for the participants who received training remotely through the MTU, as assessed using the modified OSATS checklist and GRS, are comparable with those of the face-to-face simulation-based training group. Furthermore, participants who received training, either remotely or face-to-face, received statistically significantly better scores than those who did not receive instruction (ie, the control group). The average scores on the checklist more than doubled from the pretests to the posttests for the intervention (from 3 to 6.54) and comparison groups (from 2.96 to 6.22). However, the increase in the scores for the control group was negligible, increasing by only 0.33 points (from 2.91 to 3.24). This indicates that training resulted in similar acquisition of skills-based knowledge for both the remote training and face-to-face groups.

Retention tests indicated that there were no statistically significant differences in skills retention between all 3 of the groups.
On average, the differences between the posttest and retention test scores on the modified OSATS checklist and GRS either stayed the same or decreased slightly for all groups. From this, we conclude that the manner of instructional delivery (either remote or face-to-face) does not impact retention.

In addition to the comparable learning outcomes, participants had similarly high levels of satisfaction with learning in the MTU. Rating the teaching methods as helpful and effective, participants indicated that, on average, they enjoyed the instruction during the session. This is encouraging, as satisfaction with the training, in the case of the MTU concept facilitated through a local healthcare facility, could influence commitment and readiness to transfer learning to the workplace at their own site [39,40].

Overall, participants evaluated their training experience with the MTU as positive. There were no statistically significant differences in evaluations between those who received training remotely and those who received it face-to-face. Participants felt that the MTU could play an important role in rural medical training, indicated that they were satisfied with their overall experience in the MTU, and would recommend the MTU to their colleagues for SBME.

The primary limitation of this study is the relatively small sample size and the inclusion of research subjects from a single institution. However, several things help make the study more robust: (1) the inclusion of a control group; (2) the study design, including pretest, posttest, and retention tests; and (3) the triangulation of the results of the modified OSATS checklist and GRS scores with 2 blinded raters demonstrating favorable interrater reliability, which provides reassurance of the robustness of the study results [41]. The second limitation is that the physician who was involved in the design of the MTU is the one who led the training sessions for all subjects. It would be interesting to examine the impact on training if a physician not directly involved in the study delivered the training.

There are a number of implications for future SBME and research. First, there is a shift from the delivery of medical education in large urban academic centers toward distributed medical education. Technologies such as video conferencing and digital library collections have enabled this advancement and are tied to social, health, and economic benefits [42,43]. There is potential for the MTU concept to play a role in this area, and further research is needed to determine how best to incorporate this concept into practice. Here, it would be particularly important to consider 2 significant time challenges faced by rural practitioners: the maintenance of a busy clinical practice, often with limited backup, and the invaluable contributions they make in teaching a variety of learners, often with limited resources. A collaborative approach, drawing on local expertise, along with distance-guided mentorship, could facilitate valuable advances. The second and related implication is the potential for MTU-enabled cost savings for trainees and mentors. Cost is often a major barrier to accessing SBME, but it is often not considered in SBME research [44]. Traditional delivery models bring a course, and its associated expenses, to a particular site, incurring related costs of travel, equipment, mentors, and time; the alternative is that the rural practitioner must travel to a central location to train and is left to address the challenges of patient coverage, time off, and expenses relating to the training and travel. By making cost an important consideration in the development of the MTU, the intention is to make this novel approach more accessible. An economic impact evaluation relating to the use of the MTU in practice is recommended. Third, further studies should be conducted to validate the utility and effectiveness of the MTU concept for skills training that is important to the practice needs of the target audience. Through collaborative discussion and targeted needs assessments with rural practitioners, the specific clinical and educational needs would best be determined. This would enable the examination of Kirkpatrick's behavior (and Miller's does), as well as the results level of the learning evaluation model. As a broader range of skills sessions are delivered remotely through mobile telesimulation, opportunities to study validity and reliability will become more readily available. Fourth, further exploration of the skill and scenario characteristics that make them amenable to the remote-mentoring approach to training is necessary, including the ability to observe key performance features and maneuvers. This study demonstrated equivalent learning outcomes on the assessment of procedural skills for chest tube insertion; this should be further explored for other procedural skills. Fifth, the training sessions for this study were conducted in areas with reliable, high-speed internet access; however, rural and remote areas may have limited internet connectivity, which would impede the delivery of remote training and may particularly affect how learners perceive and rate their remote mentoring experience.
Future research will explore the use of purpose-built, efficient communications systems designed for low bandwidth. Sixth, as proposed by others [3], future research should compare different forms of simulation. Using mobile telesimulation, this would involve comparing training delivered remotely in an MTU using simulators of different levels of fidelity. Finally, the unavailability of mentors comfortable with simulation-based teaching delivered through telecommunication may present a barrier to expanding this novel approach to SBME [45]. Therefore, the use of the MTU for the remote assessment of skills should also be examined, especially in domains that are poorly covered by traditional written and oral examinations.

Conclusions

SBME is a well-established training approach, particularly for high-acuity, low-occurrence procedures and scenarios. Practitioners located in rural and remote locations particularly stand to benefit, as they face a number of unique challenges with respect to simulation resources, including geographic, cost, and time constraints. This study describes an evaluation of educational efficacy comparing remote versus face-to-face mentoring for procedural skills training. To our knowledge, this study is one of a few to develop and assess SBME combining the concepts of telesimulation and mobile simulation.

We used a conceptual framework based on the combination of Kirkpatrick's Learning Evaluation Model and Miller's Clinical Assessment Framework to guide the study.
We found that training delivered remotely through the MTU is an effective way to conduct a skills session. Those who were remotely trained had comparable learning outcomes (shows how) to subjects who received face-to-face instruction. Participants were also satisfied (reaction) with their learning and training experiences. Such remote mentor-led SBME expands opportunities for health practitioners to more easily access the training and mentor-guided practice that they require. Future investigation is needed to examine the utility of the MTU approach in practice, with different skills and levels of fidelity, and as a means to provide remote assessment of skills.

Acknowledgments

This project has been supported by an Ignite grant awarded by the Research and Development Corporation of Newfoundland and Labrador. The authors thank the following organizations at the Memorial University of Newfoundland: the Tuckamore Simulation Research Collaborative for research support and advice, the Clinical Learning and Simulation Center for equipment and operational support, and Memorial University of Newfoundland MED 3D for the provision of simulation models. The authors also thank the following people for their assistance during this research project: Dr Chrystal Horwood for clinical expertise in video review; Kristopher Hoover for technical assistance and involvement in early MTU prototype development; research assistants Megan Pollard, Samantha Noseworthy, Sarah Boyd, and Krystal Bursey; Tate Skinner (technical support); Joanne Doyle (Discipline of Emergency Medicine senior secretary); and Memorial University's Emergency Medicine Interest Group.

Conflicts of Interest

None declared.

Multimedia Appendix 1
Consolidated Standards of Reporting Trials flow diagram.
[PDF File (Adobe PDF File), 103KB]

Multimedia Appendix 2
Differences between pre, post, and retention procedural skills knowledge tests (written).
[PDF File (Adobe PDF File), 113KB]

Multimedia Appendix 3
Modified Objective Structured Assessment of Technical Skills checklist and Global Rating Scale assessment of chest tube performance, mean (standard deviation) reported.
[PDF File (Adobe PDF File), 80KB]

Multimedia Appendix 4
Differences between pre, post, and retention-modified Objective Structured Assessment of Technical Skills checklist and Global Rating Scale test scores.
[PDF File (Adobe PDF File), 137KB]

Multimedia Appendix 5
CONSORT-EHEALTH checklist (V 1.6.1).
[PDF File (Adobe PDF File), 2MB]

References

1. Williams JM, Ehrlich PF, Prescott JE. Emergency medical care in rural America. Ann Emerg Med 2001 Sep;38(3):323-327. [doi: 10.1067/mem.2001.115217] [Medline: 11524654]
2. Roy KM, Miller MP, Schmidt K, Sagy M. Pediatric residents experience a significant decline in their response capabilities to simulated life-threatening events as their training frequency in cardiopulmonary resuscitation decreases. Pediatr Crit Care Med 2011 May;12(3):e141-e144. [doi: 10.1097/PCC.0b013e3181f3a0d1] [Medline: 20921919]
3. Cook DA, Hatala R, Brydges R, Zendejas B, Szostek JH, Wang AT, et al. Technology-enhanced simulation for health professions education: a systematic review and meta-analysis. J Am Med Assoc 2011 Sep 7;306(9):978-988. [doi: 10.1001/jama.2011.1234] [Medline: 21900138]
4. Scott DJ, Dunnington GL. The new ACS/APDS skills curriculum: moving the learning curve out of the operating room. J Gastrointest Surg 2008 Feb;12(2):213-221. [doi: 10.1007/s11605-007-0357-y] [Medline: 17926105]
5. Issenberg SB, McGaghie WC, Petrusa ER, Lee GD, Scalese RJ. Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review. Med Teach 2005 Jan;27(1):10-28. [doi: 10.1080/01421590500046924] [Medline: 16147767]
6. Ziv A, Wolpe PR, Small SD, Glick S. Simulation-based medical education: an ethical imperative. Acad Med 2003 Aug;78(8):783-788. [doi: 10.1097/01.SIH.0000242724.08501.63] [Medline: 12915366]
7. Rosen MA, Hunt EA, Pronovost PJ, Federowicz MA, Weaver SJ. In situ simulation in continuing education for the health care professions: a systematic review. J Contin Educ Health Prof 2012;32(4):243-254. [doi: 10.1002/chp.21152] [Medline: 23280527]
8. Ikeyama T, Shimizu N, Ohta K. Low-cost and ready-to-go remote-facilitated simulation-based learning. Simul Healthc 2012 Feb;7(1):35-39. [doi: 10.1097/SIH.0b013e31822eacae] [Medline: 22228281]
9. Bischof JJ, Panchal AR, Finnegan GI, Terndrup TE. Creation and validation of a novel mobile simulation laboratory for high fidelity, prehospital, difficult airway simulation. Prehosp Disaster Med 2016 Oct;31(5):465-470. [doi: 10.1017/S1049023X16000534] [Medline: 27530816]
10. Ullman E, Kennedy M, di Delupis FD, Pisanelli P, Burbui AG, Cussen M, et al. The Tuscan mobile simulation program: a description of a program for the delivery of in situ simulation training. Intern Emerg Med 2016 Sep;11(6):837-841. [doi: 10.1007/s11739-016-1401-2] [Medline: 26861702]
11. Xafis V, Babidge W, Field J, Altree M, Marlow N, Maddern G. The efficacy of laparoscopic skills training in a mobile simulation unit compared with a fixed site: a comparative study. Surg Endosc 2013 Jul;27(7):2606-2612. [doi: 10.1007/s00464-013-2798-6] [Medline: 23389073]
12. Weinstock PH, Kappus LJ, Garden A, Burns JP. Simulation at the point of care: reduced-cost, in situ training via a mobile cart. Pediatr Crit Care Med 2009 Mar;10(2):176-181. [doi: 10.1097/PCC.0b013e3181956c6f] [Medline: 19188878]
13. Ireland S, Gray T, Farrow N, Danne P, Flanagan B. Rural mobile simulation-based trauma team training: an innovative educational platform. Int Trauma Care 2006;16:6-12 [FREE Full text]
14. Ohta K, Kurosawa H, Shiima Y, Ikeyama T, Scott J, Hayes S, et al. The effectiveness of remote facilitation in simulation-based pediatric resuscitation training for medical students. Pediatr Emerg Care 2017 Aug;33(8):564-569. [doi: 10.1097/PEC.0000000000000752] [Medline: 27261952]
15. Mikrogianakis A, Kam A, Silver S, Bakanisi B, Henao O, Okrainec A, et al. Telesimulation: an innovative and effective tool for teaching novel intraosseous insertion techniques in developing countries. Acad Emerg Med 2011 Apr;18(4):420-427 [FREE Full text] [doi: 10.1111/j.1553-2712.2011.01038.x] [Medline: 21496146]
16. Schulman CI, Levi J, Sleeman D, Dunkin B, Irvin G, Levi D, et al. Are we training our residents to perform open gall bladder and common bile duct operations? J Surg Res 2007 Oct;142(2):246-249. [doi: 10.1016/j.jss.2007.03.073] [Medline: 17631907]
17. Strongwater AM. Transition to the eighty-hour resident work schedule. J Bone Joint Surg Am 2003 Jun;85(6):1170-1172. [doi: 10.2106/00004623-200306000-00048] [Medline: 12784026]
18. Jewer J, Dubrowski A, Dunne C, Hoover K, Smith A, Parsons M. Piloting a mobile tele-simulation unit to train rural and remote emergency health care providers. In: Wickramasinghe N, Bodendorf F, editors. Delivering Superior Health and Wellness Management with IoT and Analytics. New York: Springer; 2019.
19. Parsons M, Wadden K, Pollard M, Dubrowski A, Smith A. P098: development and evaluation of a mobile simulation lab with acute care telemedicine support. Can J Emerg Med 2016 Jun 2;18(S1):S111. [doi: 10.1017/cem.2016.274]
20. Parsons M, Smith A, Hoover K, Jewer J, Noseworthy S, Pollard M, et al. P100: iterative prototype development of a mobile tele-simulation unit for remote training: an update. Can J Emerg Med 2017 May 15;19(S1):S112. [doi: 10.1017/cem.2017.302]
21. Jewer J, Dubrowski A, Hoover K, Smith A, Parsons M. Development of a mobile tele-simulation unit prototype for training of rural and remote emergency health care providers. In: Proceedings of the 51st Hawaii International Conference on System Sciences. 2018 Presented at: HICSS'18; January 3-6, 2018; Hawaii, United States p. 2894-2903 URL: https://aisel.aisnet.org/hicss-51/hc/ict_for_health_equity/2/ [doi: 10.24251/HICSS.2018.367]
22. Parsons M, Smith A, Rogers P, Hoover K, Pollard M, Dubrowski A. Outcomes of prototype development cycle for a mobile simulation lab with acute care telemedicine support - work in progress. In: 17th International Meeting on Simulation in Healthcare. 2017 Presented at: IMSH'17; January 26-30, 2017; Orlando, FL.
23. Dunne C, Jewer J, Parsons M. P039: application of the Delphi method to refine key components in the iterative development of a mobile tele-simulation unit (MTU). Can J Emerg Med 2018 May 11;20(S1):S70-S71. [doi: 10.1017/cem.2018.237]
24. Kirkpatrick DL, Kirkpatrick JD. Evaluating Training Programs: The Four Levels. Second Edition. San Francisco: Berrett-Koehler Publishers; 1998.
25. Miller GE. The assessment of clinical skills/competence/performance. Acad Med 1990 Sep;65(9 Suppl):S63-S67. [doi: 10.1097/00001888-199009000-00045] [Medline: 2400509]
26. Dubrowski A, Morin M. Evaluating pain education programs: an integrated approach. Pain Res Manag 2011;16(6):407-410 [FREE Full text] [doi: 10.1155/2011/320617] [Medline: 22184548]
27. Moore Jr DE, Green JS, Gallis HA. Achieving desired results and improved outcomes: integrating planning and assessment throughout learning activities. J Contin Educ Health Prof 2009;29(1):1-15. [doi: 10.1002/chp.20001] [Medline: 19288562]
28. Greene CJ, Morland LA, Durkalski VL, Frueh BC. Noninferiority and equivalence designs: issues and implications for mental health research. J Trauma Stress 2008 Oct;21(5):433-439 [FREE Full text] [doi: 10.1002/jts.20367] [Medline: 18956449]
29. Friedrich M, Bergdolt C, Haubruck P, Bruckner T, Kowalewski K, Müller-Stich BP, et al. App-based serious gaming for training of chest tube insertion: study protocol for a randomized controlled trial. Trials 2017 Dec 6;18(1):56 [FREE Full text] [doi: 10.1186/s13063-017-1799-5] [Medline: 28166840]
30. Dev SP, Nascimiento Jr B, Simone C, Chien V. Videos in clinical medicine. Chest-tube insertion. N Engl J Med 2007 Oct 11;357(15):e15. [doi: 10.1056/NEJMvcm071974] [Medline: 17928590]
31. Reznick R, Regehr G, MacRae H, Martin J, McCulloch W. Testing technical skill via an innovative 'bench station' examination. Am J Surg 1997 Mar;173(3):226-230. [doi: 10.1016/s0002-9610(97)89597-9] [Medline: 9124632]
32. Haubruck P, Nickel F, Ober J, Walker T, Bergdolt C, Friedrich M, et al. Evaluation of app-based serious gaming as a training method in teaching chest tube insertion to medical students: randomized controlled trial. J Med Internet Res 2018 Dec 21;20(5):e195 [FREE Full text] [doi: 10.2196/jmir.9956] [Medline: 29784634]
33. National League for Nursing. 2005. Descriptions of available instruments URL: http://www.nln.org/professional-development-programs/research/tools-and-instruments/descriptions-of-available-instruments
34. Franklin AE, Burns P, Lee CS. Psychometric testing on the NLN student satisfaction and self-confidence in learning, simulation design scale, and educational practices questionnaire using a sample of pre-licensure novice nurses. Nurse Educ Today 2014 Oct;34(10):1298-1304. [doi: 10.1016/j.nedt.2014.06.011] [Medline: 25066650]
35. Dunn OJ. Multiple comparisons using rank sums. Technometrics 1964 Aug;6(3):241-252. [doi: 10.2307/1266041]
36. Ericsson KA. Deliberate practice and acquisition of expert performance: a general overview. Acad Emerg Med 2008 Nov;15(11):988-994 [FREE Full text] [doi: 10.1111/j.1553-2712.2008.00227.x] [Medline: 18778378]
37. Treloar D, Hawayek J, Montgomery JR, Russell W, Medical Readiness Trainer Team. On-site and distance education of emergency medicine personnel with a human patient simulator. Mil Med 2001 Nov;166(11):1003-1006. [doi: 10.1093/milmed/166.11.1003] [Medline: 11725312]
38. Okrainec A, Vassiliou M, Kapoor A, Pitzul K, Henao O, Kaneva P, et al. Feasibility of remote administration of the fundamentals of laparoscopic surgery (FLS) skills test. Surg Endosc 2013 Nov;27(11):4033-4037. [doi: 10.1007/s00464-013-3048-7] [Medline: 24018759]
39. Mansour JB, Naji A, Leclerc A. The relationship between training satisfaction and the readiness to transfer learning: the mediating role of normative commitment. Sustainability 2017 May 16;9(5):834. [doi: 10.3390/su9050834]
40. Lim DH, Morris ML. Influence of trainee characteristics, instructional satisfaction, and organizational climate on perceived learning and training transfer. Hum Resour Dev Q 2006;17(1):85-115. [doi: 10.1002/hrdq.1162]
41. Lineberry M, Walwanis M, Reni J. Comparative research on training simulators in emergency medicine: a methodological review. Simul Healthc 2013 Aug;8(4):253-261. [doi: 10.1097/SIH.0b013e31828715b1] [Medline: 23508094]
42. Ellaway R, Bates J. Distributed medical education in Canada. Can Med Educ J 2018 Mar;9(1):e1-e5 [FREE Full text] [Medline: 30140329]
43. Lemky K, Gagne P, Konkin J, Stobbe K, Fearon G, Blom S, et al. A review of methods to assess the economic impact of distributed medical education (DME) in Canada. Can Med Educ J 2018 Mar;9(1):e87-e99 [FREE Full text] [Medline: 30140340]
44. Zendejas B, Wang AT, Brydges R, Hamstra SJ, Cook DA. Cost: the missing outcome in simulation-based medical education research: a systematic review. Surgery 2013 Feb;153(2):160-176. [doi: 10.1016/j.surg.2012.06.025] [Medline: 22884087]
45. Hayden EM, Navedo DD, Gordon JA. Web-conferenced simulation sessions: a satisfaction survey of clinical simulation encounters via remote supervision. Telemed J E Health 2012 Sep;18(7):525-529. [doi: 10.1089/tmj.2011.0217] [Medline: 22827475]

Abbreviations

GRS: Global Rating Scale
ICC: intraclass correlation coefficient
MTU: mobile telesimulation unit
NLN: National League of Nursing
OSATS: Objective Structured Assessment of Technical Skills
SBME: simulation-based medical education
Edited by G Eysenbach; submitted 06.05.19; peer-reviewed by M Reade, C Knopp; comments to author 20.06.19; revised version received 04.07.19; accepted 05.07.19; published 06.08.19

Please cite as:
Jewer J, Parsons MH, Dunne C, Smith A, Dubrowski A
Evaluation of a Mobile Telesimulation Unit to Train Rural and Remote Practitioners on High-Acuity Low-Occurrence Procedures: Pilot Randomized Controlled Trial
J Med Internet Res 2019;21(8):e14587
URL: http://www.jmir.org/2019/8/e14587/
doi: 10.2196/14587

©Jennifer Jewer, Michael H Parsons, Cody Dunne, Andrew Smith, Adam Dubrowski. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 06.08.2019. This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.

(J Med Internet Res 2019;21(8):e14587) doi: 10.2196/14587

KEYWORDS
medical education; distributed medical education; simulation training; emergency medicine; rural health; remote-facilitation; assessment; chest tubes

Introduction

Challenges Accessing Simulation-Based Medical Education

The provision of acute care in rural and remote areas presents unique challenges. Skills related to high-acuity low-occurrence procedures and clinical encounters are particularly susceptible to degradation over time and are inadequately served through on-the-job experience alone [1]. Therefore, a systematic approach to training personnel for these procedures is required. In recent years, an increasing proportion of this training has made use of simulation-based modalities. Simulation-based medical education (SBME) has been shown to be an effective training approach because it can provide opportunities to practice infrequently encountered procedures [2-5] without compromising patient safety [6]. However, SBME often takes place in urban centers, and it can be difficult for rural and remote acute care practitioners to access these centers because of geographic, cost, and time constraints [7,8].

SBME delivered through technologies such as telesimulation and mobile simulation has been shown to be an effective means of training medical practitioners and has helped to address some of the above constraints [4,7-17]. However, these technologies come with their own challenges. Telesimulation involves delivering SBME over the internet, but effective delivery of telesimulation training can be limited if the trainees are unable to access simulation equipment or an efficient training setup. Mobile simulation can address these constraints by delivering an immersive simulation environment in a purposefully designed unit. However, mobile simulation often involves bringing an expert to rural and remote sites to facilitate the session, which can prove expensive and prohibitive because of time constraints.

Through an iterative design process, our multidisciplinary group has developed a mobile telesimulation unit (MTU) that addresses many of the challenges to the delivery of SBME to rural and remote acute care practitioners. The intention is to deploy the MTU at a rural or remote location, where it can house the skills training session through communication with an off-site, skilled mentor. Such a deployment would provide trainees with the appropriate simulation equipment, a standardized training environment, and access to an experienced mentor to guide the training. To our knowledge, this is one of the few units that combine telecommunication and mobile simulation to deliver such training.

A rigorous, theory-based, iterative approach was followed to develop the MTU and to evaluate the acceptability and feasibility of delivering training remotely using the unit. Details on the development of the MTU and training materials have been published elsewhere [18-23].

The objective of this study was to compare the educational efficacy of face-to-face versus remote delivery of educational content with respect to learners' perceptions and objective assessment of procedural performance.

Framework for Learning Assessment

This study uses a conceptual framework based on a combination of Kirkpatrick's Learning Evaluation Model [24] and Miller's Clinical Assessment Framework [25] to guide the assessment of the MTU. This model (Figure 1; adapted from Dubrowski et al [26]) is based on the work of Moore et al [27], who developed a framework "of an ideal approach to planning and assessing continuing medical education that is focused on achieving desired outcomes" (pg 3).
The new model incorporates Kirkpatrick's 4 levels, which represent a sequence of ways to evaluate a program, with Miller's assessment tools for each level of competence.

Figure 1. Framework for Learning Assessment, based on Kirkpatrick (left) and Miller (right). Adapted from Dubrowski et al [26].

The base of Kirkpatrick's model relates to subject reaction, measuring how participants react to or perceive program content; there is no direct correlate of this level in Miller's framework. The second level of the Kirkpatrick model, learning, corresponds to the bottom 3 levels of Miller's framework (knows, knows how, and shows how), whereas the third level of Kirkpatrick's framework, behavior, is closely related to the top of Miller's framework, does. Finally, the top level of Kirkpatrick's model, results, does not relate to Miller's framework. This study examines Kirkpatrick's reaction and learning, the latter consisting of knows and shows how. We do not examine knows how because of anticipated challenges of subject retention and expected loss to follow-up during the study; rather, we decided to measure the higher level, shows how, because we could evaluate the participants' performance of the procedure during the study. We do not examine Kirkpatrick's behavior and, consequently, do not examine Miller's does. We also do not examine Kirkpatrick's results, as these are assessments of practice in a clinical setting, and this study is limited to an experimental setting. This paper discusses the findings in relation to Kirkpatrick's reaction and learning (consisting of Miller's knows and shows how) levels.

Methods

Research Setting

This study was conducted at Memorial University of Newfoundland. Training of rural and remote acute care practitioners is of particular interest in the province, as 40% of the population lives in rural areas, and the province has a relatively small population (525,000) distributed across a large geographic area (405,000 km²). Acute care is delivered at a variety of health centers and hospitals across the province. These sites are staffed by physicians, nurses, and nurse practitioners with varying levels of experience, and access to SBME opportunities is often limited. The Health Research Ethics Board of Memorial University of Newfoundland approved this study.

The MTU consists of an inflatable rapid deployment tent (Figure 2), which is outfitted with the portable technology necessary to allow 2-way communication between the trainees and the mentor: a laptop with communications software, a monitor, a camera, a speaker and microphone, and a portable wireless internet hub. The mentor uses comparable software, a camera, a speaker, and a microphone to communicate with the trainees. Off-the-shelf and low-cost equipment was used to keep the design of the MTU accessible and practical. Both the trainees and the mentor would have similar simulation supplies and setup to enable efficient demonstration and instruction (Figures 3 and 4). Studies by Jewer et al provide more information on the MTU [18,21].

The eventual goal was to deliver simulation-based training remotely through the use of a self-contained vehicle outfitted with the simulation equipment necessary for the delivery of a number of scenarios.
However, for the purpose of our test-of-concept approach, a portable, rapid deployment tent was used.

Figure 2. The mobile telesimulation unit rapid deployment tent.

Figure 3. Overview of the setup for the mentor and the trainees in the mobile telesimulation unit.

Figure 4. The interior of the mobile telesimulation unit demonstrating the setup for procedural training.

Study Design

A randomized controlled trial design was followed. A total of 3 sessions were held to compare the learning outcomes of participants who received training remotely in the MTU versus those who received the same training face-to-face. To minimize variables affecting study outcomes, face-to-face training sessions also took place in the MTU space. A control group (ie, one that received no training) was included to show that the intervention group (ie, remote) was not inferior to the comparison group (ie, face-to-face), and that both instructional approaches are actually effective [28].

The sessions focused on teaching an important high-acuity low-occurrence procedure, chest tube insertion, using a low-fidelity setup: 3D-printed ribs, secured to a plexiglass stand, covered with low-cost simulated skin and subcutaneous tissue (Figure 4). Chest tube insertion was selected as a representative procedure because it is an essential skill in acute care settings requiring precision [29], and it is a multistep procedure amenable to objective scoring. The training sessions were 20 min long and consisted of simulation-based training, with deliberate hands-on practice and mentor feedback.
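As a concrete, purely hypothetical illustration of the three-arm allocation described above, the sketch below randomizes a pool of participants into the intervention, comparison, and control groups. It is not the authors' actual allocation procedure, which was also constrained by the number of slots available at each session.

```python
# Hypothetical sketch of a three-arm random allocation; not the authors'
# actual procedure.
import random

def allocate(participant_ids, seed=42):
    """Shuffle the pool and deal participants into 3 groups round-robin."""
    rng = random.Random(seed)   # fixed seed so the sketch is reproducible
    ids = list(participant_ids)
    rng.shuffle(ids)
    names = ("intervention", "comparison", "control")
    groups = {name: [] for name in names}
    for i, pid in enumerate(ids):
        groups[names[i % 3]].append(pid)
    return groups

groups = allocate(range(1, 70))  # a pool of 69 participants, as in this study
print({name: len(members) for name, members in groups.items()})
```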
Figure 5 depicts the flow of the study procedure. A week before the procedural session, participants were emailed presession information consisting of a Web-based New England Journal of Medicine video demonstrating proper performance of the procedure and important details about chest tube insertion, including indications, contraindications, complications, and necessary equipment [30]. This was to help ensure that participants started with a similar base level of knowledge.

Participants were randomly assigned to 1 of 3 groups: intervention, comparison, and control. Testing procedures were conducted before the training (pretest), after the training (posttest), and 1 week later (retention test). During the pretest, participants completed a questionnaire on demographic information, the number of times they had performed or witnessed a chest tube insertion before this session, their previous experience with SBME, and their previous experience with telemedicine. Next, participants completed a written procedural skills knowledge test consisting of a number of chest tube procedure-specific questions. The demographic questionnaire and the procedural skills knowledge test were written components used to assess whether there were differences in baseline knowledge about the chest tube procedure within or between the groups at the start of the study; these materials were reviewed by an experienced emergency medicine physician to determine whether differences existed. The procedural skills knowledge test was also used to measure learning after the session. This corresponds to the knows level of learning.

To measure shows how, during the pretest, participants were video recorded performing a chest tube insertion on a low-fidelity simulated model (Figure 6). A modified Objective Structured Assessment of Technical Skills (OSATS) checklist and a Global Rating Scale (GRS) of operative performance were used to assess procedural performance [31].

After the training session, during the posttest, participants in the intervention and comparison groups were asked to rate their satisfaction with learning and to provide an evaluation of the training. This corresponds to Kirkpatrick's reaction level of the learning framework. Participants also completed the written procedural skills knowledge test again (ie, knows). All participants were then once again video recorded performing a chest tube insertion (ie, shows how).

Furthermore, 1 week after the training session (retention test), the participants completed a questionnaire on their experiences with the procedure in the past week. They also completed the written procedural skills knowledge test again (ie, knows), and they were video recorded for the third time performing a chest tube insertion (ie, shows how).

An emergency medicine physician with 11 years of clinical emergency room experience used the modified OSATS checklist and GRS to assess the participants' performance on the video recordings. The reviewing physician was blinded to participants' identity and was unaware of the phase of the study (pretest, posttest, or retention test). Overall, 12% of the videos were randomly selected for review by a second experienced emergency medicine physician. The modified OSATS checklist and GRS scores were used as the primary indicators of learning outcomes (ie, shows how).

Figure 5. Study design.

Figure 6. Setup used in the video recording of the chest tube procedure (A) and example of a completed chest tube insertion (B).

Participants

Participants provided informed consent before enrollment in the study. Medical students in their first and second year of training (approximately 80 students per cohort) were invited to participate in the study. Participation was voluntary and was limited by the number of slots available at a scheduled data collection time (Multimedia Appendix 1). These medical students were novices in the chest tube procedure, and using subjects with similar background knowledge and skills enabled us to more clearly measure learning.

Measures

Learning—Knows

To measure the knows dimension of learning, participants were asked a set of chest tube procedure–specific questions (Textbox 1).

Textbox 1. Procedural skills knowledge test questions (possible score: 15).
Name 3 indications for chest tube placement.
Name 3 contraindications to chest tube placement.
Name 4 potential complications of chest tube placement.
Name 5 essential pieces of equipment for chest tube placement.
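Given the stated maximum of 15 points (3 + 3 + 4 + 5 across the 4 questions), a natural reading is 1 point per correct response, capped per question. The sketch below illustrates that arithmetic; it is our assumption for illustration, not the authors' published marking scheme.

```python
# Illustrative scoring sketch; assumes 1 point per correct response,
# capped at each question's requested count (3 + 3 + 4 + 5 = 15).
QUESTION_CAPS = {
    "indications": 3,
    "contraindications": 3,
    "complications": 4,
    "equipment": 5,
}

def knowledge_score(correct_counts):
    """Sum correct responses per question, capped at that question's maximum."""
    return sum(min(correct_counts.get(question, 0), cap)
               for question, cap in QUESTION_CAPS.items())

print(knowledge_score({"indications": 3, "contraindications": 2,
                       "complications": 4, "equipment": 5}))  # -> 14 of 15
```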
Learning—Shows How

Participants' performance of the chest tube procedure was evaluated using a modified OSATS checklist to measure the shows how dimension of learning. The OSATS checklist was originally developed and validated to assess the performance of multiple surgical procedures at different stations [31]. It has since been used to assess the performance of a single surgical procedure [32]. Research has demonstrated that the OSATS has high reliability and construct validity for measuring technical abilities outside of the operating room [31].

This study used a modified OSATS checklist and a GRS of operative performance. The checklist consists of 10 items, each scored as done correctly or not (Textbox 2); for the purposes of this study, 1 item of the original scale was removed because it was not relevant to our training scenario (ie, item #9—a Pleur-evac setup was not available to participants). The GRS is composed of 9 items, each measuring a different aspect of operative performance and each graded on a 5-point Likert scale from 1, poor performance, to 5, good performance (Figure 7). Again, for the purposes of this study, 2 items of the scale were removed because they were not appropriate for the training scenario: use of assistants (there was no assistant in the study design) and knowledge of instruments (this item implies that participants were asking for the right things or saying the right names, which was not part of the study design). Thus, the maximum GRS score attainable is 35 points, and the minimum is 7 points.

Textbox 2. Checklist for chest tube insertion (not done, incorrect=0; done, correct=1).
Injects local anesthetic
Cuts skin with scalpel to subcutaneous tissue plane (no scything)
Uses blunt dissection to enter chest cavity
Enters pleural space above rib
Checks position with digit before inserting chest tube
Inserts chest tube safely using Kelly at the tip of the tube
Inserts correct length of chest tube into chest
Secures chest tube to chest wall with silk or nylon
Connects tube and secures to drainage system with tape
Applies airtight dressing

Figure 7. Global Rating Scale of operative performance.
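To make the scoring arithmetic concrete (10 binary checklist items giving totals of 0 to 10, and 7 retained GRS items rated 1 to 5 giving totals of 7 to 35), here is a minimal sketch of the two totals. This is our own illustrative code, not an instrument published with the paper.

```python
# Illustrative sketch of the modified OSATS checklist and GRS totals.
CHECKLIST_ITEMS = 10   # modified checklist: each item scored 0 or 1
GRS_ITEMS = 7          # modified GRS: 7 retained items, each rated 1-5

def checklist_score(item_flags):
    """Sum of binary checklist items (possible range 0-10)."""
    assert len(item_flags) == CHECKLIST_ITEMS
    assert all(flag in (0, 1) for flag in item_flags)
    return sum(item_flags)

def grs_score(item_ratings):
    """Sum of 5-point GRS ratings (possible range 7-35)."""
    assert len(item_ratings) == GRS_ITEMS
    assert all(1 <= rating <= 5 for rating in item_ratings)
    return sum(item_ratings)

print(checklist_score([1, 1, 0, 1, 1, 0, 1, 1, 1, 0]))  # -> 7
print(grs_score([4, 3, 5, 4, 4, 3, 4]))                 # -> 27
```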
Reaction

To measure participants' reactions to the training, participants in the remote (ie, intervention) and comparison groups were asked to evaluate the training by indicating whether they thought the MTU could play an important role in rural and remote medical training, how satisfied they were with their overall experience in the MTU, and whether they would recommend the MTU approach to their colleagues. Participants were also asked to indicate their satisfaction with the learning experiences. These measures were adapted from the National League for Nursing (NLN) Student Satisfaction and Self-Confidence in Learning scales [33]. These NLN scales have been widely used and have been found to have sufficient reliability and validity for use in education research [33,34].

Data Analysis

Participants were assigned a unique identifier, which was used to anonymize the data with respect to training group before analysis. Data analysis was completed using SPSS version 25. Descriptive statistics were computed for the demographic variables. For all tests, a P value less than .05 was considered statistically significant.

Learning—Knows

Because our data did not enable us to use a parametric repeated measures analysis of variance to analyze the pretest, posttest, and retention written procedural skills tests, we created 2 new variables (pretest minus posttest score and posttest minus retention test score). The Kruskal-Wallis test (the nonparametric equivalent) was then used to compare participants' performance on the procedural skills test between the groups.
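The analysis itself was run in SPSS; purely for illustration, the same pattern (difference scores compared across the 3 groups with a Kruskal-Wallis test) can be sketched in Python with SciPy. The scores below are made-up placeholders, not study data.

```python
# Illustrative sketch only: difference scores compared across the 3 groups
# with a Kruskal-Wallis test (the study itself used SPSS).
import numpy as np
from scipy.stats import kruskal

def change_scores(pre, post):
    """Difference variable, eg, pretest minus posttest knowledge score."""
    return np.asarray(pre) - np.asarray(post)

# Made-up placeholder scores, not study data.
intervention = change_scores([11, 12, 10, 13], [13, 14, 12, 14])
comparison = change_scores([10, 11, 12, 11], [12, 13, 13, 12])
control = change_scores([11, 10, 12, 10], [11, 11, 12, 11])

h_stat, p_value = kruskal(intervention, comparison, control)
print(f"H = {h_stat:.2f}, P = {p_value:.3f}")
```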
Learning—Shows How

There was acceptable interrater reliability between the 2 raters who evaluated the performance of the chest tube procedure: an excellent intraclass correlation coefficient (ICC) of 0.909 was found for the GRS, and a good ICC of 0.757 was found for the checklist. Again limited to nonparametric techniques, we created 2 new variables: the first to calculate the difference between the pre- and posttraining checklist and GRS scores, and the second to calculate the difference between the posttraining and retention checklist and GRS scores. A Kruskal-Wallis test was then used to compare pretest, posttest, and retention test scores for the 3 groups (ie, intervention, comparison, and control) on the modified OSATS checklist and GRS.
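For readers who want to reproduce this kind of interrater check, the sketch below computes ICCs from long-format ratings using the third-party pingouin package; the tool choice and the made-up ratings are our assumptions for illustration (the study used SPSS, and the ICC variant is not specified here).

```python
# Illustrative interrater-reliability sketch using pingouin's ICC;
# the ratings are invented for demonstration.
import pandas as pd
import pingouin as pg

scores = pd.DataFrame({
    "video": [1, 2, 3, 4, 5] * 2,
    "rater": ["A"] * 5 + ["B"] * 5,
    "grs":   [21, 28, 15, 33, 25, 22, 27, 16, 31, 26],
})

icc = pg.intraclass_corr(data=scores, targets="video",
                         raters="rater", ratings="grs")
print(icc[["Type", "ICC"]])  # several ICC variants are reported
```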
scores differences between the control and comparison and the Subsequent Kruskal-Wallis tests revealed that there were no control and intervention groups, but not between the comparison significant differences between groups from the pretest to the and intervention groups. There was no difference between the posttest and retention scores. Figure 8. Box plots of the modified Objective Structured Assessment of Technical Skills checklist and GRS scores. GRS: Global Rating Scale. http://www.jmir.org/2019/8/e14587/ J Med Internet Res 2019 | vol. 21 | iss. 8 | e14587 | p. 11 (page number not for citation purposes) XSL FO RenderX JOURNAL OF MEDICAL INTERNET RESEARCH Jewer et al intervention and comparison groups are shown in Table 3. On Reaction average, participants rated the teaching methods as helpful and Satisfaction with learning and evaluation of the training effective for the intervention and comparison groups, with scores measures was used to examine participants’ reaction to the 4.52 and 4.65, respectively, out of 5. Averaged responses also training. indicated that they enjoyed how the teacher taught the session Satisfaction with Learning for the intervention and comparison groups with scores 4.40 and 4.52, respectively, out of 5. A Mann-Whitney U test The results of the satisfaction with learning questions (adapted revealed that there were no statistically significant differences from the NLN scales) that were asked in the posttest for the between the intervention and comparison groups on these items. Table 3. Self-reported learning—scale of 1 (strongly disagree) to 5 (strongly agree). Measurement item (satisfaction with learning) Intervention group Comparison group Mann-Whitney U test (n=25), mean (SD) (n=23), mean (SD) U z P value The teaching methods used were helpful and effective. 4.52 (0.71) 4.65 (0.49) 306.5 0.47 .65 I enjoyed how the teacher taught the session. 4.40 (0.82) 4.52 (0.59) 299.0 0.27 .79 overall experience in the MTU (4.32 and 4.43 out of 5 for the Participant Evaluation of Training intervention and comparison groups, respectively), and they Participants in the intervention and comparison groups were would recommend the MTU to their colleagues for SBME (4.32 asked to evaluate their experiences with the training session and 4.43 out of 5 for the intervention and comparison groups, that took place physically in the MTU space. Participants respectively). A Mann-Whitney U test revealed that there were indicated that the MTU could play an important role in rural no statistically significant differences between the intervention medical training (4.32 and 4.48 out of 5 for the intervention and and the comparison groups on any of these questions (Table 4). comparison groups, respectively), they were satisfied with their Table 4. Participants’ evaluation of training modality—scale 1 (strongly disagree) to 5 (strongly agree). Evaluation of training modality Intervention group Comparison group Mann-Whitney U test or t test (n=25), mean (SD) (n=23), mean (SD) U z P value 4.32 (1.11) 4.48 (0.51) 276.0 −0.27 .79 Do you think the MTU could play an important role in rural medical training? How satisfied are you with your overall experience in the 4.32 (0.56) 4.43 (0.59) 319.5 0.38 .45 MTU? Would you recommend the MTU to your colleagues for 4.32 (0.56) 4.43 (0.73) 331.0 1.02 .31 simulation-based medical training? MTU: mobile telesimulation unit. second being the ability to complete all necessary steps (shows Discussion how). 
Reaction

Satisfaction with learning and evaluation of the training measures were used to examine participants' reaction to the training.

Satisfaction With Learning

The results of the satisfaction with learning questions (adapted from the NLN scales) asked in the posttest of the intervention and comparison groups are shown in Table 3. On average, participants rated the teaching methods as helpful and effective in both the intervention and comparison groups, with scores of 4.52 and 4.65 out of 5, respectively. Averaged responses also indicated that participants enjoyed how the teacher taught the session, with scores of 4.40 and 4.52 out of 5 for the intervention and comparison groups, respectively. A Mann-Whitney U test revealed that there were no statistically significant differences between the intervention and comparison groups on these items.

Table 3. Self-reported learning—scale of 1 (strongly disagree) to 5 (strongly agree).

Measurement item (satisfaction with learning) | Intervention group (n=25), mean (SD) | Comparison group (n=23), mean (SD) | Mann-Whitney U test: U; z; P value
The teaching methods used were helpful and effective. | 4.52 (0.71) | 4.65 (0.49) | 306.5; 0.47; .65
I enjoyed how the teacher taught the session. | 4.40 (0.82) | 4.52 (0.59) | 299.0; 0.27; .79

Participant Evaluation of Training

Participants in the intervention and comparison groups were asked to evaluate their experiences with the training session, which took place physically in the MTU space. Participants indicated that the MTU could play an important role in rural medical training (4.32 and 4.48 out of 5 for the intervention and comparison groups, respectively), that they were satisfied with their overall experience in the MTU (4.32 and 4.43 out of 5, respectively), and that they would recommend the MTU to their colleagues for SBME (4.32 and 4.43 out of 5, respectively). A Mann-Whitney U test revealed that there were no statistically significant differences between the intervention and comparison groups on any of these questions (Table 4).

Table 4. Participants' evaluation of training modality—scale of 1 (strongly disagree) to 5 (strongly agree).

Evaluation of training modality | Intervention group (n=25), mean (SD) | Comparison group (n=23), mean (SD) | Mann-Whitney U test: U; z; P value
Do you think the MTU* could play an important role in rural medical training? | 4.32 (1.11) | 4.48 (0.51) | 276.0; −0.27; .79
How satisfied are you with your overall experience in the MTU? | 4.32 (0.56) | 4.43 (0.59) | 319.5; 0.38; .45
Would you recommend the MTU to your colleagues for simulation-based medical training? | 4.32 (0.56) | 4.43 (0.73) | 331.0; 1.02; .31

*MTU: mobile telesimulation unit.
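For completeness, the two-group comparisons reported in Tables 3 and 4 follow the standard Mann-Whitney U pattern; a minimal SciPy sketch with made-up Likert responses (not the study data) is shown below.

```python
# Illustrative two-group comparison of 5-point Likert responses.
from scipy.stats import mannwhitneyu

# Made-up ratings, not the study data.
intervention = [5, 4, 5, 4, 4, 5, 3, 5]
comparison = [4, 5, 5, 4, 5, 4, 5, 5]

u_stat, p_value = mannwhitneyu(intervention, comparison,
                               alternative="two-sided")
print(f"U = {u_stat}, P = {p_value:.2f}")
```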
Discussion

Principal Findings

Using a conceptual framework based on Kirkpatrick's and Miller's works [24,25], we examined learning at the knows and shows how levels and also studied the reactions of the participants to the training. We found this framework useful in helping to ensure a thorough evaluation of the training delivered using an MTU. The results from this study indicate comparable learning (knows and shows how) and reactions between participants who received the procedural skills training remotely and those who received the training face-to-face.

Consistent with the literature, we found that subjects' knowledge level (knows) remained unchanged after the training. This was expected, as there are 2 distinct key areas of knowledge with respect to competent procedural skills performance: one related to factual background information (knows) and the second being the ability to complete all necessary steps (shows how). Our study focused on shows how, as the ability to physically and capably complete a procedural skill relies on deliberate practice of that skill [36]. Nevertheless, it was important to measure procedural skill knowledge (knows), as it enabled us to ensure there was a consistent knowledge level across all groups. This is particularly important, as procedural skills training sessions aim to enable participants' performance of the procedure (ie, shows how), which is a higher level than knows.

With respect to shows how learning, our study supports previous findings related to telesimulation and mobile simulation [7,8,37,38]. We found that the learning outcomes for the participants who received training remotely through the MTU, as assessed using the modified OSATS checklist and GRS, are comparable with those of the face-to-face simulation-based training group. Furthermore, participants who received training, either remotely or face-to-face, received statistically significantly better scores than those who did not receive instruction (ie, the control group). The average scores on the checklist more than doubled from pretest to posttest for the intervention (from 3 to 6.54) and comparison (from 2.96 to 6.22) groups, whereas the increase for the control group was negligible, at only 0.33 points (from 2.91 to 3.24). This indicates that training resulted in similar acquisition of skills-based knowledge for both the remote and face-to-face groups.

Retention tests indicated that there were no statistically significant differences in skills retention between the 3 groups. On average, the differences between the retention test and posttest modified OSATS checklist and GRS scores either stayed the same or decreased slightly for all groups. From this, we conclude that the manner of instructional delivery (either remote or face-to-face) does not impact retention.

In addition to the comparable learning outcomes, participants had similarly high levels of satisfaction with learning in the MTU. Rating the teaching methods as helpful and effective, participants indicated that, on average, they enjoyed the instruction during the session. This is encouraging, as satisfaction with the training, in the case of the MTU concept facilitated through a local health care facility, could influence commitment and readiness to transfer learning to the workplace at their own site [39,40].

Overall, participants evaluated their training experience with the MTU as positive. There were no statistically significant differences in evaluations between those who received training remotely and those who received it face-to-face. Participants felt that the MTU could play an important role in rural medical training, indicated that they were satisfied with their overall experience in the MTU, and would recommend the MTU to their colleagues for SBME.

The primary limitation of this study is the relatively small sample size and the inclusion of research subjects from a single institution. However, several things help make the study more robust: (1) the inclusion of a control group; (2) the study design, including pretest, posttest, and retention tests; and (3) the triangulation of the modified OSATS checklist and GRS scores across 2 blinded raters, whose favorable interrater reliability provides reassurance of the robustness of the study results [41]. The second limitation is that the physician who was involved in the design of the MTU is the one who led the training sessions for all subjects. It would be interesting to examine the impact on training if a physician not directly involved in the study delivered the training.

There are a number of implications for future SBME and research. First, there is a shift from delivery of medical education in large urban academic centers toward distributed medical education. Technologies such as video conferencing and digital library collections have enabled this advancement and are tied to social, health, and economic benefits [42,43]. There is potential for the MTU concept to play a role in this area, and further research is needed to determine how best to incorporate this concept into practice. Here, it would be particularly important to consider 2 significant time challenges faced by rural practitioners: the maintenance of a busy clinical practice, often with limited backup, and the invaluable contributions made in teaching a variety of learners, often with limited resources. A collaborative approach, drawing on local expertise, along with distance-guided mentorship, could facilitate valuable advances.

Second, and relatedly, there are potential MTU-enabled cost savings for trainees and mentors. Cost is often a major barrier to accessing SBME, yet it is often not considered in SBME research [44]. Traditional delivery models either bring a course, with its associated expenses of travel, equipment, mentors, and time, to a particular site, or require the rural practitioner to travel to a central location for training while contending with the challenges of patient coverage, time off, and expenses related to the training and travel. By making cost an important consideration in the development of the MTU, the intention is to make this novel approach more accessible. An economic impact evaluation relating to the use of the MTU in practice is recommended.

Third, further studies should be conducted to validate the utility and effectiveness of the MTU concept for skills training that is important to the practice needs of the target audience. Through collaborative discussion and targeted needs assessments with rural practitioners, the specific clinical and educational needs would best be determined. This would enable the examination of Kirkpatrick's behavior (and Miller's does), as well as the results level of the learning evaluation model. As a broader range of skills sessions is delivered remotely through mobile telesimulation, opportunities to study validity and reliability will become more readily available.

Fourth, further exploration is needed of the skill and scenario characteristics that make them amenable to the remote-mentoring approach to training, including the ability to observe key performance features and maneuvers. This study demonstrated equivalent learning outcomes on assessment of procedural skills for chest tube insertion; this should be further explored for other procedural skills.

Fifth, training sessions for this study were conducted in areas with reliable, high-speed internet access; however, rural and remote areas may have limited internet connectivity, which would impede the delivery of remote training and may particularly affect how learners perceive and rate their remote mentoring experience.
Future research will explore the use of purpose-built, efficient communications systems designed for low bandwidth. Sixth, as proposed by others [3], future research should compare different forms of simulation; using mobile telesimulation, this would involve comparing training delivered remotely in an MTU using simulators of different levels of fidelity. Finally, the unavailability of mentors comfortable with simulation-based teaching delivered through telecommunication may present a barrier to expanding this novel approach to SBME [45]. Therefore, the use of the MTU for the remote assessment of skills should also be examined, especially in domains that are poorly covered by traditional written and oral examinations.

Conclusions

SBME is a well-established training approach, particularly for high-acuity low-occurrence procedures and scenarios. Practitioners located in rural and remote locations stand to benefit in particular, as they face a number of unique challenges with respect to simulation resources, including geographic, cost, and time constraints. This study describes an evaluation of educational efficacy comparing remote versus face-to-face mentoring for procedural skills training. To our knowledge, this study is one of only a few to develop and assess SBME combining the concepts of telesimulation and mobile simulation.

We used a conceptual framework based on the combination of Kirkpatrick's Learning Evaluation Model and Miller's Clinical Assessment Framework to guide the study.
We found that training delivered remotely through the MTU is an effective way to conduct a skills session. Those who were trained remotely had comparable learning outcomes (shows how) to subjects who received face-to-face instruction. Participants were also satisfied (reaction) with their learning and training experiences. Such remote mentor–led SBME expands opportunities for health practitioners to more easily access the training and mentor-guided practice that they require. Future investigation is needed to examine the utility of the MTU approach in practice, with different skills and levels of fidelity, and as a means to provide remote assessment of skills.

Acknowledgments

This project has been supported by an Ignite grant awarded by the Research and Development Corporation of Newfoundland and Labrador. The authors thank the following organizations at the Memorial University of Newfoundland: the Tuckamore Simulation Research Collaborative for research support and advice, the Clinical Learning and Simulation Center for equipment and operational support, and Memorial University of Newfoundland MED 3D for the provision of simulation models. The authors also thank the following people for their assistance during this research project: Dr Chrystal Horwood for clinical expertise in video review; Kristopher Hoover for technical assistance and involvement in early MTU prototype development; research assistants Megan Pollard, Samantha Noseworthy, Sarah Boyd, and Krystal Bursey; Tate Skinner (technical support); Joanne Doyle (Discipline of Emergency Medicine senior secretary); and Memorial University's Emergency Medicine Interest Group.

Conflicts of Interest

None declared.

Multimedia Appendix 1
Consolidated Standards of Reporting Trials flow diagram.
[PDF File (Adobe PDF File), 103KB]

Multimedia Appendix 2
Differences between pre, post, and retention procedural skills knowledge tests (written).
[PDF File (Adobe PDF File), 113KB]

Multimedia Appendix 3
Modified Objective Structured Assessment of Technical Skills checklist and Global Rating Scale assessment of chest tube performance, mean (SD) reported.
[PDF File (Adobe PDF File), 80KB]

Multimedia Appendix 4
Differences between pre, post, and retention-modified Objective Structured Assessment of Technical Skills checklist and Global Rating Scale test scores.
[PDF File (Adobe PDF File), 137KB]

Multimedia Appendix 5
CONSORT-EHEALTH checklist (V 1.6.1).
[PDF File (Adobe PDF File), 2MB]

References

1. Williams JM, Ehrlich PF, Prescott JE. Emergency medical care in rural America. Ann Emerg Med 2001 Sep;38(3):323-327. [doi: 10.1067/mem.2001.115217] [Medline: 11524654]
2. Roy KM, Miller MP, Schmidt K, Sagy M. Pediatric residents experience a significant decline in their response capabilities to simulated life-threatening events as their training frequency in cardiopulmonary resuscitation decreases. Pediatr Crit Care Med 2011 May;12(3):e141-e144. [doi: 10.1097/PCC.0b013e3181f3a0d1] [Medline: 20921919]
3. Cook DA, Hatala R, Brydges R, Zendejas B, Szostek JH, Wang AT, et al. Technology-enhanced simulation for health professions education: a systematic review and meta-analysis. J Am Med Assoc 2011 Sep 7;306(9):978-988. [doi: 10.1001/jama.2011.1234] [Medline: 21900138]
4. Scott DJ, Dunnington GL. The new ACS/APDS skills curriculum: moving the learning curve out of the operating room. J Gastrointest Surg 2008 Feb;12(2):213-221. [doi: 10.1007/s11605-007-0357-y] [Medline: 17926105]
5. Issenberg SB, McGaghie WC, Petrusa ER, Lee GD, Scalese RJ. Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review. Med Teach 2005 Jan;27(1):10-28. [doi: 10.1080/01421590500046924] [Medline: 16147767]
6. Ziv A, Wolpe PR, Small SD, Glick S. Simulation-based medical education: an ethical imperative. Acad Med 2003 Aug;78(8):783-788. [doi: 10.1097/01.SIH.0000242724.08501.63] [Medline: 12915366]
7. Rosen MA, Hunt EA, Pronovost PJ, Federowicz MA, Weaver SJ. In situ simulation in continuing education for the health care professions: a systematic review. J Contin Educ Health Prof 2012;32(4):243-254. [doi: 10.1002/chp.21152] [Medline: 23280527]
8. Ikeyama T, Shimizu N, Ohta K. Low-cost and ready-to-go remote-facilitated simulation-based learning. Simul Healthc 2012 Feb;7(1):35-39. [doi: 10.1097/SIH.0b013e31822eacae] [Medline: 22228281]
9. Bischof JJ, Panchal AR, Finnegan GI, Terndrup TE. Creation and validation of a novel mobile simulation laboratory for high fidelity, prehospital, difficult airway simulation. Prehosp Disaster Med 2016 Oct;31(5):465-470. [doi: 10.1017/S1049023X16000534] [Medline: 27530816]
10. Ullman E, Kennedy M, di Delupis FD, Pisanelli P, Burbui AG, Cussen M, et al. The Tuscan mobile simulation program: a description of a program for the delivery of in situ simulation training. Intern Emerg Med 2016 Sep;11(6):837-841. [doi: 10.1007/s11739-016-1401-2] [Medline: 26861702]
11. Xafis V, Babidge W, Field J, Altree M, Marlow N, Maddern G. The efficacy of laparoscopic skills training in a mobile simulation unit compared with a fixed site: a comparative study. Surg Endosc 2013 Jul;27(7):2606-2612. [doi: 10.1007/s00464-013-2798-6] [Medline: 23389073]
12. Weinstock PH, Kappus LJ, Garden A, Burns JP. Simulation at the point of care: reduced-cost, in situ training via a mobile cart. Pediatr Crit Care Med 2009 Mar;10(2):176-181. [doi: 10.1097/PCC.0b013e3181956c6f] [Medline: 19188878]
13. Ireland S, Gray T, Farrow N, Danne P, Flanagan B. Rural mobile simulation-based trauma team training: an innovative educational platform. Int Trauma Care 2006;16:6-12 [FREE Full text]
14. Ohta K, Kurosawa H, Shiima Y, Ikeyama T, Scott J, Hayes S, et al. The effectiveness of remote facilitation in simulation-based pediatric resuscitation training for medical students. Pediatr Emerg Care 2017 Aug;33(8):564-569. [doi: 10.1097/PEC.0000000000000752] [Medline: 27261952]
15. Mikrogianakis A, Kam A, Silver S, Bakanisi B, Henao O, Okrainec A, et al. Telesimulation: an innovative and effective tool for teaching novel intraosseous insertion techniques in developing countries. Acad Emerg Med 2011 Apr;18(4):420-427 [FREE Full text] [doi: 10.1111/j.1553-2712.2011.01038.x] [Medline: 21496146]
16. Schulman CI, Levi J, Sleeman D, Dunkin B, Irvin G, Levi D, et al. Are we training our residents to perform open gall bladder and common bile duct operations? J Surg Res 2007 Oct;142(2):246-249. [doi: 10.1016/j.jss.2007.03.073] [Medline: 17631907]
17. Strongwater AM. Transition to the eighty-hour resident work schedule. J Bone Joint Surg Am 2003 Jun;85(6):1170-1172. [doi: 10.2106/00004623-200306000-00048] [Medline: 12784026]
18. Jewer J, Dubrowski A, Dunne C, Hoover K, Smith A, Parsons M. Piloting a mobile tele-simulation unit to train rural and remote emergency health care providers. In: Wickramasinghe N, Bodendorf F, editors. Delivering Superior Health and Wellness Management with IoT and Analytics. New York: Springer; 2019.
19. Parsons M, Wadden K, Pollard M, Dubrowski A, Smith A. P098: development and evaluation of a mobile simulation lab with acute care telemedicine support. Can J Emerg Med 2016 Jun 2;18(S1):S111. [doi: 10.1017/cem.2016.274]
20. Parsons M, Smith A, Hoover K, Jewer J, Noseworthy S, Pollard M, et al. P100: iterative prototype development of a mobile tele-simulation unit for remote training: an update. Can J Emerg Med 2017 May 15;19(S1):S112. [doi: 10.1017/cem.2017.302]
21. Jewer J, Dubrowski A, Hoover K, Smith A, Parsons M. Development of a mobile tele-simulation unit prototype for training of rural and remote emergency health care providers. In: Proceedings of the 51st Hawaii International Conference on System Sciences. 2018 Presented at: HICSS'18; January 3-6, 2018; Hawaii, United States p. 2894-2903 URL: https://aisel.aisnet.org/hicss-51/hc/ict_for_health_equity/2/ [doi: 10.24251/HICSS.2018.367]
22. Parsons M, Smith A, Rogers P, Hoover K, Pollard M, Dubrowski A. Outcomes of prototype development cycle for a mobile simulation lab with acute care telemedicine support: work in progress. In: 17th International Meeting on Simulation in Healthcare. 2017 Presented at: IMSH'17; January 26-30, 2017; Orlando, FL.
23. Dunne C, Jewer J, Parsons M. P039: application of the Delphi method to refine key components in the iterative development of a mobile tele-simulation unit (MTU). Can J Emerg Med 2018 May 11;20(S1):S70-S71. [doi: 10.1017/cem.2018.237]
24. Kirkpatrick DL, Kirkpatrick JD. Evaluating Training Programs: The Four Levels. Second Edition. San Francisco: Berrett-Koehler Publishers; 1998.
25. Miller GE. The assessment of clinical skills/competence/performance. Acad Med 1990 Sep;65(9 Suppl):S63-S67. [doi: 10.1097/00001888-199009000-00045] [Medline: 2400509]
26. Dubrowski A, Morin M. Evaluating pain education programs: an integrated approach. Pain Res Manag 2011;16(6):407-410 [FREE Full text] [doi: 10.1155/2011/320617] [Medline: 22184548]
27. Moore Jr DE, Green JS, Gallis HA. Achieving desired results and improved outcomes: integrating planning and assessment throughout learning activities. J Contin Educ Health Prof 2009;29(1):1-15. [doi: 10.1002/chp.20001] [Medline: 19288562]
28. Greene CJ, Morland LA, Durkalski VL, Frueh BC. Noninferiority and equivalence designs: issues and implications for mental health research. J Trauma Stress 2008 Oct;21(5):433-439 [FREE Full text] [doi: 10.1002/jts.20367] [Medline: 18956449]
29. Friedrich M, Bergdolt C, Haubruck P, Bruckner T, Kowalewski K, Müller-Stich BP, et al. App-based serious gaming for training of chest tube insertion: study protocol for a randomized controlled trial. Trials 2017 Dec 6;18(1):56 [FREE Full text] [doi: 10.1186/s13063-017-1799-5] [Medline: 28166840]
30. Dev SP, Nascimiento Jr B, Simone C, Chien V. Videos in clinical medicine. Chest-tube insertion. N Engl J Med 2007 Oct 11;357(15):e15. [doi: 10.1056/NEJMvcm071974] [Medline: 17928590]
31. Reznick R, Regehr G, MacRae H, Martin J, McCulloch W. Testing technical skill via an innovative 'bench station' examination. Am J Surg 1997 Mar;173(3):226-230. [doi: 10.1016/s0002-9610(97)89597-9] [Medline: 9124632]
32. Haubruck P, Nickel F, Ober J, Walker T, Bergdolt C, Friedrich M, et al. Evaluation of app-based serious gaming as a training method in teaching chest tube insertion to medical students: randomized controlled trial. J Med Internet Res 2018 Dec 21;20(5):e195 [FREE Full text] [doi: 10.2196/jmir.9956] [Medline: 29784634]
33. National League for Nursing. 2005. Descriptions of Available Instruments URL: http://www.nln.org/professional-development-programs/research/tools-and-instruments/descriptions-of-available-instruments
34. Franklin AE, Burns P, Lee CS. Psychometric testing on the NLN student satisfaction and self-confidence in learning, simulation design scale, and educational practices questionnaire using a sample of pre-licensure novice nurses. Nurse Educ Today 2014 Oct;34(10):1298-1304. [doi: 10.1016/j.nedt.2014.06.011] [Medline: 25066650]
35. Dunn OJ. Multiple comparisons using rank sums. Technometrics 1964 Aug;6(3):241-252. [doi: 10.2307/1266041]
36. Ericsson KA. Deliberate practice and acquisition of expert performance: a general overview. Acad Emerg Med 2008 Nov;15(11):988-994 [FREE Full text] [doi: 10.1111/j.1553-2712.2008.00227.x] [Medline: 18778378]
37. Treloar D, Hawayek J, Montgomery JR, Russell W, Medical Readiness Trainer Team. On-site and distance education of emergency medicine personnel with a human patient simulator. Mil Med 2001 Nov;166(11):1003-1006. [doi: 10.1093/milmed/166.11.1003] [Medline: 11725312]
38. Okrainec A, Vassiliou M, Kapoor A, Pitzul K, Henao O, Kaneva P, et al. Feasibility of remote administration of the fundamentals of laparoscopic surgery (FLS) skills test. Surg Endosc 2013 Nov;27(11):4033-4037. [doi: 10.1007/s00464-013-3048-7] [Medline: 24018759]
39. Mansour JB, Naji A, Leclerc A. The relationship between training satisfaction and the readiness to transfer learning: the mediating role of normative commitment. Sustainability 2017 May 16;9(5):834. [doi: 10.3390/su9050834]
40. Lim DH, Morris ML. Influence of trainee characteristics, instructional satisfaction, and organizational climate on perceived learning and training transfer. Hum Resour Dev Q 2006;17(1):85-115. [doi: 10.1002/hrdq.1162]
41. Lineberry M, Walwanis M, Reni J. Comparative research on training simulators in emergency medicine: a methodological review. Simul Healthc 2013 Aug;8(4):253-261. [doi: 10.1097/SIH.0b013e31828715b1] [Medline: 23508094]
42. Ellaway R, Bates J. Distributed medical education in Canada. Can Med Educ J 2018 Mar;9(1):e1-e5 [FREE Full text] [Medline: 30140329]
43. Lemky K, Gagne P, Konkin J, Stobbe K, Fearon G, Blom S, et al. A review of methods to assess the economic impact of distributed medical education (DME) in Canada. Can Med Educ J 2018 Mar;9(1):e87-e99 [FREE Full text] [Medline: 30140340]
44. Zendejas B, Wang AT, Brydges R, Hamstra SJ, Cook DA. Cost: the missing outcome in simulation-based medical education research: a systematic review. Surgery 2013 Feb;153(2):160-176. [doi: 10.1016/j.surg.2012.06.025] [Medline: 22884087]
45. Hayden EM, Navedo DD, Gordon JA. Web-conferenced simulation sessions: a satisfaction survey of clinical simulation encounters via remote supervision. Telemed J E Health 2012 Sep;18(7):525-529. [doi: 10.1089/tmj.2011.0217] [Medline: 22827475]

Abbreviations

GRS: Global Rating Scale
ICC: intraclass correlation coefficient
MTU: mobile telesimulation unit
NLN: National League for Nursing
OSATS: Objective Structured Assessment of Technical Skills
SBME: simulation-based medical education
Edited by G Eysenbach; submitted 06.05.19; peer-reviewed by M Reade, C Knopp; comments to author 20.06.19; revised version received 04.07.19; accepted 05.07.19; published 06.08.19

Please cite as:
Jewer J, Parsons MH, Dunne C, Smith A, Dubrowski A
Evaluation of a Mobile Telesimulation Unit to Train Rural and Remote Practitioners on High-Acuity Low-Occurrence Procedures: Pilot Randomized Controlled Trial
J Med Internet Res 2019;21(8):e14587
URL: http://www.jmir.org/2019/8/e14587/
doi: 10.2196/14587
PMID:

©Jennifer Jewer, Michael H Parsons, Cody Dunne, Andrew Smith, Adam Dubrowski. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 06.08.2019. This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.
