140 - What Difference Does It Make? Evaluating the Impact of Different Learning Modalities and Environments on Test Performance and Perceived Workload
Monday, March 25, 2024
10:15am – 12:15pm US EDT
Location: Sheraton Hall
Poster Board Number: 140
There are separate poster presentation times for odd and even posters.
Odd poster #s – first hour
Even poster #s – second hour
Co-authors:
Aida Esmaelbeigi - Bachelor of Health Sciences (Honours) Program - McMaster University; Julia Issa - Honours Biology and Psychology, Neuroscience and Behaviour - McMaster University; Alyssandra Mammoliti - Honours Biology and Psychology, Neuroscience and Behaviour - McMaster University; Jennifer McBride - Department of Education, Innovation and Technology - Baylor College of Medicine; Josh Mitchell - Education Program in Anatomy - McMaster University; Ranil Sonnadara - Department of Surgery - McMaster University; David Mazierski - Biomedical Communications - University of Toronto Mississauga; Bruce Wainman - Department of Pathology & Molecular Medicine - McMaster University
Graduate Student, McMaster University, Hamilton, Ontario, Canada
Abstract Body: Introduction: Creating new tools for learning anatomy requires many decisions: What technologies should be used? Are digital modalities better than physical ones? Where should learning take place? In a 3 × 2 experimental design, we explored the impact of learning modality (virtual reality (VR), 3D-printed model (3DPM), or planar computer display (2D)) and environment (clinical or contextless blackout) on test scores and perceived workload when using a pelvic anatomy learning module. We hypothesized that test scores would be highest and workload lowest when learning with 3DPMs. We also hypothesized that the enriched clinical environment would increase test scores but would also increase workload.
Methods: A total of 120 participants were randomly assigned to 1 of 6 learning conditions. Participants completed an anatomy pre-test assessment prior to viewing the learning module. Afterwards, they completed the NASA Task Load Index (NASA-TLX), the Simulator Sickness Questionnaire (SSQ), an anatomy post-test assessment, the Mental Rotations Test (MRT), the User Engagement Scale (UES), and a feedback survey about their learning experience.
Results: Pre-test to post-test score differences (deltas) were compared across conditions, with MRT, NASA-TLX, SSQ, and UES scores as covariates. Adjusted means (±SE) for test score gains (%) were 27.12% (3.08) for VR, 22.08% (2.90) for 3DPM, and 19.60% (3.01) for 2D. Preliminary analysis using a two-way ANCOVA revealed neither a significant main effect of modality [F(2,110) = 1.43, p = .245, ηp² = .025] nor of environment [F(1,110) = 2.43, p = .122, ηp² = .022]. The interaction between modality and environment was also nonsignificant [F(2,110) = 2.47, p = .089, ηp² = .043].
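As a reading aid (not part of the original analysis), the reported effect sizes are consistent with the standard relation between partial eta-squared, the F statistic, and the degrees of freedom:

ηp² = (F × df_effect) / (F × df_effect + df_error)

For example, for the modality main effect: (1.43 × 2) / (1.43 × 2 + 110) = 2.86 / 112.86 ≈ .025, matching the value reported above.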
Adjusted means (±SE) for overall NASA-TLX scores, controlling for MRT, SSQ, and UES scores, were 58.5 (2.6) for VR, 62.0 (2.5) for 3DPM, and 64.9 (2.6) for 2D. Preliminary analysis using a two-way ANCOVA revealed neither a significant main effect of modality [F(2,111) = 1.33, p = .269, ηp² = .023] nor of environment [F(1,111) = 0.002, p = .963, ηp² = .000]. The interaction between modality and environment was also nonsignificant [F(2,111) = 2.76, p = .068, ηp² = .047].
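For readers who wish to reproduce this style of analysis, a minimal sketch of the two-way ANCOVA in Python (statsmodels) is shown below. The data file and column names (modality, environment, pre_test, post_test, mrt, nasa_tlx, ssq, ues) are illustrative assumptions, not the study's actual materials or code.

```python
# Hypothetical sketch of the two-way ANCOVA described above; not the authors' code.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Assumed long-format data: one row per participant, illustrative column names.
df = pd.read_csv("pelvic_anatomy_study.csv")
df["delta"] = df["post_test"] - df["pre_test"]  # pre- to post-test change (%)

# Test-score model: modality (VR, 3DPM, 2D) x environment (clinical, blackout),
# with MRT, NASA-TLX, SSQ, and UES scores entered as covariates.
score_model = smf.ols(
    "delta ~ C(modality) * C(environment) + mrt + nasa_tlx + ssq + ues", data=df
).fit()
print(anova_lm(score_model, typ=2))  # Type II sums of squares: F and p per effect

# Workload model follows the same pattern, with overall NASA-TLX as the outcome
# and MRT, SSQ, and UES scores as covariates.
tlx_model = smf.ols(
    "nasa_tlx ~ C(modality) * C(environment) + mrt + ssq + ues", data=df
).fit()
print(anova_lm(tlx_model, typ=2))
```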
Significance: Although adjusted mean test scores differed numerically, modality did not meaningfully affect test performance. Likewise, performance in the enriched clinical environment did not differ from performance in the contextless blackout environment, suggesting that efforts to create elaborate contextual environments in learning modules are largely unnecessary.
Conclusion: Rather than concluding that all modalities and environments are equally effective, the finding that test performance and perceived workload did not differ across conditions suggests that pedagogical decisions should be based on sound educational practice rather than on the learning tool itself.