141 - Taking the AI OSPE to the Next Level: Leveraging User Perspectives and Needs for Quality Improvements with Q-method
Monday, March 25, 2024
10:15am – 12:15pm US EDT
Location: Sheraton Hall
Poster Board Number: 141
There are separate poster presentation times for odd and even posters.
Odd poster #s – first hour
Even poster #s – second hour
Co-authors:
Sarada Rajyam - Faculty of Health Sciences - McMaster University; Layla Rahimpour - Faculty of Health Sciences - McMaster University; Josh Mitchell - McMaster University; Jason Bernard - Vector Institute; Bruce Wainman - Faculty of Health Sciences - McMaster University; Yasmeen Mezil - Faculty of Health Sciences - McMaster University; Kristina Durham - Faculty of Health Sciences - McMaster University
Abstract Body: Introduction & Objective: The artificial intelligence objective structured practical exam (AI OSPE) is an innovative, web-based study tool that employs both AI for immediate answer grading and spaced repetition to maximize learning. User-centric design was adopted during tool conception and incorporated throughout tool refinement phases. Here we describe the use of Q-methodology to inform and drive continuous and conscientious quality improvements based on factor analysis, statement rankings, and open-text feedback.
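The factor-analytic core of Q-methodology can be illustrated with a short sketch; the data, factor count, and variable names below are hypothetical and stand in for the respondent Q-sorts analysed in the study, not the actual analysis pipeline.

```python
# Minimal Q-methodology factor-extraction sketch (hypothetical data and numbers;
# not the study's actual analysis). Each row is one respondent's Q-sort: a forced
# ranking of statements from "most disagree" (-4) to "most agree" (+4).
import numpy as np

rng = np.random.default_rng(0)
n_respondents, n_statements = 20, 30
q_sorts = rng.integers(-4, 5, size=(n_respondents, n_statements)).astype(float)

# Q-method is a "by-person" factor analysis: correlate respondents with one another,
# then extract factors that group respondents who sorted the statements similarly.
person_corr = np.corrcoef(q_sorts)                       # respondent x respondent
eigvals, eigvecs = np.linalg.eigh(person_corr)
order = np.argsort(eigvals)[::-1][:3]                    # keep the 3 largest factors
loadings = eigvecs[:, order] * np.sqrt(eigvals[order])   # respondent loadings

# Each factor's composite statement ranking (a loading-weighted average of the
# Q-sorts of respondents who define it) summarises that shared viewpoint.
for f in range(loadings.shape[1]):
    weights = np.clip(loadings[:, f], 0.0, None)
    composite = weights @ q_sorts / (weights.sum() or 1.0)
    top = np.argsort(composite)[-3:][::-1]
    print(f"Factor {f + 1}: highest-ranked statement indices -> {top.tolist()}")
```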
Materials & Methods: The AI OSPE tool development was an iterative process that began with a multidisciplinary focus group to ascertain tool requirements. Teams developed OSPE questions with a correct/incorrect answer key, engineered the website, and integrated decision-tree AI and spaced-repetition algorithms. Semiquantitative feedback on Version 1 of the AI OSPE was collected from a cohort of MSc (OT) students using Q-method; study results were used to develop Version 2 of the tool.
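For reference, a minimal SM-2-style scheduler is sketched below as a generic illustration of spaced repetition; it is an assumption about the class of technique, not the AI OSPE's actual algorithm, whose details are not given in this abstract.

```python
# Minimal SM-2-style spaced-repetition sketch (illustrative only; the AI OSPE's
# real scheduling algorithm is not specified here).
from dataclasses import dataclass

@dataclass
class CardState:
    interval_days: float = 1.0   # days until the question is shown again
    ease: float = 2.5            # grows/shrinks with answer quality
    repetitions: int = 0         # consecutive correct answers

def schedule(state: CardState, correct: bool, quality: int) -> CardState:
    """Update a question's review interval after an AI-graded attempt.

    quality: 0-5 graded answer quality, as in the classic SM-2 scheme.
    """
    if not correct:
        # Missed questions come back quickly and restart the repetition streak.
        return CardState(interval_days=1.0, ease=max(1.3, state.ease - 0.2), repetitions=0)
    reps = state.repetitions + 1
    if reps == 1:
        interval = 1.0
    elif reps == 2:
        interval = 6.0
    else:
        interval = state.interval_days * state.ease
    # Ease drifts up after confident correct answers, down after shaky ones.
    ease = max(1.3, state.ease + (0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02)))
    return CardState(interval_days=interval, ease=ease, repetitions=reps)

# Example: three successive correct answers push the question further out each time.
s = CardState()
for q in (4, 5, 5):
    s = schedule(s, correct=True, quality=q)
    print(round(s.interval_days, 1), round(s.ease, 2))
```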
Results: Patterns in respondent preferences, consensus rankings, and extreme rankings paired with open-text feedback, all collected by Q-method, were used to guide improvements made to the tool. Data from the first quality assurance study indicated significant user satisfaction with the ease of use and accessibility of the tool and no significant distrust towards AI. Open-text responses identified user interface and gamification as priority areas for improvement. In response, the interface was updated according to WCAG 2.0 accessibility guidelines, including an expanded user control system, and the experience was updated by implementing spaced-repetition and progress-tracking features. User metrics were also collected to assess tool performance and user engagement; data from Version 2 of the tool show that over a 6-day period in which undergraduate users could use the tool to study for an upcoming test, questions completed per day increased slightly from 2649 on day 1 to 2697 on day 6, indicating user retention and satisfaction.
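The per-day engagement counts reported above can be derived from a simple usage log; the sketch below shows one way to compute such metrics, with a hypothetical log schema and column names rather than the tool's real telemetry.

```python
# Sketch of the engagement metric reported above: questions completed per day and
# active users per day, computed from a usage log. Column names and log format
# are assumptions, not the tool's actual telemetry schema.
import pandas as pd

log = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2024-02-01 09:00", "2024-02-01 09:05", "2024-02-02 10:12", "2024-02-06 08:30",
    ]),
    "user_id": ["u1", "u2", "u1", "u3"],
    "question_id": [101, 102, 101, 103],
})

daily = (
    log.assign(day=log["timestamp"].dt.date)
       .groupby("day")
       .agg(questions_completed=("question_id", "size"),
            active_users=("user_id", "nunique"))
)
print(daily)
```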
Conclusion: Q-methodology is well suited to understanding user preferences, needs, and challenges associated with the use of study tools. Student and educator perspectives on the implemented improvements will be evaluated using Q-methodology to ensure changes made to the tool are user-centric.
Significance: Insights gained from the quality assurance studies are essential to informing evidence-based educational tool development and highlight the value of user feedback, iterative design, and real-world testing in creating effective educational tools that students will use and that educators can trust and adopt.