112 - The Role of Artificial Intelligence in Creating a Novel Study Resource for First Year Medical Students
Saturday, March 23, 2024
5:00pm – 7:00pm US EDT
Location: Sheraton Hall
Poster Board Number: 112
There are separate poster presentation times for odd and even posters.
Odd poster #s – first hour
Even poster #s – second hour
Co-authors:
Helen Hayes - Neurobiology and Developmental Sciences - University of Arkansas for Medical Sciences; Evan Hicks - Neurobiology and Developmental Sciences - University of Arkansas for Medical Sciences; Vanessa Vyas - Neurobiology and Developmental Sciences - University of Arkansas for Medical Sciences; John Lee - Neurobiology and Developmental Sciences - University of Arkansas for Medical Sciences; Humam Shahare - University of Arkansas for Medical Sciences; Erica Malone, PhD - Neurobiology and Developmental Sciences - University of Arkansas for Medical Sciences; Tiffany Huitt - Director of Anatomical Gift Program, Neurobiology and Developmental Sciences, University of Arkansas for Medical Sciences; David Davies, PhD - Neurobiology and Developmental Sciences - University of Arkansas for Medical Sciences
Medical Student, University of Arkansas for Medical Sciences, Little Rock, Arkansas, United States
Abstract Body: Introduction and Objective: The rapid evolution of Artificial Intelligence (AI) and its applications in medicine have prompted educational institutions to explore its use as a learning tool in medical education. Spaced repetition is a proven learning technique utilized widely by medical students through Anki. This study investigates the capability of ChatGPT 4, a generative AI model, to create a comparable study resource for medical students. The objective is to assess the accuracy and usability of AI-generated Anki flashcards as an ancillary study tool for first-year medical students (M1s) enrolled in an introductory human anatomy course.
Methods: ChatGPT 4 was used to generate a set of flashcards covering the human anatomy learning objectives. An approved study question bank widely used by students at the University of Arkansas for Medical Sciences (UAMS) was provided to the ChatGPT 4 model as the source input. The AI-generated flashcards were evaluated for accuracy by three UAMS anatomy course professors. The cards were then provided to 175 M1s as an additional study resource. Students' subjective experiences were analyzed via a survey with open-ended questions and a Likert scale to gauge resource efficacy.
Results: A total of 528 flashcards were built using ChatGPT 4, of which 267 were vetted by UAMS anatomists with doctorates in anatomy. Fourteen of the 267 vetted cards (5.2%) duplicated information on other cards, and these repeated cards were excluded from further analysis. Of the unique cards, 79% were rated “accurate,” whereas 21% were rated “not accurate” or “partially accurate.” Thirty students responded to the survey questions regarding deck usage, 86.7% of whom reported utilizing the deck. On the Likert scale, 75% of the students who used the AI-generated resource believed it was somewhat useful but not more useful than other available resources, while 25% reported it was similar in usefulness to other resources. The most common subjective complaint was the number of duplicated cards.
Conclusion: These initial findings indicate that ChatGPT 4 shows potential for generating an accurate and valuable study resource for medical students. However, human oversight remains essential to ensure the accuracy and relevance of AI-generated content. Analysis of the results, including students' subjective feedback, reinforces the continued need for human involvement in creating AI-generated study resources.
Significance/Implication: The role of AI in medical education should not be undervalued, and we expect its applications in medicine to advance over the next decade. This study demonstrates the value of exploring AI's potential for creating educational resources in medical education, while highlighting its current capabilities and limitations.