125 - Ungraded Assessment Items for Assessing Pre-clerkship Campus Comparability
Monday, March 25, 2024
10:15am – 12:15pm US EDT
Location: Sheraton Hall
Poster Board Number: 125
There are separate poster presentation times for odd and even posters.
Odd poster #s – first hour
Even poster #s – second hour
Co-authors:
Cassie Eno, PhD – Medical Education, Creighton University, Omaha, Nebraska, United States
Abstract Body: When used effectively, multiple choice questions (MCQs) can direct both student learning and faculty teaching. MCQs require careful monitoring to ensure exam and item quality. Despite what is known about best practices for MCQ assessment, medical schools vary considerably in their performance on common content and in their standard-setting practices. Furthermore, faculty report low confidence in making assessment judgments that are consistent with those of their peers. Although the LCME requires medical schools to provide evidence of how campus comparability is assessed and acted upon, there is minimal literature describing approaches to, or evaluations of, such efforts during the pre-clerkship curriculum. The purpose of this project is to evaluate the integration of ungraded questions into the pre-clerkship assessment and campus comparability strategy.
During AY2021-22, Creighton University School of Medicine in Omaha, Nebraska added a second pre-clerkship campus in Phoenix, Arizona. Because MCQ assessments constitute a large portion of pre-clerkship assessment, they were a central focus of the pre-clerkship campus comparability strategy. To assist in the evaluation of campus comparability, we added three to four ungraded questions to each quiz. Ungraded questions were used as a mechanism to assess (1) students' conceptual understanding of content independent of faculty-written questions and (2) the range of performance variability between the two regional campuses. Following each quiz, we analyzed student performance overall and on individual questions. Questions were flagged for follow-up review if the difference in performance between the two campuses exceeded 2 SD (overall exam SD). Flagged questions were reevaluated to determine whether the content was taught sufficiently on both campuses to enable students to identify the best answer. Year-end analysis compared the performance of graded and ungraded questions and the percentage of items removed due to comparability issues.
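The flagging rule described above can be sketched as follows. This is a minimal illustration, not the authors' actual analysis code: the function name, data, and per-question percent-correct representation are assumptions made for demonstration.

```python
def flag_questions(campus_a_scores, campus_b_scores, exam_sd):
    """Return indices of questions whose cross-campus performance gap
    exceeds 2 * exam_sd (the overall exam standard deviation).

    Scores are per-question percent-correct values, one per question,
    in the same order for both campuses.
    """
    flagged = []
    for i, (a, b) in enumerate(zip(campus_a_scores, campus_b_scores)):
        if abs(a - b) > 2 * exam_sd:
            flagged.append(i)
    return flagged

# Illustrative data: with an exam SD of 8 points, only question 1
# (a 20-point gap, exceeding the 16-point threshold) is flagged.
omaha   = [85.0, 90.0, 72.0]
phoenix = [83.0, 70.0, 75.0]
print(flag_questions(omaha, phoenix, exam_sd=8.0))  # → [1]
```

Flagging is only the first step; as described above, each flagged item is then reviewed by faculty to judge whether the content was taught adequately on both campuses before any item is dropped.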
Overall, 11.5% (N = 149/1297) of graded questions and 10.5% (N = 11/105) of ungraded questions showed a difference larger than 2 standard deviations. Of those flagged graded questions, 14.1% (N = 21; 1.6% of all questions) were dropped due to comparability concerns. These analyses have identified strong evidence of comparability while also allowing curricular improvements to be strategically targeted and evidence based. In turn, faculty use this evidence to enhance students' educational experiences. Since this area is not well represented in the literature, this approach could provide a model for institutions with multiple pre-clerkship campuses to use in assessing pre-clerkship comparability.