Idea: Medical educators interested in online learning have confidence in their ability to effect knowledge change, and the tools exist to demonstrate success. Yet online learning is limited by the inability to show that the learning experience affects the higher levels of Bloom's Taxonomy. How do we determine whether an online course is succeeding beyond its knowledge-improvement goal?
Why the idea was necessary: Students took 5 online courses on the topic of addiction, covering Prevalence, Detection and Diagnosis, Comorbidities, PCP Role, and Pharmacology. The courses demonstrated significant changes in knowledge in a very small sample (N=10), with p values ranging from .033 to .001 and average knowledge scores improving from 14% to 52%. Self-efficacy also showed statistically significant improvement (p<.001) despite the very small sample. Following the learning experience, students rated themselves as able to effectively screen for and detect opioid abuse. Scores did not differ enough to detect a change in attitudes or intended behavior. Unfortunately, the standardized patient interview showed no change in clinical skills.
What was done: This project used a remote standardized patient (SP) experience to test, examine, and improve the learning. We planned to use ratings from the SP experience to identify weaknesses in student performance and to modify the online learning to address them. After the first round of students completed the study, we examined data from both the Modified Skills Inventory (a subjective rating tool completed only by the standardized patient) and the Evaluator Rating Sheet (an objective checklist completed by both the standardized patient and an observer).
Evaluation: Results from the Modified Skills Inventory showed general improvement for all participants post-experience. However, the Evaluator Rating Sheet revealed problem areas: clinical skills that were expected to improve post-experience did not. Specifically, students consistently failed to screen the standardized patient using a standardized measure, such as the CAGE-AID. Mental status exams were also rarely conducted, and students had difficulty screening patients for psychiatric comorbid conditions. We evaluated our existing curriculum and found weaknesses in the level of detail devoted to these areas. Pages covering these content areas were expanded to further illustrate these teaching points.
Conclusions: A remote SP experience is a useful tool in the development of an online educational intervention. It can help verify that students are truly mastering skills and integrating learning into their clinical experience framework.