
Assessment Development & Student Evaluation

 

Medical education involves learning and performance in multiple settings, and student competence must be assessed both through formative means, which provide learners with guidance for further progression in their abilities, and through summative means, which qualify students’ levels of achievement at a given point. Because of the variety of roles and abilities requiring assessment in multiple settings over time, assessing student performance is complex and may require a variety of tools specific to the relevant settings (Bandiera et al., 2006).

 

Having been primarily a clinical educator throughout my training, I rely on constructive feedback based on my observations as my main formative assessment tool. This has generally been appreciated by learners, who seek greater insight into their weaknesses along with specific suggestions for improving their competencies. The converse is also true: strengths should be highlighted not only to build learners’ confidence, but also to make them aware of areas where they can continue along the same path. Direct observation has also been deemed “central to the assessment of performance” under the CanMEDS framework (Bandiera et al., 2006, p. 3).

As with teaching, I believe that assessments should be provided more frequently for junior learners, with progressively greater self-reflection and self-monitoring as learners advance in their abilities. Expectations should also differ depending on the learner’s stage, and opportunities should be provided wherever possible for learners to revise their performance and in-progress evaluations based on formative feedback. By genuinely approaching and framing assessments as tools for learning and growth, and pairing them with potential learning plans, such feedback can be as instructive as it is evaluative.

 

That being said, I have also enjoyed working on assessment tools outside the clinical realm, which has allowed me to collaborate with a variety of paediatric educators and has created unique opportunities to develop new skills through scholarship in medical education.

PedsCases

PedsCases (www.pedscases.com) is an innovative paediatric online resource that began in 2008 as a medical student-initiated project with faculty support at the University of Alberta. Students were encouraged to submit questions and interactive cases for others to work through online, creating a self-assessment tool to supplement paediatric undergraduate education. As a resident, I became a major contributor of multiple-choice content, creating over 50 questions, each with justifications for both the correct and incorrect answers and a description of other associated concepts to guide self-assessment and fuel further inquiry. As PedsCases expanded to include interactive cases, podcasts, videos, and supplementary links, reaching over 55,000 unique visitors from over 150 countries, I also became a site editor to assist with peer review of all content, and helped advise on and co-author case-based learning materials.

 

The PedsCases team has since grown to incorporate new students and faculty from across Canada, and I will be continuing as a PedsCases alumni team member. My focus has thus shifted towards mentoring students and residents in the creation of podcasts and cases.

 

Reference: 
Gill, P., Kitney, L., Kozan, D., & Lewis, M. (2010). Online learning in pediatrics: A student led web-based learning modality. The Clinical Teacher, 7(1), 53–57.

National Practice Examination

Prior to 2013, there was no Canadian-developed formative in-training examination for paediatric residents to help develop and assess their knowledge and its clinical application to paediatrics. An annual national practice examination was therefore developed in collaboration with paediatric fellows from across Canada to provide a uniform experience to trainees at participating institutions. The proposed benefits of such examinations accrue both to residents, whose scores relative to a large group of residents at the same stage of training may give them a better sense of how they are doing, and to Canadian paediatric residency training programs, for which differences in resident performance across content areas may provide valuable feedback on the strengths and weaknesses of their trainees compared with those in other programs. Such information may then be used to guide educational activities to address any areas of relative weakness.

 

For the 2014 spring and fall iterations of this examination, I was asked to develop both short-answer and multiple-choice questions (SAQs/MCQs) testing knowledge in clinical pharmacology and respiratory medicine. In total, I submitted two SAQs each in clinical pharmacology and respiratory medicine, along with four MCQs in clinical pharmacology and two MCQs in respiratory medicine. Summary marks and feedback for these questions are being collected.

Night Float Assessment System

The night float rotation has been in place for a number of years in the Paediatric Residency Program at the University of Alberta. Its purpose is to free residents on their CTU (clinical teaching unit) rotations from overnight call (and thus from being post-call) and to provide continuity of care for inpatients. It consists of five 15-hour shifts per week, not including handover time, with in-house coverage of all inpatient care and consultations for all inpatient admissions provided by one senior and one junior paediatric resident, along with an off-service resident and anywhere from one to five medical students. While this work offers considerable potential for education, with an opportunity to function more autonomously and with greater responsibility for a larger and more diverse group of patients than seen during regular hours, residents were not previously receiving structured feedback on the rotation.

 

Dr. Karen Forbes had led a faculty-proposed night float assessment system to try to address the educational and assessment needs of this rotation. However, the proposal was unsatisfactory to residents for several reasons, and I became involved in the project to give residents a voice by analyzing and articulating their concerns. For one, during these 75-hour work weeks, the demanding multi-source feedback proposal called for residents to assess one another weekly and every medical student on a nightly basis, while simultaneously making it the resident’s responsibility to have staff evaluate them on each shift, by phone or face to face with the attending physician. I also took issue with the evaluation system allowing staff input for one to two of the three teams, given that the third team had a completely different patient population and multiple covering physicians. Further, because attending staff almost invariably remained at home throughout the night, they had no opportunity to directly observe resident practice across many CanMEDS competency domains, receiving only a snapshot, through brief communications, of the varied tasks residents completed in caring for the team. I also feared that asking staff to assess too many competencies based on indirect communication could lead them to complete assessments based on preconceived notions of residents formed through previous daytime CTU observations, rather than on the particular stretch of nights at hand. Finally, while I saw value in peer evaluation, I felt it would be challenging for residents to assign specific scores to one another in an identifiable fashion.

 

To address these concerns, I met with Dr. Forbes and two of the paediatric chief residents over the course of 2011 and proposed solutions that were integrated into the assessments that took place. Specifically, I requested that:

  • Medical students be administered evaluation forms for senior paediatric residents, with optional evaluations of any junior residents that they had worked with in some capacity.

  • Paediatric residents fill out evaluation forms for one another based on a rubric, to provide clear feedback that could be justified by the various statements within that rubric. Residents would be asked to rate one another within each domain as needing improvement, satisfactory, or well developed. Prior proposals had called for reciprocal resident evaluations of junior and senior residents involving a 5-point Likert scale.

  • On-call attending physicians evaluate residents based on specific case presentations discussed by phone, which could take one of two forms: reviewing a consultation for admission (including discussion of the history, physical examination, investigations, impression, problem list, and plans), or reviewing acute clinical problems in inpatients requiring re-assessment and a change in plans. Residents would request the evaluation prior to presenting the case, so that the on-call attending physician is primed to provide assessment and feedback. This would be formalized, with physicians able to request and complete evaluations online (through One45) immediately after the case discussion. Evaluations would take the form of encounter cards.

  • Time management and teaching be removed as domains in which staff would evaluate residents, because of the absence of direct observation. Additionally, “not able to assess” should be added as a potential choice on evaluation forms, given the variability of service and collaboration between residents and staff.

 

Dr. Forbes and other collaborators agreed with these recommendations, all of which were ultimately incorporated into the night float evaluation proposal approved by the residency training committee in January 2012 for implementation as a six-month pilot project. I am happy to report that both residents and staff were generally satisfied with this evaluation system, which has continued to the present.

 

Examples of evaluations generated:

Other

OSCE (Objective Structured Clinical Examination) Examiner

 

An OSCE evaluates learner performance either formatively (TOSCE) or summatively as each student encounters a series of stations representing different clinical scenarios, which may involve standardized patient encounters, oral examinations, interpretation of visual information or other investigations, or running simulations (Bandiera et al., 2006). As an OSCE examiner, I have enjoyed another opportunity to directly observe and provide feedback on students’ performance of specific skills or content.

 

2013                 Teaching OSCE (TOSCE) Examiner, Pediatric Clerkship, MDCN 508, MD Program, University of Calgary

 

2011                 OSCE Examiner, Year 4 Exams, MED 540, MD Program, University of Alberta

 

2010                 OSCE Examiner, Pre-clinical Exam, MED 520, MD Program, University of Alberta

 

2009-2011       OSCE/TOSCE Examiner, Years 1 & 3, MD Program, University of Alberta

 

ITER (In-Training Evaluation Report)

 

Through my close clinical work with residents and medical students, I have been able to observe and assess learners’ real-time performance in authentic clinical settings. While I believe that longitudinal, formative feedback is more important in helping students, there is also value in summarizing these observations in an ITER, which I have either completed directly or assisted staff in completing for clinical trainees since becoming a senior resident.

 

 

References

 

Bandiera, G., Sherbino, J., & Frank, J. R. (2006). The CanMEDS assessment tools handbook: An introductory guide to assessment methods for the CanMEDS competencies. Ottawa: The Royal College of Physicians and Surgeons of Canada.
