#MedEdJ Conversation Kickstarter | The Effect of Continuing Professional Development


The following post is by one of our e-Council members, Robert R. Cooney MD, MSMedEd, RDMS, FAAEM, FACEP.  Robert is the Associate Program Director for the Emergency Medicine Residency Program at Conemaugh Memorial Medical Center in Johnstown, PA.  Thanks for tuning in and conversing with us.  – TC.

*****

“We got it wrong and sincerely apologize… We launched programs that weren’t ready and we didn’t deliver a MOC program that physicians found meaningful… We want to change that.” – ABIM President and CEO Richard Baron, MD, in an open letter to diplomates

In this month’s upcoming issue of Medical Education, Wenghofer et al. provide us with some additional data about the effects of continuing professional development, specifically highlighting public complaints against physicians. This article is extremely timely given the recent developments in the United States. The American Board of Internal Medicine (ABIM) recently announced that it was going to institute a moratorium on its “Maintenance of Certification (MOC)” program for at least 2 years.

In this study, the researchers attempted to determine whether a correlation existed between participation in different types of Continuing Professional Development (CPD) and complaints against physicians. Using a matched cohort analysis, they were able to demonstrate that physicians who participated in CPD were statistically less likely to receive quality of care-related complaints in the following year. There was no association with complaints overall, complaints related to attitude and communication, or complaints related to professionalism. They also found that physicians who participated in assessment-based CPD had increased odds of receiving a complaint. The authors note that assessment-based programs are often prompted by work-related events, which may explain why this particular type of CPD demonstrates this finding. They also note that older physicians and physicians seeing higher volumes of patients had increased odds of receiving a complaint as well. This raises questions about the role of burnout amongst physicians who receive complaints.

With the move to mandatory “lifelong learning” activities and regulatory oversight, there is a need for this type of research. All educators recognize the need for assessments to be both valid and reliable. Organizations that regulate these types of professional development must be held to the same standard, especially since a significant amount of money is at stake for the learners. Critics of the process point to its cost and burdensome implementation, and they question its clinical relevance.

In the past year I have had the good fortune to study quality improvement from the experts at the Institute for Healthcare Improvement.

One of the key points that we learned is that there are three types of data:
1) data for improvement,
2) data for evaluation, and
3) data for research.

Data collected for one purpose should not be used for another (i.e. data collected for quality improvement should not be used for evaluation purposes). Unfortunately, we seem to mix purposes far too often. For example, patient satisfaction data are collected under the guise of quality improvement but are frequently used to evaluate physicians’ performance and even their pay. I have similar concerns about regulated lifelong learning activities. The purpose of these activities should be to improve my practice and, ultimately, my patient outcomes.

QUESTIONS

With these findings and concerns in mind, I’m hoping that the readership of Medical Education would be willing to discuss the following:

Q1: Does the current model of Continuing Professional Development ensure that physicians have the skills needed to manage patients in a 21st-century healthcare system?

Q2: How best should data from these programs be utilized?

Q3: How can we ensure public accountability for health outcomes when financial conflicts of interest exist within regulatory bodies that oversee these programs?

Please help start the conversation by replying below to these questions (and be sure to mark which question you’re answering by using Q1/Q2/Q3). Feel free to share this via your preferred social network using the hashtag #MedEdJ.

Conversation Kickstarter | Feedback and the learner


by Teresa Chan, MD, FRCPC
e-Editor intern, Assistant Professor, McMaster University

In this month’s upcoming issue of The Clinical Teacher, David Boud writes a commissioned paper that provides us with helpful tips and hints regarding feedback.  This article is most definitely a ‘go to’ resource for any health professional who teaches in the clinical setting.

As he highlights in the article, “… [f]eedback in clinical settings, must be characterised not solely in terms of inputs, but also by the effects that result.” This concept particularly hit home for me. You see, for the past few years I’ve been working with our residency education program at my institution to redesign workplace-based assessment to emphasize feedback. This work has resulted in the McMaster Modular Assessment Program, and I must say, even after all of the literature I’ve read, Dr. Boud’s paper synthesized and summarized some really pragmatic tips that I will be taking to the bedside.

As a learner, I was always the pesky one who asked for feedback… and I recall being quite aggressive in asking for specific ways to improve my burgeoning practice. Now, as a junior clinician educator interested in assessment and feedback, I have spent much of my time trying to figure out how best to design a system that creates the opportunities for residents to do the same.

At times, I worry that by being too much of an educational designer, I am removing agency from the learner and decreasing the impetus for them to seek this feedback themselves. Recently, I read Stone and Heen’s book Thanks for the Feedback: The Science and Art of Receiving Feedback Well, which emphasizes the key skill of receiving feedback well.

And so, I am wondering if I might engage The Clinical Teacher audience in a discussion around the idea of feedback using these three questions:

  • Q1: Are we ‘babying’ learners these days too much by creating systems that encourage feedback?
  • Q2: Or does the system need to be there to provide a scaffold for learners so that they might one day more fully participate in the feedback experience?
  • Q3: Ultimately, what is the role of the adult health professions learner in the feedback process?

Please drop a line below to reply to these questions (and be sure to mark which question you’re answering using Q1/Q2/Q3).  Feel free to tweet around this using the hashtag #ClinTeach.

Learning about gender on campus: an analysis of the hidden curriculum for medical students

By Ling-Fang Cheng and Hsing-Chen Yang

Link to article

Context

Gender sensitivity is a crucial factor in the provision of quality health care. This paper explores acquired gendered values and attitudes among medical students through an analysis of the hidden curriculum that exists within formal medical classes and informal learning.

Methods

Discourse analysis was adopted as the research method. Data were collected from the Bulletin Board System (BBS), which represented an essential communication platform among students in Taiwan before the era of Facebook. The study examined 197 gender-related postings on the BBS boards of nine of 11 universities with a medical department in Taiwan, over a period of 10 years from 2000 to 2010.

Results

The five distinctive characteristics of the hidden curriculum were as follows: (i) gendered stereotypes of physiological knowledge; (ii) biased treatment of women; (iii) stereotyped gender-based division of labour; (iv) sexual harassment and a hostile environment, and (v) ridiculing of lesbian, gay, bisexual and transgender (LGBT) people. Both teachers and students co-produced a heterosexual masculine culture and sexism, including ‘benevolent sexism’ and ‘hostile sexism’. As a result, the self-esteem and learning opportunities of female and LGBT students have been eroded.

Conclusions

The paper explores gender dynamics in the context of a hidden curriculum in which heterosexual masculinity and stereotyped sexism are prevalent as norms. Both teachers and students, whether through formal medical classes or informal extracurricular interactive activities, are noted to contribute to the consolidation of such norms. The study tentatively suggests three strategies for integrating gender into medical education: (i) by separating physiological knowledge from gender stereotyping in teaching; (ii) by highlighting the importance of gender sensitivity in the language used within and outside the classroom by teachers and students, and (iii) by broadening the horizons of both teachers and students by recounting examples of the lived experiences of those who have been excluded and discriminated against, particularly members of LGBT and other minorities.

DOI: 10.1111/medu.12628

Grades in formative workplace-based assessment: a study of what works for whom and why

By Janet Lefroy, Ashley Hawarden, Simon P Gay, Robert K McKinley and Jennifer Cleland

Link to article.

Context

Grades are commonly used in formative workplace-based assessment (WBA) in medical education and training, but may draw attention away from feedback about the task. A dilemma arises because the self-regulatory focus of a trainee must include self-awareness relative to agreed standards, which implies grading.

Objectives

In this study we aimed to understand the meaning which medical students construct from WBA feedback with and without grades, and what influences this.

Methods

Year 3 students were invited to take part in a randomised crossover study in which each student served as his or her own control. Each student undertook one WBA with and one without grades, and then chose whether or not to be given grades in a third WBA. These preferences were explored in semi-structured interviews. A realist approach to analysis was used to gain understanding of student preferences and the impact of feedback with and without grades.

Results

Of 83 students who were given feedback with and without grades, 65 (78%) then chose to have feedback with grades and 18 (22%) without grades in their third WBA. A total of 24 students were interviewed. Students described how grades locate their performance and calibrate their self-assessment. For some, low grades focused attention and effort. Satisfactory and high grades enhanced self-efficacy.

Conclusions

Grades are concrete, powerful and blunt, can be harmful and need to be explained to help students create helpful meaning from them. Low grades risk reducing self-efficacy in some and may encourage others to focus on proving their ability rather than on improvement. A metaphor of the semi-permeable membrane is introduced to elucidate how students reduced potential negative effects and enhanced the positive effects of feedback with grades by selective filtering and pumping. This study illuminates the complexity of the processing of feedback by its recipients, and informs the use of grading in the provision of more effective, tailored feedback.

DOI: 10.1111/medu.12659

Learning to care for older patients: hospitals and nursing homes as learning environments

By Marije Huls, Sophia E de Rooij, Annemie Diepstraten, Raymond Koopmans and Esther Helmich

Link to article.

Context

A significant challenge facing health care is the ageing of the population, which calls for a major response in medical education. Most clinical learning takes place within hospitals, but nursing homes may also represent suitable learning environments in which students can gain competencies in geriatric medicine.

Objectives

This study explores what students perceive as the main learning outcomes of a geriatric medicine clerkship in a hospital or a nursing home, and explicitly addresses factors that may stimulate or hamper the learning process.

Methods

This qualitative study falls within a constructivist paradigm: it draws on socio-cultural learning theory and is guided by the principles of constructivist grounded theory. There were two phases of data collection. Firstly, a maximum variation sample of 68 students completed a worksheet, giving brief written answers on questions regarding their geriatric medicine clerkships. Secondly, focus group discussions were conducted with 19 purposively sampled students. We used template analysis, iteratively cycling between data collection and analysis, using a constant comparative process.

Results

Students described a broad range of learning outcomes and formative experiences that were largely distinct from their learning in previous clerkships with regard to specific geriatric knowledge, deliberate decision making, end-of-life care, interprofessional collaboration and communication. According to students, the nursing home differed from the hospital in three aspects: interprofessional collaboration was more prominent; the lower resources available in nursing homes stimulated students to be creative, and students reported having greater autonomy in nursing homes compared with the more extensive educational guidance provided in hospitals.

Conclusions

In both hospitals and nursing homes, students not only learn to care for older patients, but also describe various broader learning outcomes necessary to become good doctors. The results of our study, in particular the specific benefits and challenges associated with learning in the nursing home, may further inform the implementation of geriatric medicine clerkships in hospitals and nursing homes.

DOI: 10.1111/medu.12646

Reading between the lines: faculty interpretations of narrative evaluation comments

By Shiphra Ginsburg, Glenn Regehr, Lorelei Lingard and Kevin W Eva

Link to article.

Objectives

Narrative comments are used routinely in many forms of rater-based assessment. Interpretation can be difficult as a result of idiosyncratic writing styles and disconnects between literal and intended meanings. Our purpose was to explore how faculty attendings interpret and make sense of the narrative comments on residents’ in-training evaluation reports (ITERs) and to determine the language cues that appear to be influential in generating and justifying their interpretations.

Methods

A group of 24 internal medicine (IM) faculty attendings each categorised a subgroup of postgraduate year 1 (PGY1) and PGY2 IM residents based solely on ITER comments. They were then interviewed to determine how they had made their judgements. Constant comparative techniques from constructivist grounded theory were used to analyse the interviews and develop a framework to help in understanding how ITER language was interpreted.

Results

The overarching theme of ‘reading between the lines’ explained how participants read and interpreted ITER comments. Scanning for ‘flags’ was part of this strategy. Participants also described specific factors that shaped their judgements, including: consistency of comments; competency domain; specificity; quantity, and context (evaluator identity, rotation type and timing). There were several perceived purposes of ITER comments, including feedback to the resident, summative assessment and other more socially complex objectives.

Conclusions

Participants made inferences based on what they thought evaluators intended by their comments and seemed to share an understanding of a ‘hidden code’. Participants’ ability to ‘read between the lines’ explains how comments can be effectively used to categorise and rank-order residents. However, it also suggests a mechanism whereby variable interpretations can arise. Our findings suggest that current assumptions about the purpose, value and effectiveness of ITER comments may be incomplete. Linguistic pragmatics and politeness theories may shed light on why such an implicit code might evolve and be maintained in clinical evaluation.

DOI: 10.1111/medu.12637

The effect of dyad versus individual simulation-based ultrasound training on skills transfer

By Martin G Tolsgaard, Mette E Madsen, Charlotte Ringsted, Birgitte S Oxlund, Anna Oldenburg, Jette L Sorensen, Bent Ottesen and Ann Tabor

Link to article here.

Context

Dyad practice may be as effective as individual practice during clinical skills training, improve students’ confidence, and reduce costs of training. However, there is little evidence that dyad training is non-inferior to single-student practice in terms of skills transfer.

Objectives

This study was conducted to compare the effectiveness of simulation-based ultrasound training in pairs (dyad practice) with that of training alone (single-student practice) on skills transfer.

Methods

In a non-inferiority trial, 30 ultrasound novices were randomised to dyad (n = 16) or single-student (n = 14) practice. All participants completed a 2-hour training programme on a transvaginal ultrasound simulator. Participants in the dyad group practised together and took turns as the active practitioner, whereas participants in the single group practised alone. Performance improvements were evaluated through pre-, post- and transfer tests. The transfer test involved the assessment of a transvaginal ultrasound scan by one of two clinicians using the Objective Structured Assessment of Ultrasound Skills (OSAUS).

Results

Thirty participants completed the simulation-based training and 24 of these completed the transfer test. Dyad training was found to be non-inferior to single-student training: transfer test OSAUS scores were significantly higher than the pre-specified non-inferiority margin (delta score 7.8%, 95% confidence interval −3.8% to 19.6%; p = 0.04). More dyad (71.4%) than single (30.0%) trainees achieved OSAUS scores above a pre-established pass/fail level in the transfer test (p = 0.05). There were significant differences in performance scores before and after training in both groups (pre- versus post-test, p < 0.01) with large effect sizes (Cohen’s d = 3.85) and no significant interactions between training type and performance (p = 0.59). The dyad group demonstrated higher training efficiency in terms of simulator score per number of attempts compared with the single-student group (p = 0.03).
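For readers less familiar with non-inferiority logic, the conclusion rests on the lower bound of the confidence interval for the score difference lying above the pre-specified margin. The minimal sketch below illustrates that decision rule only; the margin value shown is a made-up placeholder, since the abstract does not report the study’s actual figure.

```python
# Illustrative non-inferiority decision rule (hypothetical margin; the study's
# pre-specified margin is not reported in this abstract).
def is_non_inferior(ci_lower_pct: float, margin_pct: float) -> bool:
    """The dyad-minus-single difference is non-inferior when the lower
    confidence bound does not fall below minus the margin."""
    return ci_lower_pct > -margin_pct

# Reported 95% CI for the delta score: -3.8% to 19.6%.
print(is_non_inferior(ci_lower_pct=-3.8, margin_pct=10.0))  # True under an assumed 10% margin
```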

Conclusion

Dyad practice improves the efficiency of simulation-based training and is non-inferior to individual practice in terms of skills transfer.

DOI: 10.1111/medu.12624

Evaluating the impact of high- and low-fidelity instruction in the development of auscultation skills

By Ruth Chen, Lawrence E Grierson and Geoffrey R Norman

Link to article.

Context

A principal justification for the use of high-fidelity (HF) simulation is that, because it is closer to reality, students will be more motivated to learn and, consequently, will be better able to transfer their learning to real patients. However, the increased authenticity is accompanied by greater complexity, which may reduce learning, and variability in the presentation of a condition on an HF simulator is typically restricted.

Objectives

This study was conducted to explore the effectiveness of HF and low-fidelity (LF) simulation for learning within the clinical education and practice domains of cardiac and respiratory auscultation and physical assessment skills.

Methods

Senior-level nursing students were randomised to HF and LF instruction groups or to a control group. Primary outcome measures included LF (digital sounds on a computer) and HF (human patient simulator) auscultation tests of cardiac and respiratory sounds, as well as observer-rated performances in simulated clinical scenarios.

Results

On the LF auscultation test, the LF group consistently demonstrated performance comparable or superior to that of the HF group, and both were superior to the performance of the control group. For both HF outcome measures, there was no significant difference in performance between the HF and LF instruction groups.

Conclusions

The results from this study suggest that highly contextualised learning environments may not be uniformly advantageous for instruction and may lead to ineffective learning by increasing extraneous cognitive load in novice learners.

DOI: 10.1111/medu.12653

The effect of continuing professional development on public complaints: a case–control study

By Elizabeth F Wenghofer, Craig Campbell, Bernard Marlow, Sophia M Kam, Lorraine Carter and William McCauley

Link to article here.

Objectives

This study aimed to investigate the relationship between participation in different types of continuing professional development (CPD), and incidences and types of public complaint against physicians.

Methods

Cases included physicians against whom complaints were made by members of the public to the medical regulatory body in Ontario, Canada, the College of Physicians and Surgeons of Ontario (CPSO), during 2008 and 2009. The control cohort included physicians against whom no complaints were documented during the same period. We focused on complaints related to physician communication, quality of care and professionalism. The CPD data included all Royal College of Physicians and Surgeons of Canada (RCPSC) and College of Family Physicians of Canada (CFPC) CPD programme activities reported by the case and control physicians. Multivariate logistic regression models were used to determine if the independent variable, reported participation in CPD, was associated with the dependent variable, the complaints-related status of the physician in the year following reported CPD activities.
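To give a concrete picture of the modelling step, here is a minimal sketch of a multivariate logistic regression of this general shape, written in Python with statsmodels. The data frame and variable names (complaint, cpd_participation, age, patient_volume) are hypothetical stand-ins for illustration and do not reflect the authors’ actual data or model specification.

```python
# Minimal sketch of a multivariate logistic regression of the same general
# form; the simulated data and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "complaint": rng.binomial(1, 0.2, 500),          # 1 = complaint received in following year
    "cpd_participation": rng.binomial(1, 0.6, 500),  # 1 = reported CPD participation
    "age": rng.normal(48, 10, 500),
    "patient_volume": rng.normal(3000, 800, 500),
})

model = smf.logit("complaint ~ cpd_participation + age + patient_volume", data=df).fit()
print(np.exp(model.params))  # exponentiated coefficients give odds ratios
```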

Results

A total of 2792 physicians were included in the study. There was a significant relationship between participation in CPD, type of CPD and type of complaint received. Analysis indicated that physicians who reported overall participation in CPD activities were significantly less likely (odds ratio 0.604; p = 0.028) to receive quality of care-related complaints than those who did not report participating in CPD. Additionally, participation in group-based CPD was less likely (OR 0.681; p = 0.041) to result in quality of care-related complaints.

Conclusions

The findings demonstrate a positive relationship between participation in the national CPD programmes of the CFPC and RCPSC, and lower numbers of public complaints received by the CPSO. As certification bodies and regulators alike are increasingly mandating CPD, they are encouraged to continually evaluate the effectiveness of their programmes to maximise programme impact on physician performance at the population level.

DOI: 10.1111/medu.12633

Learning in student-run clinics: a systematic review

By Tim Schutte, Jelle Tichelaar, Ramon S Dekker, Michiel A van Agtmael, Theo P G M de Vries and Milan C Richir

Link to article here.

Context

Student-run clinics (SRCs) have existed for many years and may provide the most realistic setting for context-based learning and legitimate early clinical experiences with responsibility for patient care. We reviewed the literature on student outcomes of participation in SRCs.

Methods

A systematic literature review was performed using the PubMed, EMBASE, PsycINFO and ERIC databases. Included articles were reviewed for conclusions and outcomes; study quality was assessed using the Medical Education Research Study Quality Instrument (MERSQI).

Results

A total of 42 articles met the inclusion criteria and were included in the quantitative synthesis. The effects of participation on students’ attitudes were mainly positive: students valued the SRC experience. Data on the effects of SRC participation on students’ skills and knowledge were based mainly on expert opinions and student surveys. Students reported improved skills and indicated that they had acquired knowledge they were unlikely to have gained elsewhere in the curriculum. The quality of specific aspects of care delivered by students was comparable with that of regular care.

Conclusions

The suggestion that students should be trained as medical professionals with responsibility for patient care early in the curriculum is attractive. In an SRC this responsibility is central. Students valued the early training opportunity in SRCs and liked participating. However, little is known about the effect of SRC participation on students’ skills and knowledge. The quality of care provided by students seemed adequate. Further research is needed to assess the effect of SRC participation on students’ skills, knowledge and behaviour.

DOI: 10.1111/medu.12625

All aflutter with tweets…

We’re happy to let you know that the blog/discussion forum for the two journals Medical Education and The Clinical Teacher now includes a feed to capture the related tweets using the hashtags #MedEdJ for Medical Education and #ClinTeachJ for The Clinical Teacher.

Come and see the discussion in the Twittersphere, and add to the mix at www.mededucconversations.com (Conversations in Medical Education)!