#MedEdJ Conversation Kickstarter | The Effect of Continuing Professional Development

The following post is by one of our e-Council members, Robert R. Cooney MD, MSMedEd, RDMS, FAAEM, FACEP.  Robert is the Associate Program Director for the Emergency Medicine Residency Program at Conemaugh Memorial Medical Center in Johnstown, PA.  Thanks for tuning in and conversing with us.  – TC.

*****

“We got it wrong and sincerely apologize… We launched programs that weren’t ready and we didn’t deliver a MOC program that physicians found meaningful… We want to change that.”          – ABIM President and CEO Richard Baron, MD in an open letter to diplomates

In this month’s upcoming issue of Medical Education, Wenghofer et al. provide us with some additional data about the effects of continuing professional development, specifically highlighting public complaints against physicians. This article is extremely timely given recent developments in the United States. The American Board of Internal Medicine (ABIM) recently announced that it would institute a moratorium on its “Maintenance of Certification (MOC)” program for at least two years.

In this study, the researchers attempted to determine whether a correlation existed between participation in different types of Continuing Professional Development (CPD) and complaints against physicians. Using a matched cohort analysis, they were able to demonstrate that physicians who participated in CPD were statistically less likely to have received complaints about quality of care in the prior year. There was no correlation for complaints overall, complaints related to attitude and communication, or complaints related to professionalism. They also found that physicians who participated in assessment-based CPD had increased odds of receiving a complaint. The authors note that assessment-based programs are often prompted by work-related events, which may explain why this particular type of CPD demonstrates this finding. They also note that older physicians and physicians seeing higher volumes of patients had increased odds of receiving a complaint as well. This raises the question of the role of burnout amongst physicians who receive complaints.

With the move to mandatory “lifelong learning” activities and regulatory oversight, there is a need for this type of research. All educators recognize the need for assessments to be both valid and reliable. Organizations that regulate these types of professional development must be held to the same standard, especially since a significant amount of money is at stake for the learners. Critics of the process point to its costliness and burdensome implementation, and they question its clinical relevance.

In the past year I have had the good fortune to study quality improvement from the experts at the Institute for Healthcare Improvement.

One of the key points that we learned is that there are three types of data:
1) data for improvement,
2) data for evaluation, and
3) data for research.

Data that is collected for one purpose should not be used for another (e.g. data collected for quality improvement should not be used for evaluation purposes). Unfortunately, we seem to mix purposes far too often. For example, patient satisfaction data are collected under the guise of quality improvement but are frequently used to evaluate physicians’ performance and even their pay. I have similar concerns about regulated lifelong learning activities. The purpose of these activities should be to improve my practice and, ultimately, my patient outcomes.

QUESTIONS

With these findings and concerns in mind, I’m hoping that the readership of Medical Education would be willing to discuss the following:

Q1: Does the current model of Continuing Professional Development ensure that physicians have the skills needed to manage patients in a 21st-century healthcare system?

Q2: How best should data from these programs be utilized?

Q3: How can we ensure public accountability for health outcomes when financial conflicts of interest exist within regulatory bodies that oversee these programs?

Please help start the conversation by replying below to these questions (and be sure to mark which question you’re answering by using Q1/Q2/Q3). Feel free to share this via your preferred social network by using the hashtag #MedEdJ.

Conversation Kickstarter | Feedback and the learner

by Teresa Chan, MD, FRCPC
e-Editor intern, Assistant Professor, McMaster University

In this month’s upcoming issue of The Clinical Teacher, David Boud writes a commissioned paper that provides us with helpful tips and hints regarding feedback.  This article is most definitely a ‘go to’ resource for any health professional who teaches in the clinical setting.

As he highlights in the article, “… [f]eedback in clinical settings, must be characterised not solely in terms of inputs, but also by the effects that result.”  This concept particularly hit home for me.  You see, for the past few years I’ve been hard at work at my institution with our residency education program to redesign workplace-based assessment to emphasize feedback.  This has resulted in the McMaster Modular Assessment Program, and I must say, even after all of the literature I’ve read, Dr. Boud’s paper synthesized and summarized some truly pragmatic tips that I will be taking to the bedside.

As a learner, I was always the pesky one asking for feedback… and I recall being quite aggressive in asking for specific ways to improve my burgeoning practice.  Now, as a junior clinician educator interested in assessment and feedback, I have spent much of my career trying to figure out how best to design a system that creates opportunities for residents to do the same.

At times, I worry that by being too much of an educational designer, I am removing agency from the learner and decreasing the impetus for them to self-direct this feedback. Recently, I read Stone and Heen’s book, Thanks for the Feedback: The Science and Art of Receiving Feedback Well, which emphasizes the key skill of receiving feedback well.

And so, I am wondering if I might engage The Clinical Teacher audience in a discussion around the idea of feedback using these three questions:

  • Q1: Are we ‘babying’ learners these days too much by creating systems that encourage feedback?
  • Q2: Or does the system need to be there to provide a scaffold for learners so that they might one day more fully participate in the feedback experience?
  • Q3: Ultimately, what is the role of the adult health professions learner in the feedback process?

Please drop a line below to reply to these questions (and be sure to mark which question you’re answering using Q1/Q2/Q3).  Feel free to tweet around this using the hashtag #ClinTeach.

Exploring the impact of workplace cyberbullying on trainee doctors

By Samuel Farley, Iain Coyne, Christine Sprigg, Carolyn Axtell and Ganesh Subramanian.

Link to article here.

Objectives

Workplace bullying is an occupational hazard for trainee doctors. However, little is known about their experiences of cyberbullying at work. This study examines the impact of cyberbullying among trainee doctors, and how attributions of blame for cyberbullying influence individual and work-related outcomes.

Methods

Doctors more than 6 months into training were asked to complete an online survey that included measures of cyberbullying, blame attribution, negative emotion, job satisfaction, interactional justice and mental strain. A total of 158 trainee doctors (104 women, 54 men) completed the survey.

Results

Overall, 73 (46.2%) respondents had experienced at least one act of cyberbullying. Cyberbullying adversely impacted on job satisfaction (β = −0.19; p < 0.05) and mental strain (β = 0.22; p < 0.001), although attributions of blame for the cyberbullying influenced its impact and the path of mediation. Negative emotion mediated the relationship between self-blame for a cyberbullying act and mental strain, whereas interactional injustice mediated the association between blaming the perpetrator and job dissatisfaction.
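
For readers curious about the mediation analyses reported here, the sketch below illustrates the general product-of-coefficients approach to mediation in Python. It is a minimal illustration only: the variable names, simulated data and coefficients are hypothetical, and this is not the authors' actual model.

```python
# Illustrative sketch of a simple mediation analysis (product-of-coefficients
# style), NOT the authors' actual models. All names and data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 158  # sample size borrowed from the study for flavour
cb = rng.normal(size=n)                      # hypothetical exposure (cyberbullying)
ne = 0.5 * cb + rng.normal(size=n)           # hypothetical mediator (negative emotion)
ms = 0.4 * ne + 0.1 * cb + rng.normal(size=n)  # hypothetical outcome (mental strain)

df = pd.DataFrame({"cb": cb, "ne": ne, "ms": ms})

# Path a: exposure -> mediator
a = smf.ols("ne ~ cb", df).fit().params["cb"]
# Path b and direct effect c': mediator -> outcome, controlling for exposure
fit = smf.ols("ms ~ ne + cb", df).fit()
b, c_prime = fit.params["ne"], fit.params["cb"]

print(f"indirect effect (a*b) = {a * b:.3f}, direct effect (c') = {c_prime:.3f}")
```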

Conclusions

Acts of cyberbullying had been experienced by nearly half of the sample during their training and were found to significantly relate to ill health and job dissatisfaction. The deleterious impact of cyberbullying can be addressed through both workplace policies and training for trainee doctors and experienced medical professionals.

DOI: 10.1111/medu.12666

‘Sorry, I meant the patient’s left side’: impact of distraction on left–right discrimination

By John McKinley, Martin Dempster and Gerard J Gormley

Link to article here.

Context

Medical students can have difficulty in distinguishing left from right. Many infamous medical errors have occurred when a procedure has been performed on the wrong side, such as in the removal of the wrong kidney. Clinicians encounter many distractions during their work. There is limited information on how these affect performance.

Objectives

Using a neuropsychological paradigm, we aim to elucidate the impacts of different types of distraction on left–right (LR) discrimination ability.

Methods

Medical students were recruited to a study with four arms: (i) control arm (no distraction); (ii) auditory distraction arm (continuous ambient ward noise); (iii) cognitive distraction arm (interruptions with clinical cognitive tasks), and (iv) auditory and cognitive distraction arm. Participants’ LR discrimination ability was measured using the validated Bergen Left–Right Discrimination Test (BLRDT). Multivariate analysis of variance was used to analyse the impacts of the different forms of distraction on participants’ performance on the BLRDT. Additional analyses looked at effects of demographics on performance and correlated participants’ self-perceived LR discrimination ability and their actual performance.
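
For readers less familiar with the method, the sketch below shows what a multivariate analysis of variance across study arms can look like in code. It is purely illustrative: the arm labels, subscore names and simulated data are invented for demonstration and are not the study's.

```python
# Minimal sketch of a MANOVA comparing two outcome measures across four
# study arms, in the spirit of the analysis described; data are invented.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(2)
arms = np.repeat(["control", "auditory", "cognitive", "both"], 30)
df = pd.DataFrame({
    "arm": arms,
    "blrdt_easy": rng.normal(20, 3, size=arms.size),  # hypothetical easy-task subscore
    "blrdt_hard": rng.normal(12, 4, size=arms.size),  # hypothetical hard-task subscore
})

# Both subscores modelled jointly as a function of study arm
fit = MANOVA.from_formula("blrdt_easy + blrdt_hard ~ arm", data=df)
print(fit.mv_test())  # Wilks' lambda, Pillai's trace, etc.
```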

Results

A total of 234 students were recruited. Cognitive distraction had a greater negative impact on BLRDT performance than auditory distraction. Combined auditory and cognitive distraction had a negative impact on performance, but only in the most difficult LR task was this negative impact found to be significantly greater than that of cognitive distraction alone. There was a significant medium-sized correlation between perceived LR discrimination ability and actual overall BLRDT performance.

Conclusions

Distraction has a significant impact on performance and multifaceted approaches are required to reduce LR errors. Educationally, greater emphasis on the linking of theory and clinical application is required to support patient safety and human factor training in medical school curricula. Distraction has the potential to impair an individual’s ability to make accurate LR decisions and students should be trained from undergraduate level to be mindful of this.

DOI: 10.1111/medu.12658

The struggling student: a thematic analysis from the self-regulated learning perspective

By Rakesh Patel, Carolyn Tarrant, Sheila Bonas, Janet Yates and John Sandars

Link to article here.

Context

Students who engage in self-regulated learning (SRL) are more likely to achieve academic success compared with students who have deficits in SRL and tend to struggle with academic performance. Understanding how poor SRL affects the response to failure at assessment will inform the development of better remediation.

Methods

Semi-structured interviews were conducted with 55 students who had failed the final re-sit assessment at two medical schools in the UK to explore their use of SRL processes. A thematic analysis approach was used to identify the factors, from an SRL perspective, that prevented students from appropriately and adaptively overcoming failure, and confined them to a cycle of recurrent failure.

Results

Struggling students did not utilise key SRL processes, which caused them to make inappropriate choices of learning strategies for written and clinical formats of assessment, and to use maladaptive strategies for coping with failure. Their normalisation of the experience and external attribution of failure represented barriers to their taking up of formal support and seeking informal help from peers.

Conclusions

This study identified that struggling students had problems with SRL, which caused them to enter a cycle of failure as a result of their limited attempts to access formal and informal support. Implications for how medical schools can create a culture that supports the seeking of help and the development of SRL, and improves remediation for struggling students, are discussed.

DOI: 10.1111/medu.12651

How do surgeons think they learn about communication? A qualitative study

By Nicola Mendick, Bridget Young, Christopher Holcombe and Peter Salmon

Link to article here.

Context

Communication education has become integral to pre- and post-qualification clinical curricula, but it is not informed by research into how practitioners think that good communication arises.

Objectives

This study was conducted to explore how surgeons conceptualise their communication with patients with breast cancer in order to inform the design and delivery of communication curricula.

Methods

We carried out 19 interviews with eight breast surgeons. Each interview centred on a specific consultation with a different patient. We analysed the transcripts of the surgeons’ interviews qualitatively using a constant comparative approach.

Results

All of the surgeons described communication as central to their role. Communication could be learned to some extent, not from formal training, but by selectively incorporating practices they observed in other practitioners and by being mindful in consultations. Surgeons explained that their own values and character shaped how they communicated and what they wanted to achieve, and constrained what could be learned.

Conclusions

These surgeons’ understanding of communication is consistent with recent suggestions that communication education: (i) should place practitioners’ goals at its centre, and (ii) might be enhanced by approaches that support ‘mindful’ practice. By contrast, surgeons’ understanding diverged markedly from the current emphasis on ‘communication skills’. Research that explores practitioners’ perspectives might help educators to design communication curricula that engage practitioners by seeking to enhance their own ways of learning about communication.

DOI: 10.1111/medu.12648

Louder than words: power and conflict in interprofessional education articles

By Elise Paradis and Cynthia R Whitehead

Link to article here.

Context

Interprofessional education (IPE) aspires to enable collaborative practice. Current IPE offerings, although rapidly proliferating, lack evidence of efficacy and theoretical grounding.

Objectives

Our research aimed to explore the historical emergence of the field of IPE and to analyse the positioning of this academic field of inquiry. In particular, we sought to investigate the extent to which power and conflict – elements central to interprofessional care – figure in the IPE literature.

Methods

We used a combination of deductive and inductive automated coding and manual coding to explore the contents of 2191 articles in the IPE literature published between 1954 and 2013. Inductive coding focused on the presence and use of the sociological (rather than statistical) version of power, which refers to hierarchies and asymmetries among the professions. Articles found to be centrally about power were then analysed using content analysis.
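
As a loose illustration of what automated deductive screening for power-related content can look like, here is a minimal keyword-matching sketch; the keyword list and records are hypothetical and are not the authors' coding scheme.

```python
# Illustrative sketch of simple keyword coding over titles and abstracts,
# loosely in the spirit of the screening step described; everything here
# (records, terms) is invented for demonstration.
records = [
    {"title": "Power and hierarchy in interprofessional teams", "abstract": "..."},
    {"title": "Evaluating an IPE workshop for students", "abstract": "..."},
]

POWER_TERMS = {"power", "hierarchy", "conflict", "asymmetry", "dominance"}

def flag_power_related(record):
    """Flag a record if its title or abstract mentions any power-related term."""
    text = (record["title"] + " " + record["abstract"]).lower()
    return any(term in text for term in POWER_TERMS)

candidates = [r for r in records if flag_power_related(r)]
print(f"{len(candidates)} of {len(records)} records flagged for manual coding")
```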

Results

Publications on IPE have grown exponentially in the past decade. Deductive coding of identified articles showed an emphasis on students, learning, programmes and practice. Automated inductive coding of titles and abstracts identified 129 articles potentially about power, but manual coding found that only six articles put power and conflict at the centre. Content analysis of these six articles revealed that two provided tentative explorations of power dynamics, one skirted around this issue, and three explicitly theorised and integrated power and conflict.

Conclusions

The lack of attention to power and conflict in the IPE literature suggests that many educators do not foreground these issues. Education programmes are expected to transform individuals into effective collaborators, without heed to structural, organisational and institutional factors. In so doing, current constructions of IPE veil the problems that IPE attempts to solve.

DOI: 10.1111/medu.12668

A critical appraisal of instruments to measure outcomes of interprofessional education

By Matthew Oates and Megan Davidson

Link to article here.

Context

Interprofessional education (IPE) is believed to prepare health professional graduates for successful collaborative practice. A range of instruments have been developed to measure the outcomes of IPE. An understanding of the psychometric properties of these instruments is important if they are to be used to measure the effectiveness of IPE.

Objectives

This review set out to identify instruments available to measure outcomes of IPE and collaborative practice in pre-qualification health professional students and to critically appraise the psychometric properties of validity, responsiveness and reliability against contemporary standards for instrument design.

Methods

Instruments were selected from a pool of extant instruments and subjected to critical appraisal to determine whether they satisfied inclusion criteria. The qualitative and psychometric attributes of the included instruments were appraised using a checklist developed for this review.

Results

Nine instruments were critically appraised, including the widely adopted Readiness for Interprofessional Learning Scale (RIPLS) and the Interdisciplinary Education Perception Scale (IEPS). Validity evidence for instruments was predominantly based on test content and internal structure. Ceiling effects and lack of scale width contribute to the inability of some instruments to detect change in variables of interest. Limited reliability data were reported for two instruments. Scale development and scoring protocols were generally reported by instrument developers, but the inconsistent application of scoring protocols for some instruments was apparent.

Conclusions

A number of instruments have been developed to measure outcomes of IPE in pre-qualification health professional students. Based on reported validity evidence and reliability data, the psychometric integrity of these instruments is limited. The theoretical test construction paradigm on which instruments have been developed may be contributing to the failure of some instruments to detect change in variables of interest following an IPE intervention. These limitations should be considered in any future research on instrument design.

DOI: 10.1111/medu.12681

Team cohesiveness, team size and team performance in team-based learning teams

By Britta M Thompson, Paul Haidet, Nicole J Borges, Lisa R Carchedi, Brenda J B Roman, Mark H Townsend, Agata P Butler, David B Swanson, Michael P Anderson and Ruth E Levine.

Link to article here.

Objectives

The purpose of this study was to explore the relationships among variables associated with teams in team-based learning (TBL) settings and team outcomes.

Methods

We administered the National Board of Medical Examiners (NBME) Psychiatry Subject Test first to individuals and then to teams of Year 3 students at four medical schools that used TBL in their psychiatry core clerkships. Team cohesion was analysed using the Team Performance Scale (TPS). Bivariate correlation and linear regression analysis were used to analyse the relationships among team-level variables (mean individual TPS scores for each team, mean individual NBME scores of teams, team size, rotation and gender make-up) and team NBME test scores. A hierarchical linear model was used to test the effects of individual TPS and individual NBME test scores within each team, as well as the effects of the team-level variables of team size, team rotation and gender on team NBME test scores. Individual NBME test and TPS scores were nested within teams and treated as subsampling units.
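
For readers unfamiliar with hierarchical linear models, the sketch below shows a minimal random-intercept model with individuals nested within teams, in the spirit of the nesting described. The column names and simulated data are hypothetical, not the study's.

```python
# Minimal sketch of a hierarchical (mixed-effects) linear model with
# individuals nested within teams; all data and names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
teams = np.repeat(np.arange(40), 6)                 # 40 hypothetical teams of 6
indiv = rng.normal(70, 8, size=teams.size)          # individual test-style scores
team_effect = rng.normal(0, 3, size=40)[teams]      # shared team-level variation
team_score = 75 + 0.3 * indiv + team_effect + rng.normal(0, 2, size=teams.size)

df = pd.DataFrame({"team": teams, "indiv": indiv, "team_score": team_score})

# A random intercept per team captures the nesting of individuals in teams
model = smf.mixedlm("team_score ~ indiv", data=df, groups=df["team"]).fit()
print(model.summary())
```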

Results

Individual NBME test scores and individual TPS scores were positively and statistically significantly (p < 0.01) associated with team NBME test scores, when team rotation, team size and gender make-up were controlled for. Higher team NBME test scores were associated with teams rotating later in the year and larger teams (p < 0.01). Gender make-up was not significantly associated with team NBME test scores.

Conclusions

The results of an NBME Psychiatry Subject Test administered to TBL teams at four medical schools suggest that larger teams on later rotations score higher on a team NBME test. Individual NBME test scores and team cohesion were positively and significantly associated with team NBME test scores. These results suggest the need for additional studies focusing on team outcomes, team cohesion, team size, rotation and other factors as they relate to the effective and efficient performance of TBL teams in health science education.

DOI: 10.1111/medu.12636

Self-regulated learning in simulation-based training: a systematic review and meta-analysis

By Ryan Brydges, Julian Manzone, David Shanks, Rose Hatala, Stanley J Hamstra, Benjamin Zendejas and David A Cook

Link to the article here.

Context

Self-regulated learning (SRL) requires an active learner who has developed a set of processes for managing the achievement of learning goals. Simulation-based training is one context in which trainees can safely practise learning how to learn.

Objectives

The purpose of the present study was to evaluate, in the simulation-based training context, the effectiveness of interventions designed to support trainees in SRL activities. We used the social-cognitive model of SRL to guide a systematic review and meta-analysis exploring the links between instructor supervision, supports or scaffolds for SRL, and educational outcomes.

Methods

We searched databases including MEDLINE and Scopus, and previous reviews, for material published until December 2011. Studies comparing simulation-based SRL interventions with another intervention for teaching health professionals were included. Reviewers worked independently and in duplicate to extract information on learners, study quality and educational outcomes. We used random-effects meta-analysis to compare the effects of supervision (instructor present or absent) and SRL educational supports (e.g. goal-setting study guides present or absent).
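
As a rough illustration of the random-effects pooling that underlies a meta-analysis like this one, here is a minimal DerSimonian-Laird sketch; the effect sizes and variances are invented for demonstration and bear no relation to the study's data.

```python
# Minimal DerSimonian-Laird random-effects pooling sketch; inputs invented.
import numpy as np

effects = np.array([-0.5, -0.2, 0.1, -0.4])     # hypothetical per-study effect sizes
variances = np.array([0.04, 0.09, 0.06, 0.05])  # hypothetical sampling variances

w = 1.0 / variances                       # fixed-effect weights
fixed = np.sum(w * effects) / np.sum(w)   # fixed-effect pooled estimate
q = np.sum(w * (effects - fixed) ** 2)    # Cochran's Q heterogeneity statistic
dof = len(effects) - 1
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - dof) / c)            # between-study variance (DL estimator)

w_star = 1.0 / (variances + tau2)         # random-effects weights
pooled = np.sum(w_star * effects) / np.sum(w_star)
se = np.sqrt(1.0 / np.sum(w_star))
print(f"pooled effect = {pooled:.3f} (SE {se:.3f}), tau^2 = {tau2:.3f}")
```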

Results

From 11 064 articles, we included 32 studies enrolling 2482 trainees. Only eight of the 32 studies included educational supports for SRL. Compared with instructor-supervised interventions, unsupervised interventions were associated with poorer immediate post-test outcomes (pooled effect size: −0.34, p = 0.09; n = 19 studies) and negligible effects on delayed (i.e. > 1 week) retention tests (pooled effect size: 0.11, p = 0.63; n = 8 studies). Interventions including SRL supports were associated with small benefits compared with interventions without supports on both immediate post-tests (pooled effect size: 0.23, p = 0.22; n = 5 studies) and delayed retention tests (pooled effect size: 0.44, p = 0.067; n = 3 studies).

Conclusions

Few studies in the simulation literature have designed SRL training to explicitly support trainees’ capacity to self-regulate their learning. We recommend that educators and researchers shift from thinking about SRL as learning alone to thinking of SRL as comprising a shared responsibility between the trainee and the instructional designer (i.e. learning using designed supports that help prepare individuals for future learning).

DOI: 10.1111/medu.12649

Towards socio-material approaches in simulation-based education: lessons from complexity theory

By Tara Fenwick and Madeleine Abrandt Dahlgren

Link to article here.

Context

Review studies of simulation-based education (SBE) consistently point out that theory-driven research is lacking. The literature to date is dominated by discourses of fidelity and authenticity – creating the ‘real’ – with a strong focus on the development of clinical procedural skills. Little of this writing incorporates the theory and research proliferating in professional studies more broadly, which show how professional learning is embodied, relational and situated in social–material relations. A key question for medical educators is how to better prepare students for the unpredictable and dynamic ambiguity of professional practice; this has stimulated the movement towards socio-material theories in education that address precisely this question.

Objectives and Methods

Among the various socio-material theories that are informing new developments in professional education, complexity theory has been of particular importance for medical educators interested in updating current practices. This paper outlines key elements of complexity theory, illustrated with examples from empirical study, to argue its particular relevance for improving SBE.

Results

Complexity theory can make visible important material dynamics, and their problematic consequences, that are not often noticed in simulated experiences in medical training. It also offers conceptual tools that can be put to practical use. This paper focuses on concepts of emergence, attunement, disturbance and experimentation. These suggest useful new approaches for designing simulated settings and scenarios, and for effective pedagogies before, during and following simulation sessions.

Conclusions

Socio-material approaches such as complexity theory are spreading through research and practice in many aspects of professional education across disciplines. Here, we argue for the transformative potential of complexity theory in medical education using simulation as our focus. Complexity tools open questions about the socio-material contradictions inherent in SBE, draw attention to important material dynamics of emergence, and suggest practical educative ways to expand and deepen student learning.

DOI: 10.1111/medu.12638