Conversation Kickstarter #MedEdJ: When economics and medical education meet – what do trainees value?

The following is a guest post by e-Council member Ellie Hothersall, BSc, MBChB, MPH, MD, FRAI, FFPH – UK

Regarding the following article in the February issue of Medical Education:

When economics and medical education meet – what do trainees value?

I should start this by declaring an interest – I have briefly studied health economics at HERU, one of the two institutions involved in this paper, so perhaps it’s not surprising that I thought this was a suitable paper for commentary. That said, surely you would have to be very hard-hearted not to be interested in a combination of (health) economics and medical education? Perhaps not.

So, having outed myself as a geek, why should anyone else care about this work?

First, a quick summary of this kind of work, since it’s probably pretty far from most medical educators’ frame of reference: a discrete choice experiment (DCE) is a sort of thought experiment in which participants are asked to choose between a series of slightly imperfect options, normally presented in pairs, like Top Trumps cards, as shown in Figure 2 of the paper. Analysis then determines what influences decision-making. A financial element is usually included to allow calculation of “willingness to pay”, so that a monetary value can be attached to the different factors. This process allows the identification of both “push” factors (which drive people away) and “pull” factors (which draw them in). As the authors point out, very little investigation of these topics has been done, and what there is has been specific to recruitment into a specialty. This work is unique, and I would argue vital, because it looks at wider and more generic training considerations.
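For readers curious about the arithmetic behind “willingness to pay”, here is a minimal sketch. The coefficient values below are entirely made up for illustration – they are not the estimates reported in the paper. In a real DCE they would come from fitting a choice model (e.g. a conditional logit) to respondents’ answers; the monetary value of an attribute is then its utility coefficient divided by the coefficient on income.

```python
# Illustrative willingness-to-pay (WTP) calculation for a DCE.
# All coefficients are invented for illustration only, NOT taken
# from the paper under discussion.

# Hypothetical utility coefficients for each attribute, as if
# estimated from respondents' choices
coefficients = {
    "good_working_conditions": 0.90,
    "good_spouse_opportunities": 0.70,
    "preferred_location": 0.55,
}

# Hypothetical coefficient on income (utility per extra £1,000 of salary)
salary_coefficient = 0.02

def willingness_to_pay(attribute_coefficient, cost_coefficient):
    """The income change that would exactly offset gaining
    (or losing) the attribute."""
    return attribute_coefficient / cost_coefficient

for attribute, beta in coefficients.items():
    wtp = willingness_to_pay(beta, salary_coefficient)
    print(f"{attribute}: worth about £{wtp:,.0f}k of salary")
```

The ordering of the outputs mirrors the intuition in the paper: the larger an attribute’s coefficient relative to the income coefficient, the more salary a trainee would need to be offered to give it up.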

This study asked UK junior doctors whether they would trade income for different characteristics of a training post, identified as likely to be the key determinants:

  • Familiarity with hospital or unit
  • Geographical location
  • Opportunities for partner/spouse
  • Potential earnings
  • Clinical/academic reputation
  • Working conditions

“Good working conditions” had the greatest influence on the selection of a training position – trainees would seek nearly a 50% increase in salary to accept a move from “good” to “poor” conditions (note that neither term was defined). Interestingly, this was more important than good opportunities for a spouse/partner (38%), and both of these were more influential than location (31%). (Opportunities for a spouse/partner were more important for female trainees than male ones, in case there was any illusion that gender parity had been reached in medical education.) There were predictable interactions with age, dependants and so on, but the overall pattern was not really changed.

In the light of ongoing strikes over pay and conditions for junior doctors in England, does this help to reassure the reader that doctors are likely to put working conditions before pay? Or, conversely, should it be a warning to policy makers that doctors put a very high value on working conditions?

In discrete choice experiments there are always respondents who answer inconsistently, or who don’t answer at all – in the case of this research, I find myself wondering whether these could translate into people who might leave training rather than accept such unpalatable offers. Even accepting that this is not the case for all of them, I finished the paper wondering about the implications for the workforce – in the experiment it’s just a box to tick, but in reality this is the choice we offer doctors in training all the time. In real life, is the equivalent action to find a different career?

Conversation questions:

What other economic techniques could and should we be applying to medical education?

Is this pattern of preferences consistent with your own experience? Is this likely to be a UK-specific phenomenon or world-wide?

Can we use this research to alter the problems seen with recruitment in some apparently less attractive parts of the country? Similarly, can we use it to influence choice of career, to attract more trainees to less popular specialities?

Does this add anything to the work on spouses of doctors?

Purple Sticker Pilot Project #1: Interprofessional Education (IPE)

What follows is the first in a series of posts that form the conclusion of the Purple Sticker Pilot Project, an initiative designed to highlight hot topics of discussion at the 2015 ASME Annual Scientific Meeting and continue the conversation after the event. This blog post contains the first of three abstracts for presentations that generated extensive discussion and engagement at the conference; you will then find a number of links to related journal articles from Medical Education and The Clinical Teacher that provide a basis for further reading around the topic. These articles are all free to access for the next month or so. Please have a look through the blog and related articles then share your thoughts, either in the comments section below or on Twitter.

Exploring the nature of interprofessional education (IPE) provided in education and training
programmes in the United Kingdom. ASME Annual Scientific Conference 2015.
A Morris, C Bücher, K Maddock, R Meredith, A Pooler, SM Shardlow. Contact: 

Interprofessional education (IPE) is a core part of the education of medical students and helps prepare them to engage in multi-professional team working. This is clearly indicated as an undergraduate training outcome by the General Medical Council in Tomorrow’s Doctors (2009) and in Good Medical Practice (2013). IPE has been defined by the Centre for the Advancement of Interprofessional Education (CAIPE) as occurring “when two or more professions learn with, from and about each other to improve collaboration and the quality of care” (2013). It is therefore vital that the multiple professional regulatory bodies have similar requirements for engaging their students with IPE, to ensure that this process occurs consistently. There is currently no consistent framework for IPE within the organisations regulated by the Health and Care Professions Council (HCPC). This study was therefore commissioned by the HCPC to review current IPE provision within its professions and to draw insight from current good practice as a basis for future recommendations.

Data have been collected through a variety of methods: a systematic review of the current literature on IPE provision within HCPC-registered professions; telephone interviews with 20 selected individuals with relevant experience of IPE provision within the HCPC and higher education institutions (HEIs); an online survey of all HCPC-registered training providers (n=250); and case studies in five organisations identified as having good, established IPE practice.

Analyses are being conducted using SPSS and NVivo, both to gain an improved understanding of the extent and nature of IPE and to identify and analyse the different types of IPE activity in education and training. The study has also explored the nature of the evidence about the impact of IPE. As part of the research, a taxonomy of IPE has been developed and will be discussed.

The presentation will aim to show how this research can provide a central knowledge base from which IPE in undergraduate medical, HCPC and other professional education will benefit.

Related Articles

Hardisty, J., Scott, L., Chandler, S., Pearson, P. and Powell, S. (2014). Interprofessional learning for medication safety. The Clinical Teacher, 11: 290–296.

Joseph, S., Diack, L., Garton, F. and Haxton, J. (2012). Interprofessional education in practice. The Clinical Teacher, 9: 27–31.

Newton, C., Wood, V. and Nasmith, L. (2012). Building capacity for interprofessional practice. The Clinical Teacher, 9: 94–98.

Paradis, E. and Whitehead, C.R. (2015). Louder than words: power and conflict in interprofessional education articles, 1954–2013. Medical Education, 49: 399–407.

Stewart, M., Kennedy, N. and Cuene-Grandidier, H. (2010). Undergraduate interprofessional education using high-fidelity paediatric simulation. The Clinical Teacher, 7: 90–96.

Thistlethwaite, J. (2012). Interprofessional education: a review of context, learning and the research agenda. Medical Education, 46: 58–70.

Further abstracts from the conference are available at this link.


Conversation Kickstarter #MedEdJ: Collaborative Learning: Hope or Hype for Medical Education?

The following is a guest post by e-Council member Robert Cooney MD, MSMedEd, RDMS, FAAEM, FACEP

Collaborative Learning: Hope or Hype for Medical Education?

There appears to be a trend within medical education away from the traditional lecture-based teaching format and towards small-group classroom activities, such as problem-based learning, team-based learning, and various blended formats. As this occurs, both teachers and learners must develop new skills to make the experience worthwhile. Collaboration is one such skill. In fact, it is one of the “4 Cs” – creativity, critical thinking, communication, and collaboration – frequently cited as skills that students must develop to be successful. While evidence exists to support collaborative learning in simulated settings, its use in the teaching and learning of clinical skills is less well understood.

In this month’s issue of Medical Education, Tolsgaard, Kulasegaram and Ringsted set out to provide an overview of collaborative learning of clinical skills in health professions education. They begin with an overview of the pedagogical basis of collaborative learning through the lenses of the social interactive, motor-skills learning and cognitive perspectives. They then introduce several hypotheses (page 73):

  1. Outcomes of collaborative learning of clinical skills depend on the quality of interaction between learners and the level of positive interdependence
  2. Positive effects on learning are mediated through improved self-efficacy, motivation, and social support
  3. Beneficial effects of collaborative learning of clinical skills rely on action imitation and understanding gained through interchanging periods of observation and actual hands-on experience
  4. Reduced hands-on experience may impair clinical skills automaticity during later stages of learning
  5. Collaborative learning of clinical skills is effective for complex learning tasks or skills characterized by high cognitive load, but less effective or ineffective for simple skills associated with low cognitive load
  6. Interaction between learners enables scaffolding and cognitive co-construction, but primarily in cases in which processing can be communicated and observed

In their final section, the authors return to the medical education literature to review the empirical evidence supporting or refuting the above hypotheses, and were thus able to develop the following mid-level theoretical model (page 76):

“In this model, we adopted the notion that the effectiveness of collaborative learning of clinical skills decreases with time on task and that learning outcomes are affected by several factors. Factors that may affect learning positively include improved self-efficacy and confidence, and the benefits derived from observation, cognitive co-construction, and scaffolding.  By contrast, reductions in hands-on experience, as well as task processing that cannot be observed or communicated, may impair the effects of collaborative learning of clinical skills.”

So what are the implications of this theory for our day-to-day teaching?  Consider the following:

  1. With the reduction in resident work hours limiting the possible cases for learners, is it possible to use collaborative learning to accelerate clinical skills acquisition? Why or why not? If so, how?
  2. Is it possible to provide cognitive scaffolding for clinical skills without relying on collaborative learning?
  3. The authors note that automaticity of clinical skills is expected during learning; however, this contrasts with K. Anders Ericsson’s concept of deliberate practice, in which automaticity represents arrested development. Is it possible that allowing more time within collaborative peer groups would promote continued development through deliberate practice?


Tolsgaard, M. G., Kulasegaram, K. M., & Ringsted, C. V. (2016). Collaborative learning of clinical skills in health professions education: the why, how, when and for whom. Medical Education, 50(1), 69-78.

Ericsson, K. A. (2004). Deliberate practice and the acquisition and maintenance of expert performance in medicine and related domains. Academic medicine, 79(10), S70-S81.

Announcing upcoming celebration of 50 years of Medical Education #MedEdJ

This post is a reprint from the introduction written by the Editor-in-Chief to the new web content celebrating 50 years of the journal Medical Education, located at
Gold. It’s a standard many try to achieve. It’s a marker of excellence, a symbol of wisdom, and it’s considered to radiate vitality. It is, therefore, a fitting colour for the anniversary Medical Education is proud to celebrate in 2016. Published as the British Journal of Medical Education for its first 9 volumes (and 10 years), the journal reaches a landmark with its 50th volume, which provides a valuable opportunity to take stock.

Flipping through the archives, it is clear that Medical Education has published some truly outstanding and seminal work. Of course, not every claim has stood the test of time. There is danger, though, in treating historical studies or approaches to research simply as amusing naivety. I see every piece of work that has been published in our journal as a stepping stone that influenced where we stand today, just as our current work offers stepping stones for where the field will move over the next 50 volumes. Health professional education would be a very uninspiring place if every idea put forward was guaranteed to withstand the sands of the hourglass.

Throughout 2016 we will offer many opportunities to revisit the history of Medical Education through the print journal and electronic resources such as this PDF. In the pages that follow we have gathered information on seminal moments, key statistics, and the people who built the field into what it is today. It was a considerable challenge to pull this information together, met only through the efforts of Rosie Hutchinson, Neil Burling and Louise Corless at Wiley, James Arnold at Bigtop Design, and Sue Symons, our Editorial Manager. Their effort is our reward, however, as it led to an enlightening and unique resource. We hope you enjoy using your mouse to take a scroll through history, and we look forward to your contributions to the next 50 volumes.
You can also watch our anniversary video here.
Kevin W. Eva
Editor in Chief

Conversation Kickstarter #MedEdJ: Bias and experience: How persistent are your innate attitudes?

The following is a guest post by e-Council member Helen Wozniak, Academic Director (e-Learning, Evaluation and Research) for Flinders University in the Northern Territory.

As much as we don’t wish to admit it and try to adjust for it, our unconscious biases may influence our professional behaviour.

The complex issue of prejudice and stereotyping is investigated by Galli et al. (2015) in the research paper “Don’t look at my wheelchair! The plasticity of a long-lasting prejudice”, published in the December issue of Medical Education. The investigators explored attitudes to wheelchair users in three groups: wheelchair users; health professionals (physical therapists) working with spinal cord injury patients who use wheelchairs; and age- and gender-matched healthy individuals with no experience of wheelchair users. Not surprisingly, the health professionals showed a more positive attitude towards wheelchair users, with a correlation between positive attitudes and increasing years of experience working with spinal cord injury patients. Of particular interest was that wheelchair users, while holding positive explicit attitudes towards their own group, still held negative implicit attitudes about wheelchair users, no matter how long they had been wheelchair users themselves.

The researchers then implemented two different interventions with healthy individuals (with no prior experience of wheelchair users) to establish whether exposure to wheelchair users influenced biases towards them. The first intervention involved spending 45 minutes interacting with a wheelchair user as they carried out their daily activities (active intervention). A second group of participants viewed a video describing the life of a wheelchair user (passive intervention). Before the intervention both groups showed the same implicit wheelchair bias; after the intervention, the bias of those who participated in the passive intervention did not change, while those who engaged actively with the wheelchair user showed a marked reduction in wheelchair bias. These results could be considered a confirmation of experiential learning theory,[1,2] which explains how real-world experiences can transform personal meanings. So what are the implications for the education of health professionals?

Consider these questions:

  1. In the research, direct experience with a wheelchair user (both long term, in the case of the physical therapists, and a short 45 minutes of interaction, in the case of other individuals) reduced bias towards wheelchair users. Why? Is there an explanation other than experiential learning theory? Where does that leave passive forms of education? Should we abandon such teaching approaches in favour of immersive educational experiences that could change our students’ implicit biases?
  2. Is it enough to promote a short experience, such as the active intervention described in this research? How do we know if these experiences lead to longer-term changes?
  3. Would the same results emerge if less obvious biases, such as gender, race or social disadvantage, are studied in the same way?
  4. What does your school / clinical environment do to encourage your students / fellow workers and yourself to recognise and confront such biases? It could be interesting to ask students to take a similar test to that used in the research (the Implicit Association Test) and discuss their results:


  1. Kolb, D. A. (1984). Experiential learning: Experience as the source of learning and development. Englewood Cliffs, NJ: Prentice Hall.
  2. Maudsley, G. & Strivens, J. (2000). Promoting professional knowledge, experiential learning and critical thinking. Medical Education, 34(7): 535–544.

A date for your diaries: The Clinical Teacher (#ClinTeach) Virtual Journal Club

In December 2015 we will be running a 24 hour Twitter-based virtual journal club (VJC) discussing the following paper in The Clinical Teacher:

Billett, S. (2015), Readiness and learning in health care education. The Clinical Teacher. doi: 10.1111/tct.12477

The publisher has made this article available for free for a limited time so you can read it without requiring a personal or institutional journal subscription.

The article outlines the concept of learner readiness as it applies to health care education, including its clinical aspects.

How the Virtual Journal Club will work:

The overriding aim is to facilitate discussion about the issues addressed within the paper. As with previous journal clubs for its sister journal, Medical Education, we at The Clinical Teacher invite you, the participants, to discuss the questions below in the blog comments in order to get the discussion going prior to the event itself. You can simply ‘comment’ on this post.

The VJC will begin via Twitter at 08:00 Australian EST on 15 Dec 2015 (22:00 London GMT on 14 Dec; 17:00 American EST on 14 Dec). During that time, the author (@StephenBillett3) will be checking the discussion thread periodically in order to respond and discuss further. The Twitter hashtag for the discussion will be #ClinTeachJC – include it in all your tweets to join the discussion, or simply follow the hashtag to see the conversation as it evolves. You can either discuss your own or other people’s answers to the questions below, or just pose any questions you may have to the author or the discussion participants.

We hope to see you online in December. If you’re unable to make the VJC then feel free to catch up/join in with the discussion afterwards; just search #ClinTeachJC on Twitter. We will also be posting a summary of the Twitter discussion back to this blog for further comments.


  1. If you have experienced a lack of readiness for a work task, what were its consequences for your work?
  2. How can we assess readiness?
  3. How can we promote readiness for practice for students and junior practitioners in your field?
  4. Developing conceptual, procedural and dispositional readiness likely takes different forms. What has been your experience of these forms of development?

Virtual Journal Club #ClinTeach

In an article in the recent issue of The Clinical Teacher, Oliphant et al. describe their experience with a Virtual Journal Club (VJC). The authors include some nice Kirkpatrick[i] level 1 evidence of effectiveness. Hopefully, this will give some of you (blog readers, clinical teachers and medical researchers) some ideas on how to approach evaluating an upcoming Virtual Journal Club to be hosted by The Clinical Teacher on 14-15 December, 2015. There are many opportunities for scholarship in this area and era of ‘e-everything’, and the ASME/Wiley journals Medical Education and The Clinical Teacher not only aim to facilitate the reporting and sharing of the science and practices in health professions education, but also to provide opportunities for novel contributions to the field(s). I encourage you to participate in the upcoming Virtual Journal Club and, if interested, evaluate its effectiveness. You can reply to this blog post to express your interest and potentially discover a collaborator to perform some inter-institutional and possibly international collaborative work.

[i] Kirkpatrick, D.L. & Kirkpatrick J.L. (2006). Evaluating training programs (3rd ed.). San Francisco: CA. Berrett-Koehler Publishers, Inc.

Conversation Kickstarter #ClinTeach: Just what is common sense? How do we help our students to develop it?

Catherine Haines, Assistant Professor of Medical Education at the University of Nottingham Medical School contributes our conversation kickstarter guest blog post.

Two articles in the current issue of The Clinical Teacher (#ClinTeach) explore how we may help learners to gain more from their clinical experiences and got me thinking about common sense. We often take it for granted, and often bemoan its absence. What is common sense for learners in a clinical environment? How can we make it more common among our learners and trainees?

Common sense has been defined as ‘sound practical judgment that is independent of specialised knowledge, training, or the like; normal native intelligence’.

We certainly select learners who have high intelligence. We give them specialised knowledge and training, but how do we support the development of ‘sound practical judgment’? Where does it end? Do we expect our learners to demonstrate sound practical judgement in all areas of life? Certainly, I am still challenged on occasion.

Consider the following key domains where common sense or practical judgment certainly impacts on work performance:

Are you always:

  • Able to monitor your surroundings and take appropriate action to keep safe (weather, other people’s needs and moods, risks and dangers, etc.)?
  • Able to live within the limitations of your own body – stress, sleep, exercise, self-care, good nutrition?
  • Able to mend and maintain necessary equipment (including your home, transport)?
  • Able to analyse and avoid repeating mistakes and predict and avoid possible negative consequences?
  • Able to plan and manage resources, such as money and time?

Let’s stick with the first two: monitoring your surroundings and keeping safe and managing the limitations of your own body, especially in responding to stress and distractions.

In Innovative teaching in situation awareness, Gregory, Hogg and Ker describe an early, low-cost intervention that seeks to develop learners’ skills in their first year. Groups of ten medical students worked together in an empty hospital bay to gather information, interpret it and anticipate future consequences. As beginner Sherlock Holmeses, they investigated four different scenarios and identified hazards and cues. Slippers by the bed? A possible trip hazard for some mobility levels. A medicine prescription chart? What can we tell about this patient? Are they acutely unwell? A tutor helped draw out interpretations and encouraged learners to formulate future consequences.

I think this has potential. Early exposure to deducing and applying observations from the clinical environment could prime learners to become more proactive. Portfolios do encourage learners to seek certain types of experience, but sometimes fall short of encouraging deeper processing. Students often only see the point of the reflective cycle later in their career.

Do you think a novice can be trained to develop situational awareness or can that only come with experience? How else could we continue this?

In Student views of stressful simulated ward rounds, Ian Thomas considers a high-fidelity simulated ward round for final-year medical students – and then sees what happens when he adds more stress! The ‘patients’ and their surroundings simulated an impressive array of potentially dangerous outcomes for learners to trap and mitigate as they conducted appropriate clinical tasks. In addition, six time-critical interruptions were introduced: a ward radio, a doctor’s pager, a phone call, a distressed relative, a cleaner doing the vacuuming and an ad-hoc prescription task. Which do you think the students found most distracting? I’ll leave it to you to read the article and consider how common sense and experience help us manage distractions and stress, and so enhance safety.

For how long will a novice inevitably experience stress in an unfamiliar clinical environment? How could we encourage learners to become more resilient and able to rise above stress?

On reflection, perhaps common sense isn’t so common after all.

Conversation Kickstarter #MedEdJ: What do you do for your Flipped Classroom?

The following is a guest post by Dr Teresa Chan, Assistant Professor at McMaster University.

In the paper by Khanova et al. (2015) in the October issue of Medical Education, investigators explored student perceptions of learning experiences across multiple flipped classroom activities.

From my personal experience, I think that the flipped classroom model makes sense: by shifting content delivery to preparatory work, it ensures that students arrive better prepared to take full advantage of classroom time, where they can interact directly with the teacher.

In this paper the authors raise the importance of the “active” portion of the flipped classroom model. Several students in the study acknowledged that the concept was good, but noted that the strength of the educational experience depends on how the classroom phase is implemented. Some have previously accused the flipped classroom model of being a product of poor teaching practices in our existing medical education culture, but I think that selecting a worthwhile and enriching experience for the classroom phase can augment learning substantively. In line with Kolb’s theory of experiential learning, creating a good-quality experience for learners is important, as it has the potential to enhance engagement, which in turn often assists learning.

Methods used locally

In my years as a senior resident and now as a junior faculty member, I have used the flipped classroom a number of different times. My colleagues have also used these techniques increasingly. At McMaster University’s Emergency Medicine residency, we have used the following strategies in the past few years:

  • Group activities – creation of teaching materials by students (e.g. podcast scripting, vodcast scripting)
  • Group debate / discussion
  • Case-based discussions
  • Cooperative learning activity with discussion and debrief
  • High fidelity simulation
  • Mass Casualty simulation (i.e. Large Scale simulation with >30 patients and >15 learners)
  • Table top simulation exercises
  • “Pimp the expert”, where the students get to ask the teacher questions based on the prep work in order to clarify concepts

Stella Yiu (creator of the FlippedEMclassroom) has also written about this previously on a medical education-related blog, providing her own take on materials that might be useful for those planning a flipped classroom activity.

The Wisdom of the #MedEdJ community: A Crowdsourcing Opportunity

I’m wondering if those in the journals Medical Education and The Clinical Teacher blog community might be interested in sharing their lessons learned about the flipped classroom model.

  • Teachers – Have you run a flipped classroom experience? If so, what did you plan?
  • Students – Have you been subject to one? If so, were your problems similar to those listed by the students in Khanova’s paper?

Please join us in sharing your experiences with our virtual community of practice.