Conversation Kickstarter #ClinTeach: Just what is common sense? How do we help our students to develop it?

Catherine Haines, Assistant Professor of Medical Education at the University of Nottingham Medical School, contributes our Conversation Kickstarter guest blog post.

Two articles in the current issue of The Clinical Teacher (#ClinTeach) explore how we can help learners gain more from their clinical experiences, and they got me thinking about common sense. We often take it for granted, and just as often bemoan its absence. What is common sense for learners in a clinical environment? How can we make it more common among our learners and trainees?

Common sense is defined as ‘sound practical judgment that is independent of specialised knowledge, training, or the like; normal native intelligence’.

We certainly select learners who have high intelligence. We give them specialised knowledge and training, but how do we support the development of ‘sound practical judgment’? Where does it end? Do we expect our learners to demonstrate sound practical judgment in all areas of life? Certainly, I am still challenged on occasion.

Consider the following key domains where common sense or practical judgment certainly impacts on work performance:

Are you always:

  • Able to monitor your surroundings and take appropriate action to keep safe (weather, other people’s needs and moods, risks and dangers, etc.)?
  • Able to live within the limitations of your own body – stress, sleep, exercise, self-care, good nutrition?
  • Able to mend and maintain necessary equipment (including your home and transport)?
  • Able to analyse and avoid repeating mistakes and predict and avoid possible negative consequences?
  • Able to plan and manage resources, such as money and time?

Let’s stick with the first two: monitoring your surroundings and keeping safe and managing the limitations of your own body, especially in responding to stress and distractions.

In Innovative teaching in situation awareness, Gregory, Hogg and Ker describe a low-cost intervention that seeks to develop learners’ skills early in their first year. Groups of ten medical students worked together in an empty hospital bay to gather information, interpret it and anticipate future consequences. Playing beginner Sherlock Holmes, they investigated four different scenarios and identified hazards and cues. Slippers by the bed? A possible trip hazard for some mobility levels. A medicine prescription chart? What can we tell about this patient? Are they acutely unwell? A tutor helped draw out interpretations and encouraged learners to formulate future consequences.

I think this has potential. Early exposure to deducing and applying observations from the clinical environment could prime learners to become more proactive. Portfolios do encourage learners to seek certain types of experience, but sometimes fall short of encouraging deeper processing. Students often only see the point of the reflective cycle later in their career.

Do you think a novice can be trained to develop situational awareness or can that only come with experience? How else could we continue this?

In Student views of stressful simulated ward rounds, Ian Thomas considers a high-fidelity simulated ward round for final-year medical students, and then sees what happens when he adds more stress! The ‘patients’ and their surroundings simulated an impressive array of potentially dangerous outcomes for learners to trap and mitigate as they conducted appropriate clinical tasks. In addition, six time-critical interruptions were introduced: a ward radio, a doctor’s pager, a phone call, a distressed relative, a cleaner doing the vacuuming and an ad-hoc prescription task. Which do you think the students found the most distracting? I’ll leave it to you to read the article and consider how common sense and experience help us manage distractions and stress, and so enhance safety.

For how long will a novice inevitably experience stress in an unfamiliar clinical environment? How could we encourage learners to become more resilient and able to rise above stress?

On reflection, perhaps common sense isn’t so common after all.

Conversation Kickstarter #MedEdJ: What do you do for your Flipped Classroom?

The following is a guest post by Dr Teresa Chan, Assistant Professor at McMaster University.

In the paper by Khanova et al. (2015) in the October issue of Medical Education, investigators explored student perceptions of learning experiences across multiple flipped classroom activities.

From my personal experience, the flipped classroom model makes sense: by pushing the content-delivery phase into preparation work, students arrive better prepared to take full advantage of classroom time, where they can interact most productively with the teacher.

In this paper the authors raise the importance of the “active” portion of the flipped classroom model. Multiple students in the study acknowledged that the concept was good, but noted that the strength of the educational experience depends on how the classroom phase is implemented. Some have previously criticised the flipped classroom model as a product of poor teaching practices in our existing medical education culture, but I think that selecting a worthwhile and enriching experience for the classroom phase of a flipped classroom can augment learning substantively. This aligns with Kolb’s theory of experiential learning: creating a good-quality experience for learners is important, as it has the potential to enhance engagement, which in turn often assists with learning.

Methods used locally

In my years as a senior resident and now as a junior faculty member, I have used the flipped classroom a number of different times. My colleagues have also used these techniques increasingly. At McMaster University’s Emergency Medicine residency, we have used the following strategies in the past few years:

  • Group activities – creation of teaching materials by students (e.g. podcast scripting, vodcast scripting)
  • Group debate / discussion
  • Case-based discussions
  • Cooperative learning activity with discussion and debrief
  • High fidelity simulation
  • Mass-casualty simulation (i.e. large-scale simulation with >30 patients and >15 learners)
  • Table top simulation exercises
  • “Pimp the expert”, where students ask the teacher questions based on the prep work to clarify concepts

Stella Yiu (creator of the FlippedEMclassroom) has also written about this previously on a medical education blog, providing her own take on materials that might be useful for those planning a flipped classroom activity.

The Wisdom of the #MedEdJ community: A Crowdsourcing Opportunity

I’m wondering whether those in the Medical Education and The Clinical Teacher blog community might be interested in sharing their lessons learned about the flipped classroom model.

  • Teachers – Have you run a flipped classroom experience? If so, what did you plan?
  • Students – Have you been subject to one? If so, were your problems similar to those listed by the students in Khanova’s paper?

Please join us in sharing your experiences with our virtual community of practice.

Kudos: A new way for authors in Medical Education and The Clinical Teacher to increase readership and impact

The following is a guest blog post from Rosie Hutchinson, Senior Journals Publishing Manager at Wiley.

Kudos is a web-based service that helps authors explain, enrich and share their published work for greater readership and improved impact. It also provides direct access to a publication dashboard, enabling authors to measure the effects of their actions across a wide range of metrics.

Wiley’s recent partnership with Kudos now renders the service free for all authors publishing in #MedEdJ and #ClinTeach. Registering with Wiley Author Services and opting into the mailing list will streamline the process. Notification emails sent to those registered with Wiley Author Services contain a link for them to “claim” their article in Kudos. Authors who are either unregistered or published their article prior to 2014 can claim authorship in Kudos by searching for articles by DOI, article title or author name.

Once authors have claimed their articles, they are led through the following steps: 

  • Explain articles by adding lay summaries and highlighting what makes the work important.
  • Enrich articles by adding links to related resources that put the research into context.
  • Share articles via email and social media, while Kudos links the work across search engines and subject indexes.

All of this produces many benefits for authors with a very limited time investment required:

  • Discoverability and Impact – Increases the likelihood of their articles being found, read, and cited.
  • Publication Metrics – Provides direct access to usage, citations, and altmetrics for their articles.
  • Networking – Encourages interactions that build relationships and visibility within their communities.

You can have a look at this introductory video and blog post for more info.

Conversation Kick-starter #MedEdJ: A curious case of the phantom professor

The following is a Conversation Kick-starter guest blog post contributed by e-Council member Dr. Ellie Hothersall, pertaining to the article in the September 2015 issue of Medical Education:

A curious case of the phantom professor: mindless teaching evaluations by medical students

When I attend a teaching session of any kind, I try to be diligent with feedback. In fact, I try to give feedback whenever it’s asked for, although in these days of online shopping it’s all getting a bit much. However, there are times when I perhaps pay less attention than I should, particularly if the session was unremarkable, and maybe I too have been guilty of ticking 4/5 for everything and running out of the room. I know I worry that other people do that to me.

Now my anxiety is justified.

I do not know how Uijtdehaage and O’Neal came up with their ingenious study idea, but it certainly deserves some sort of award for cleverness. When asking students to evaluate a series of lectures, they included a fictional lecturer and lecture in the middle of the list. Sixty-six per cent of students still gave the lecturer a rating (bear in mind that “n/a” was an option), falling to 49% when a photograph of the fictional lecturer was provided.

Let’s just think about that: very nearly half of students rated a tutor’s session despite it not existing, even with the visual reminder of the tutor’s face. Some students even left free text comments (positive and negative).

Part of me finds this whole concept utterly impossible – how could so many students just not know who had taught them? Why would they make up feedback? And part of me knows how easy that would be to do: medical school teaching is a carousel of tutors and sessions; often you never see that face again, and some sessions are unmemorable.

Why does this matter? Well, for one thing we place a lot of reliance on student feedback. To quote the paper: “A thoughtful and conscientious approach to [teaching evaluations] by medical students is critical, as their evaluations may impact both the careers of faculty members and the quality of instruction.” And yet the practice is evidently flawed. Similarly, it can be difficult to convince colleagues to change their practice if the feedback received is untrustworthy. In a letter on “feedback fatigue”, prompted by a similar type of event, Fysh tells us that in Wales fire alarms in public buildings have been replaced with sprinklers because “no one pays attention to fire alarms any more” – when a warning signal is dismissed as noise, it’s time to change.

In the United Kingdom students are required to participate in the evaluation of the teaching they receive as part of Good Medical Practice. I had been thinking that it was unprofessional not to leave feedback after teaching. Now I’m wondering whether the reverse is worse.

So here are my questions for discussion:

Feedback collected immediately after a teaching session obviously does not suffer from the “phantom professor” problem. But is the feedback provided then more valuable? Less?

Does feedback or evaluation of teaching need reflection time built in? Would the “mindful evaluation” improve with discussion? Uijtdehaage and O’Neal propose “prospective evaluation” through designated students – would that help or just add to the burden for some students?

If the lack of impact of that evaluation, as alluded to by Uijtdehaage and O’Neal, means students don’t take it seriously, how do we fix that?

If this is the death knell of the Likert scale, what should we use instead?



Fysh T. Feedback fatigue. The Clinical Teacher, Volume 8, Issue 4, p. 288.

Dishman L. Retailers: Your Surveys Are Making Customers Suffer. Accessed 16 July 2015.

General Medical Council. Medical students: professional values and fitness to practise. Accessed 16 July 2015.

Invitation to contribute to the 50th Anniversary special issue of Medical Education #MedEdJ

Deadline 30 September 2015

To celebrate the 50th anniversary of the publication of Medical Education we are pleased to announce a special issue aimed at demonstrating the liveliness of our field. Part of the inspiration for this special edition came from a desire to discount the claims made in a recent article that described academics as:

   ‘Serious people with no social lives’

This is certainly not true of medical educationalists, as there is a rich history of lightheartedness and good humour in our realm, as evidenced by a paper published in Medical Education in 1999 entitled The Objective Santa Christmas Examination (OSCE). The paper was conceived during a convivial evening at a meeting of the Association for the Study of Medical Education (ASME). That papers were being planned during such an evening might, come to think of it, be used to support the above quote. Nonetheless, we are of the firm belief that many interesting ideas arise in those contexts but never see the light of day. Sometimes that is for the better, as sober second thought settles in, but sometimes it is simply because the participants in the discussion do not pursue the idea for lack of clear opportunities for dissemination. Just as good comedians cast a light on society that has the potential to reveal new insight, we think there is much to be gained from taking a more whimsical look at health professional education and providing a voice to the playful demons within the academics amongst us.

To that end, we would like to invite you to submit the papers you’ve always wanted to write. More specifically, we’re looking for papers on quirky topics of relevance to the lives and practices of health professional educators and trainees. Novel empirical work is welcomed, but so are well-crafted essays that place a new and generative spin on well-known topics and findings. Priority will be given to papers that amuse and entertain. Articles should be no more than 1,500 words and must be received by 30 September 2015. Queries can be sent to Trudie Roberts (Guest Editor) or Kevin Eva (Editor-in-Chief) by email.

Reference: 1. Wood D, Roberts T, Bradley P, Lloyd D, O’Neil P. ‘Hello, my name is Gabriel, I am the house officer, may I examine you?’ or The Objective Santa Christmas Examination (OSCE). Medical Education 1999; 33: 915-919.

The Purple Sticker Pilot Project

Hi all,

My name is Andrew Tabner and I’m this year’s e-Editor Intern for Medical Education and The Clinical Teacher. I’m delighted to be working with Dr Joshua Jacobs, the e-Editor, and the rest of the team at the journals; you’ll be hearing more from me over the coming months.

I’d like to take this opportunity to fill you in on an exciting new initiative we’re running – the “Purple Sticker Pilot Project.”

ASME (one of the journals’ “parent” organisations) recently held its annual scientific meeting in Edinburgh, UK; this focused on innovations and developments on the leading edge of medical education.

Session chairs at the conference were asked to make note of “hot topics” – papers or sessions that generated significant discussion or debate, ran out of time for questions, or presented a particularly interesting new development. They identified these sessions with a purple sticker in their programmes to highlight them to the organisers for future consideration.

Twenty-seven sessions earned a purple sticker and a few of these will be highlighted for continued discussion in this online forum. This discussion will hopefully include the speakers/authors of the session, and perhaps the session chairs, and will make reference to previously published supporting literature from the two journals. We’re currently working with the journals’ Editors in Chief to identify relevant articles and there will be future guest blog posts about the issues raised.

So watch this space and keep an eye on the journals’ Twitter hashtags (#MedEdJ and #ClinTeach) and accounts (@ClinicalTeacher and @asmeofficial) for further updates. We’d love you to join us to continue some of the fantastic discussions that took place at the ASME meeting, whether you were present for the initial meeting or not!



Conversation Kickstarter #MedEdJ: Measuring Cognitive Load

This guest blog post was contributed by e-Council member Robert R. Cooney, MD, MSMedEd, RDMS, FAAEM, FACEP.

How do humans learn?  This question has prompted the exploration of multiple theories within the psychological and educational literature.  Cognitive load theory (CLT) is one such framework.  CLT posits that learning (and performance) becomes impaired when the learner exceeds his/her working memory capacity.  CLT also identifies three types of “load” that may arise during training: intrinsic (task complexity), extraneous (instructional design), and germane (cognitive resource use).

In this month’s issue of Medical Education, Haji et al. set out to apply CLT to simulation, specifically trying to determine whether standardised measures (subjective rating of mental effort and simple reaction time) are sensitive to intrinsic cognitive load when novice learners face a simple versus a complex surgical knot-tying exercise. The article is Measuring cognitive load: performance, mental effort and simulation task complexity.

Reading through their design, I am impressed by the complexity of measuring cognitive load.  The authors explain that, “the theory has recently come under intense scrutiny because many existing measures of CL cannot differentiate between the three types of load.”  From the standpoint of this study, the investigators attempted to hold extraneous and germane load constant, thus measuring only intrinsic load.

While this can be done from a research perspective, I don’t believe that it is as easily controllable in the “messiness” that is the medical classroom.  What this investigation does offer is an approach to considering how CL can be moderated in a simulated setting.

My questions for you:

Have you incorporated CL into instructional design?

If so, how have you modified instructional delivery to decrease extraneous load?

What about intrinsic load?

Please share with us your thoughts!

#ClinTeach Conversation Kickstarter | Standard setting in the age of CBME

This month’s Clinical Teacher’s Toolbox is by Dr. Richard Hays of Tasmania, Australia. Dr. Hays’ summary of standard-setting procedures is most definitely a worthy addition to most graduate courses on assessment, and yet, as an educator working on competency-based medical education (CBME) projects, I found that this article made me think:

What and how are we setting standards in the age of CBME?

As the world moves towards outcome-based medical education, this is a critical question to consider. Traditional psychometric methods (including those mentioned in Dr. Hays’ paper: Ebel, Angoff, Hofstee) require population-level data to ‘set standards’. In the K-12, university-level and possibly undergraduate medical education realms where these techniques have traditionally been used, they have been helpful in determining passing grades.

That said, these methods are based primarily on some key assumptions – most specifically, on the idea that a post-hoc analysis of data on a given population (or an aggregate analysis of historical data from similar populations) can be used to set a standard. And perhaps, if there is no clear outcome, this is the best way to do it… But in the age of outcomes, where do these standard-setting procedures now belong?

The achievement of the outcome IS the standard, no?

Please share with me your thoughts below! I have pondered this question quite extensively and I’m not sure what the answer is…

Conversation Kickstarter #MedEdJ: Can spaced testing help students manage their study time and improve their marks?

Contributed by e-Council member Karen Scott, PhD

Can spaced testing help students manage their study time and improve their marks?

Kerdijk and colleagues have compared the effects of cumulative and end-of-term assessment on medical students’ ability to manage their self-study time, as well as on their assessment results.1 The authors found that using spaced testing throughout the course helped students devote regular time to self-study, avoid procrastination and increase their overall study time. In contrast, students preparing for end-of-term assessment tended to cram in the final two weeks of the course.

When comparing student marks between the two assessment formats, the only difference was found in marks for content studied in the final two weeks of the course, with higher marks gained by students sitting the spaced tests. The authors speculated that these students had more cognitive resources available for learning in the final two weeks because they were not cramming for the final exam.

What do you think are the pros and cons of cumulative assessment compared with end-of-term assessment?

Do you think long-term retention of knowledge could be improved through cumulative rather than end-of-term assessment?

In what other ways could teachers help students manage their study time throughout a course?


  1. Kerdijk W, Cohen-Schotanus J, Mulder FBF, Muntinghe FLH, Tio RA. Cumulative versus end-of-course assessment: effects on self-study time and test performance. Medical Education 2015; 49: 709–716. doi:10.1111/medu.12756



Presence with purpose: attitudes of people with developmental disability towards health care students

By Ginette Moores, Natalie Lidster, Kerry Boyd, Tom Archer, Nick Kates and Karl Stobbe



Early clinical encounters help medical and nursing students build professional competencies. However, there is a necessary emphasis on patient autonomy and appropriate consent. Although most individuals do not object to student involvement in clinical encounters, there are occasions when personal preference and health care education conflict. Many studies have evaluated patient attitudes towards students across a variety of specialties.


The purpose of this study was to identify the attitudes, comfort level and preferences of individuals with developmental disability (DD) towards the presence and involvement of medical and nursing students during clinical encounters.


Adults with DD across the Hamilton–Niagara region were invited to participate. Focus groups were moderated by two students with a health care facilitator and physician-educator. Participants were provided with focus group questions in advance and encouraged to bring communication aids or care providers. Data were analysed for emerging themes by two independent reviewers, who then compared results.


Twenty-two individuals participated. A wide range of opinions were expressed. Some participants were positively disposed towards students and perceived better care and improved communication with the health care team. Others were indifferent to students in a clinical setting. The final group was opposed to the presence of health care students, expressing confusion over their role and purpose, uneasiness with deviation from the norm, and concerns about confidentiality. Informative introductions with confidentiality statements and the presence of a supervising clinician were seen as helpful.


People with DD have above-average health care needs, yet their input into health care planning has been limited. Their opinions on health care learners varied considerably. Themes relating to attitudes, comfort and preferences about student involvement provide impetus for health care training practices that promote person-centred approaches and improvements in the quality of care received by people with DD.

DOI: 10.1111/medu.12751