Useful diagram to teach basic EBM concepts

Dr. La Rochelle published an article in BMJ EBM this month with a very useful figure in it (see below). It is useful because it can help our learners (and ourselves) remember the relationship between the type of evidence and its believability/trustworthiness.


Let's work through this figure. The upright triangle should be familiar to EBM aficionados: it is the typical hierarchy-of-study-designs triangle, with the lowest-quality evidence at the bottom and the highest quality at the top (assuming, of course, that the studies were conducted properly). The “Risk of Bias” arrow next to this upright triangle reflects the quality statement I just made. Case reports and case series, because they have no comparator group and aren't systematically selected, are at very high risk of bias. A large RCT or a systematic review of RCTs is at the lowest risk of bias.

The inverted triangle on the left reflects possible study effects, with the width of the corresponding area of the triangle (as well as the “Frequency of Potential Clinically Relevant Observable Effect” arrow) representing the prevalence of that effect. Thus, very dramatic, treatment-altering effects are rare (bottom of the triangle, very narrow). Conversely, small effects are fairly common (top of the triangle, widest part).

One way to use this diagram in teaching is to consider the study design you would choose (or look for) based on the anticipated magnitude of effect. Thus, if you are trying to detect a small effect, you will need a large study that is methodologically sound. Remember, bias is a systematic error in a study that makes its findings depart from the truth. Small effects seen in studies lower down the upright pyramid are potentially biased (i.e., not true). If you anticipate very large effects, then observational studies or small RCTs might be just fine.
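The "small effect needs a big study" point can be made concrete with the standard two-arm power calculation. This is a generic statistical sketch, not anything from the article; the function name and the effect sizes plugged in are my own illustrative choices.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Approximate patients per arm for a two-sample comparison of means,
    using the normal approximation: n = 2 * (z_{1-a/2} + z_{1-b})^2 * (sigma/delta)^2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = z.inv_cdf(power)           # desired power
    return ceil(2 * (z_alpha + z_beta) ** 2 * (sigma / delta) ** 2)

# A moderate effect (half a standard deviation) needs a modest trial,
# while an effect one-fifth that size needs ~25 times as many patients:
print(n_per_group(delta=0.5, sigma=1.0))  # 63 per arm
print(n_per_group(delta=0.1, sigma=1.0))  # 1570 per arm
```

Because the required n grows with the inverse square of the effect size, a dramatic effect can show up even in a small study, exactly as the triangle suggests.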

An alternative way to use this diagram with learners is to temper the findings of a study. If a small effect is seen in a small, lower-quality study, they should be taught to question that finding as likely departing from the truth. Don't change clinical practice based on it; await another study. A very large effect, even in a lower-quality study, is likely true but maybe not as dramatic as it seems (i.e., mentally reduce the effect by 20-30%).

I applaud Dr. La Rochelle for developing a figure which explains these relationships so well.

Answering Clinical Questions at the Point of Care: It's Time to Stop Making Excuses!

Del Fiol and colleagues published a systematic review of studies examining the questions raised and answered by clinicians in the context of patient care. The studies they examined used several methodologies, including after-visit interviews, clinician self-report, direct observation, analysis of questions submitted to an information service, and analysis of information-resource search logs. Each of these methodologies has its pros and cons. I'll review their findings following the 4 questions that they asked.

How often do clinicians raise clinical questions? On average, clinicians ask 1 question for every 2 patients (range 0.16-1.85).

How often do clinicians pursue questions they raise? On average, they only pursued 47% (range 22-71%).

How often do clinicians succeed at answering the questions they pursue? They were pretty successful when they decided to pursue an answer: 80% of the time they were able to answer the question. Interestingly, clinicians spent less than 2-3 minutes seeking an answer to a specific question. Clearly, when they decided to pursue an answer, they were choosing questions that could be answered fairly quickly.

What types of questions were asked? Overall, 34% of questions were related to drug treatment, while 24% were related to the potential causes of a symptom, physical finding, or diagnostic test result.
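Putting the review's averages together shows how few raised questions ever get answered. A back-of-the-envelope sketch (the 100-patient panel is my own illustrative assumption; the rates come from the numbers above):

```python
patients = 100
questions = patients * 0.5   # ~1 question raised per 2 patients seen
pursued = questions * 0.47   # only ~47% of raised questions are pursued
answered = pursued * 0.80    # ~80% of pursued questions get answered

# Roughly 19 of 50 raised questions (~38%) end up answered.
print(f"raised={questions:.0f}, pursued={pursued:.1f}, answered={answered:.1f}")
```

In other words, the big leak in the pipeline is not search failure but the decision never to pursue the question at all.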

I find 3 other findings (from the Box in the manuscript) interesting:

  • Most questions were pursued with the patient still in the practice (not sure if the clinicians searched in front of the patient or left the room; more on this later)
  • Most questions, as you would expect, are highly patient-specific and nongeneralizable.  This is unfortunate for long-term learning.
  • Also unfortunately, clinicians mainly used paper and human resources (more on this in a minute)


Even though Del Fiol examined barriers to answering questions, I refer to another study, by Cook and colleagues, that more closely examined barriers to answering clinical questions at the point of care. Cook conducted focus groups with a sample of 50 primary care and subspecialist internal medicine and family medicine physicians to understand the barriers and factors that influence point-of-care learning and question answering. Of course, the main barrier is time. This study was done in late 2011 into early 2012 and included participants across a wide range of ages. With the resources now available on both desktop and handheld devices, this barrier should be declining, especially when you consider that the most common question clinicians ask is about drug treatment.

Physicians frequently noted patient complexity as a barrier. Complex patients require more time and often raise more complex questions that are harder to answer with many resources. Almost all guidelines and studies focus on single-disease patients; multimorbidity is rarely covered. Thus many answers will likely rely on clinical expertise and judgment. This is where turning to human resources is likely to occur. I bet few of us question how up to date the colleagues we ask actually are.

Interestingly, Cook's study participants identified the sheer volume of information as a barrier. As a result, these physicians used textbooks more than electronic resources. I wonder if they understand that a print textbook is at least a year out of date by the time it hits the market. How often do they update their textbooks? Likely rarely: just look at a private-practice doctor's bookshelf and you will often see books that are at least 2 or more editions out of date.

Finally, the physicians in Cook’s study felt that searching for information in front of the patient might “damage the patient-physician relationship or make patients uncomfortable.” They couldn’t be more wrong. Patients actually like when we look things up in front of them.  I always do this and I tell them what I am looking up and admit my knowledge limitation. I show them what I found so they can participate in decision making. No one can know everything and patients understand that. I would be wary of a physician who doesn’t look something up.

So, how should a busy clinician go about answering clinical questions?

  1. You must have access to trustworthy resources. 2 main resources should suffice: a drug resource (like Epocrates or Micromedex; both are free and available for smartphones and desktops) and what Haynes labels a “Summary” (Dynamed or UpToDate). I leave guidelines out here (even though they are classified as a “Summary” resource) because most guidelines are too narrowly focused and many are not explicit enough about their biases.
  2. Answer the most important questions (most important to the patient first, then most important to improving your knowledge second). If the above resources can't answer your question and you must consult a colleague, challenge them to support their opinion with data. You will learn something, and likely they will too.
  3. Answer the questions you can in the available time. Many questions can be answered in 5 minutes or less using the above resources. You are more likely to search for an answer while the patient is in your office than to wait until the end of the day (the studies cited above can attest to that).
  4. Be creative in answering questions. I saw a great video by Paul Glasziou (sorry, I can't remember which of his videos it was, so I can't link it) where he discussed a practice-based journal club. Your partners are likely developing questions similar to yours. This is how he recommends organizing each journal club session: step 1 (10 min), discuss new questions or topics to research; step 2 (40 min), read and appraise (if needed) a research paper addressing last week's problem; and step 3 (10 min), develop an action plan to implement what you just learned. This is doable, makes your practice evidence-based, and feels somewhat academic. If you follow the Haynes hierarchy and pick the right types of journal articles (synopses and summaries), you can skip the appraisal part and use the evidence directly.

Ultimately you have to develop a culture of answering questions in your practice. It has to be something you truly value or you won’t do it.  Resources are available to answer questions at the point of care in a timely fashion. At some point we have to stop making excuses for not answering clinical questions.

What’s The Evidence For That? A new series I am starting

One thing I have noticed is that current residents don't seem to know the evidence supporting a lot of the treatments they use, especially if the studies were done before they started med school. I also don't think they really want to read articles from the past when there are so many new things that excite them more. So I came up with a new series of data summaries I am going to make for teaching purposes during rounds, to remind the residents and students that there is evidence behind some of what we do. I designed them to be one-pagers that answer what I feel are the key questions on the particular topic I am covering. I also try to follow the Haynes 6S hierarchy and focus on evidence higher up the pyramid (that isn't UpToDate or Dynamed). I want to hit the less sexy topics that we encounter a lot on the inpatient medicine service, like COPD exacerbation, hepatic encephalopathy, etc.

So here’s the first one I made: What’s The Evidence For That: Steroids for COPD Exacerbation. (Steroids for AECOPD) This took about 1.5-2 hours to make… mostly because I had to figure out how to make the template in Publisher do what I wanted it to do (and it fought me all the way).

Feel free to copy it and use it in your clinical teaching. Let me know if it is useful and how it could be made better. If you make any, share them with me.

As I make more of these I will publish them here. I also plan to make Touchcasts of them and will post that here when I do.


Journal Club- The UAB Experience

Just about every internal medicine residency program has a journal club. One could argue about the evidence behind this activity, but it seems to serve its purpose, if for nothing else than to make housestaff read some journal articles (and not just UpToDate!). I think it does serve a purpose of encouraging critical appraisal of, and thinking about, research publications. Doctors will always have to read new research studies. It takes time for studies to be incorporated into secondary publications like Dynamed and UpToDate. Furthermore, not everything makes it into these evidence-based resources. Also, research (published in every journal) is full of biases that make findings depart from the truth. Critical appraisal is the only way to detect them.


Since 1999 or so I have been intimately involved in the journal club at UAB. At times I have run it completely, but now I serve more as a guide and EBM expert for one of the chief residents, who puts it all together. I think it has gotten greater buy-in from the housestaff coming from the chief medical resident (CMR) instead of me.

So I thought I would cover some of what we have done at UAB. Not that we are the world's beacon for journal club, but we have tried a lot of things over the years. Some of it failed; some of it succeeded.

Time of day: we have tried everything from 8 am to noon to evenings at a faculty member's house. What has gotten the best turnout is 8 am, before the residents' day gets started.

Article Selection: This has been a debatable topic since day 1. We have done several things:
1) Latest articles in major journals
2) Rotating subspecialty articles (one month cardiology, one month GI, etc)
3) Article chosen by resident based on problems they saw during patient care
4) Article chosen by me to prove an EBM principle
5) Now we seem to be focusing on articles written by UAB faculty so that they can attend as expert guests.
6) We are considering using classics in medicine articles that are the foundation of what we do (eg first article on ACE inhibitors in CHF) because current residents are unlikely to ever read these articles.

Format: We seem to vary this almost yearly:
1) Faculty reviews article and asks questions of the housestaff about what various things mean
2) Teams of residents argue for or against using a drug, etc against another team of residents
3) Each individual reads the article and comes to JC not knowing what they could potentially be asked
4) A handout with Users Guides questions and a few other questions on design or applying the information is given out ahead of time but is only discussed by those willing to answer
5) Same handout given but with individual residents assigned specific questions to answer (this was the first time we could show that the residents actually read the paper ahead of time)
6) Groups of residents work on questions outside of JC on their own time (usually with a 3rd-year resident assigned to coordinate the group meeting), with the expectation that they teach the other groups at JC. (This actually worked pretty well.)
7) Last year we went to a flipped-learning format where I put a lot of material online for the residents to work through ahead of time (if they needed to), with assigned questions to be answered by individual residents. They felt it was too much work to go through all the material online.
8) This year we have moved to perhaps our most successful format (from a resident-satisfaction standpoint), where a handout of questions is answered in JC as a group project. A faculty expert gives a very short didactic talk at 2 points during JC on a very specific EBM topic related to the article (e.g., what is a likelihood ratio). The only expectation is that the article is read prior to JC. We still use somewhat of a flipped format, where I reference a short video or 2 to watch about topics in the chosen article, but it's much less time-intensive than last year.
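A likelihood-ratio mini-talk of the kind mentioned above boils down to a few lines of odds arithmetic. Here's a minimal sketch; the sensitivity, specificity, and pretest probability are made-up illustrative numbers, not from any article discussed here.

```python
# Hypothetical test characteristics (illustrative only)
sens, spec = 0.90, 0.80

lr_pos = sens / (1 - spec)          # LR+: how much a positive result raises the odds
lr_neg = (1 - sens) / spec          # LR-: how much a negative result lowers the odds

# Apply LR+ to a 30% pretest probability via the odds form of Bayes' rule
pretest_p = 0.30
pretest_odds = pretest_p / (1 - pretest_p)
posttest_odds = pretest_odds * lr_pos
posttest_p = posttest_odds / (1 + posttest_odds)

print(f"LR+ = {lr_pos:.1f}, post-test probability = {posttest_p:.0%}")  # LR+ = 4.5, 66%
```

The same two lines of odds arithmetic work for a negative result; just multiply the pretest odds by `lr_neg` instead.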

I think overall what has been successful for us is when JC has the following elements:
1) Group work. Engaged learning is always desirable.
2) Clinical and EBM faculty expert present. Seems to give the article a little more value.
3) Case-based. We always solve a real world problem. I always tell the CMR making up JC to make sure the residents walk away with something they can use clinically.
4) Flipped light: giving the residents some information about EBM principles, but not too much, leads many of them to actually watch the videos or read the background papers. They come much more prepared and have a good basic knowledge that we can then build upon.

EBM Is In Jeopardy: Gamify a Lecture to Make It More Interesting

This week I did an EBM “lecture” based around the game show Jeopardy!. Now, I know this isn't anything new; lots of teachers have used a Jeopardy format to teach. The point is that it took the content of “EBM Potpourri” (a group of topics that don't fit well into the other lectures I give) and made it more interesting than a traditional 1-hour lecture (which is how I have presented this material in the past).

The first challenge when doing this is to figure out your main teaching points and include only them, since you don't have a lot of extra space for less important topics (but shouldn't we be doing this anyway?). The next challenge was to make gradually harder questions within each topic. I limited some of the questions to certain learner levels (I teach internal medicine residents organized into interns, 2nd years, and 3rd years) to make sure everyone participated independently at least somewhat. The residents only got about 40% of the questions right, but that wasn't the point. The point was to convey my teaching points and to engage the learners. They worked in their teams (each consisting of an intern, a 2nd year, and a 3rd year) to solve problems. The competition between teams for “great prizes” (a certificate of appreciation for the 3rd-place team, Rice-A-Roni for the 2nd-place team, and lunch with me for the winners) made them take it a little more seriously.

If you would like the original PowerPoint file to use in your teaching I’ll be happy to email it to you. Contact me at

What unique ways have you taught EBM topics?