Recently I tried something new for our resident journal club. I am using Edmodo (www.edmodo.com) to post the readings and the protocol (boring) and to stimulate learning by posting "Homework Questions" that require learners to digest the background information and relay how they might use it (exciting). One resident had a great insight that many of us don't think about: the limitations of our tools. She pointed out that a clinical prediction rule she uses (the CHADS2 score for afib) wasn't as good as she thought.
For some reason, as doctors, we assume tests are really good, prediction rules are accurate, and treatments always help. These tools are better than nothing (some of the time), but they aren't perfect. This resident pointed that out in an insightful way. Bravo to her. I challenged our journal club group to look at the tools they use regularly, review how they were developed, and know their limitations.
I see rules used improperly all the time (for example, reporting a TIMI score for a patient with nonanginal chest pain), and when I call the resident or student out for using one improperly, I usually get a blank stare. They don't realize the tool can't be used in that situation. They are using tools in ways they weren't developed for, and who knows whether they work in that application. We as educators, especially as EBM educators, need to challenge ourselves and our learners to get to know the tools we rely on (how they were developed and validated) and to know their limitations.