If a test to detect a disease whose prevalence is 1/1000 has a false positive rate of 5%, what is the chance that a person with a positive test result actually has the disease? Assume the test is 100% sensitive.
Everyone taking care of patients, especially in primary care, needs to be able to figure this out. This is a basic understanding of what to do with a positive screening test result. If you can’t figure this out, how would you be able to discuss the results with a patient? Or better yet, how would you be able to counsel a patient on the implications of a positive result before ordering a screening test?
Unfortunately, a study released online on April 21st found that 77% of respondents answered the question incorrectly. These results are similar to those of a study in 1978, which used the same scenario. This is unfortunate, as interpreting diagnostic test results is a cornerstone of EBM teaching, and almost all (if not all) medical schools and residency programs teach EBM principles. So what’s the problem?
Here are some of my thoughts and observations:
- These principles are probably not actually being taught, because the teachers themselves don’t understand them or, if they do, they don’t teach them in the proper context. This needs to be taught in the clinic when residents and medical students discuss ordering screening tests, or on the wards when considering a stress test or cardiac catheterization, etc.
- The most common answer in the study was 95% (the wrong answer). This shows that doctors don’t understand the influence of pretest probability (or prevalence) on post-test probability (or predictive value). They assume a positive test equals disease. They assume a negative test equals no disease. Remember: where you end up (post-test probability) depends on where you start from (pretest probability).
- I commonly see a simple lack of thinking when ordering tests. How many of you stop to think: What is the pretest probability? Based on that do I want to rule in or rule out disease? Based on that do I need a sensitive or specific test? What are the test properties of the test I plan to order? (or do I just order the same test all the time for the same diagnosis?)
- I also see tests ordered for presumably defensive purposes. Does everyone need a CT in the ER? Does everyone need a D-dimer for every little twinge of chest pain? When you ask why a test was ordered, I usually hear something like this: “Well, I needed to make sure something bad wasn’t going on.” I think this mindset transfers to the housestaff and students who perpetuate it. I commonly see the results of the ER CT in the HPI, for God’s sake!!!
- Laziness. There’s an app for that. Even if you can’t remember the formula or how to set up a 2×2 table your smartphone and Google are your friends. Information management is an important skill.
So what’s the answer to the question above? 1.96%. Remember PPV = true positives / (true positives + false positives), so 1 / (1 + 50) = 1/51 ≈ 1.96%. If it’s easier, set up a 2×2 table.
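If you’d rather check the arithmetic than trust the formula, here’s a minimal sketch of the 2×2-table logic in Python, using the numbers from the scenario (prevalence 1/1000, sensitivity 100%, false positive rate 5%) applied to a hypothetical cohort of 1,000 people:

```python
# 2x2-table arithmetic for the scenario in the post (illustrative sketch).
population = 1000
prevalence = 1 / 1000
sensitivity = 1.0           # test catches every true case (100% sensitive)
false_positive_rate = 0.05  # 5% of disease-free people test positive anyway

diseased = population * prevalence           # 1 person with disease
healthy = population - diseased              # 999 people without disease

true_positives = diseased * sensitivity              # 1
false_positives = healthy * false_positive_rate      # ~50

# Positive predictive value: of all positive tests, how many are true?
ppv = true_positives / (true_positives + false_positives)
print(f"PPV = {ppv:.2%}")  # about 2%
```

Note that the 50 false positives swamp the single true positive, which is exactly why the intuitive answer of 95% is so far off.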
This very sensitive (100%) and fairly specific (95%) test (positive LR of 20!) wasn’t very informative when positive: the probability of disease only went from 0.1% to about 2%. The patient is still unlikely to have the disease even with a positive test. The result would have been more useful had it been negative. Thus, in a low-probability setting your goal is to rule out disease, and you should choose the most sensitive test (remember SnNout: a Sensitive test, when Negative, rules out disease).