Which journals contain the best content for a general internist?

I did an entirely unscientific survey today of the tables of contents of several journals from March 1 through May 31, 2015. I evaluated the titles of original research and systematic review articles for their usefulness to a general internist. I considered an article useful if it provided information that, as a general internist, I could use to take care of an outpatient or inpatient (non-ICU). I then calculated a simple proportion: useful studies divided by total studies published. Admittedly there are major limitations to my study, but it was informative. Here’s what I found (from least to most useful):

  1. NEJM 20.4% useful
  2. Annals of Internal Medicine 25%
  3. Lancet 29.5%
  4. JAMA 32.4%
  5. BMJ 46.1%
  6. JAMA Internal Medicine 61.5%

I postulated that predigested/preappraised journals would be more useful and they were.

  1. BMJ EBM 52.2% (2 caveats here: I am an associate editor of BMJ EBM so I have that conflict, and BMJ EBM publishes pediatric, OB/GYN and surgical studies also. Thus, the % of articles for a general internist is reduced as the total number of articles published is fixed at 23 per issue)
  2. ACP Journal Club 65.7%
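The tally above is just a simple proportion. As a minimal sketch (the counts below are hypothetical placeholders, not the actual numbers from my survey):

```python
def usefulness(useful_articles, total_articles):
    """Percent of a journal's articles a general internist could act on."""
    return round(100 * useful_articles / total_articles, 1)

# Hypothetical example: a journal publishing 54 articles, 11 of them useful.
print(usefulness(11, 54))  # → 20.4
```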

Does this surprise you one way or another? Did you expect an even lower percentage or a higher percentage for each journal?

This could be a useful technique for deciding which journal(s) to subscribe to. Review the main journals in your area, decide which contain the best content, and focus on those. Don’t rely on a journal’s reputation alone.

Alternatively, get BMJ EBM or ACP Journal Club and read it cover to cover. You will be very up to date if you do that. Only clinically important and methodologically sound articles make it into these publications. In addition, these journals include expert commentary about each article.

Answering Clinical Questions at the Point of Care: It’s Time to Stop Making Excuses!

Del Fiol and colleagues published a systematic review of studies examining the questions raised and answered by clinicians in the context of patient care. The studies they examined used several methodologies, including after-visit interviews, clinician self-report, direct observation, analysis of questions submitted to an information service, and analysis of information resource search logs. Each of these methodologies has its pros and cons. I’ll review their findings following the 4 questions that they asked.

How often do clinicians raise clinical questions? On average, clinicians ask 1 question for every 2 patients (range 0.16-1.85).

How often do clinicians pursue questions they raise? On average, they only pursued 47% (range 22-71%).

How often do clinicians succeed at answering the questions they pursue? They were pretty successful when they decided to pursue an answer: 80% of the time they were able to answer the question. Interestingly, clinicians spent less than 2-3 minutes seeking an answer to a specific question. They were clearly choosing questions that could be answered fairly quickly when they decided to pursue an answer.

What types of questions were asked? Overall, 34% of questions were related to drug treatment while 24% were related to the potential causes of a symptom, physical finding, or diagnostic test finding.
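Chaining these averages gives a rough expected yield per patient (my back-of-the-envelope arithmetic, not a figure reported in the review):

```python
# Chain the review's average rates to estimate answered questions per patient.
questions_per_patient = 1 / 2   # ~1 question raised for every 2 patients
pursued = 0.47                  # fraction of raised questions pursued
answered = 0.80                 # fraction of pursued questions answered

answered_per_patient = questions_per_patient * pursued * answered
print(round(answered_per_patient, 2))  # → 0.19
```

So on average only about 1 in 5 patient encounters ends with an answered clinical question.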

I find 3 other findings (from the Box in the manuscript) interesting:

  • Most questions were pursued with the patient still in the practice (not sure if the clinicians searched in front of the patient or left the room; more about this later)
  • Most questions, as you would expect, are highly patient-specific and nongeneralizable. This is unfortunate for long-term learning.
  • Also unfortunately, clinicians mainly used paper and human resources (more on this in a minute)


Even though Del Fiol examined barriers to answering questions, I refer to another study, by Cook and colleagues, that more closely examined barriers to answering clinical questions at the point of care. Cook conducted focus groups with a sample of 50 primary care and subspecialist internal medicine and family medicine physicians to understand the barriers and factors that influence point-of-care learning and question answering. Of course the main barrier is time. This study was done in late 2011 into early 2012 and included a wide range of ages of participants. With the resources now available on both desktop and handheld devices, this barrier should be declining, especially when you consider that the most common question clinicians ask is about drug treatment.

Physicians frequently noted patient complexity as a barrier. Complex patients require more time and often raise more complex questions that are harder to answer with many resources. Almost all guidelines and studies focus on single-disease patients; multimorbidity is rarely covered. Thus many answers will likely rely on clinical expertise and judgment. This is where human resources are likely to be used. I bet few of us question how up to date the colleagues we ask actually are.

Interestingly, Cook’s study participants identified the sheer volume of information as a barrier. As a result, these physicians used textbooks more than electronic resources. I wonder if they understand that a print textbook is at least a year out of date by the time it hits the market. How often do they update their textbooks? (Likely rarely. Just look at a private practice doctor’s bookshelf and you will often see books that are 2 or more editions out of date.)

Finally, the physicians in Cook’s study felt that searching for information in front of the patient might “damage the patient-physician relationship or make patients uncomfortable.” They couldn’t be more wrong. Patients actually like when we look things up in front of them.  I always do this and I tell them what I am looking up and admit my knowledge limitation. I show them what I found so they can participate in decision making. No one can know everything and patients understand that. I would be wary of a physician who doesn’t look something up.

So, how should a busy clinician go about answering clinical questions?

  1. You must have access to trustworthy resources. 2 main resources should suffice: a drug resource (like Epocrates or Micromedex; both are free and available for smartphones and desktops) and what Haynes labels a “Summary” (Dynamed or UpToDate). I leave guidelines out here (even though they are classified as a “Summary” resource) because most guidelines are too narrowly focused and many are not explicit enough about their biases.
  2. Answer the most important questions (most important to the patient first, then most important to improving your knowledge). If the above resources can’t answer your question and you must consult a colleague, challenge them to support their opinion with data. You will learn something and likely they will too.
  3. Answer the questions you can in the available time. Many questions can be answered in 5 minutes or less using the above resources. You are more likely to search for an answer while the patient is in your office than to wait until the end of the day (the studies cited above can attest to that).
  4. Be creative in answering questions. I saw a great video by Paul Glasziou (sorry, can’t remember which of his videos it was to link) where he discussed a practice-based journal club. Your partners are likely developing similar questions as you are. This is how he recommends organizing each journal club session: step 1 (10 min) discuss new questions or topics to research, step 2 (40 min) read and appraise (if needed) a research paper addressing last week’s problem, and step 3 (10 min) develop an action plan to implement what you just learned. This is doable and makes your practice evidence-based and feel somewhat academic. If you follow the Haynes hierarchy and pick the right types of journal articles (synopses and summaries) you can skip the appraisal part and just use the evidence directly.

Ultimately you have to develop a culture of answering questions in your practice. It has to be something you truly value or you won’t do it.  Resources are available to answer questions at the point of care in a timely fashion. At some point we have to stop making excuses for not answering clinical questions.

Conflicts of Interest in Online Point of Care Websites

Full disclosure: I was a Society of General Medicine UpToDate reviewer several years ago and received a free subscription to UpToDate for my services. I also use UpToDate regularly (through my institution’s library).

Amber and colleagues published an unfortunate finding this week in the Journal of Medical Ethics. They found that UpToDate seems to have some issues with conflicts of interest by some of its authors and editors.

UpToDate makes this claim on its Editorial subpage: “UpToDate accepts no advertising or sponsorships, a policy that ensures the integrity of our content and keeps it free of commercial influences.” Amber and colleagues’ findings would likely dispute that the content is “free of commercial influences”.

Amber and colleagues reviewed the Dynamed and UpToDate websites for COI policies and disclosures. They searched only a limited number of conditions on each site (male sexual dysfunction, fibromyalgia, hypogonadism, psoriasis, rheumatoid arthritis, and Crohn’s disease), but their reasoning seems solid: treatment of the first 3 can be controversial, and treatment of the last 3 primarily involves biologics. It seems reasonable that expert opinion could dominate recommendations on male sexual dysfunction, fibromyalgia, and hypogonadism, and that those experts could be conflicted. (Editorial side note: few doctors recommend stopping offending medications or offer vacuum erection devices instead of the “little blue pill”. Most patients don’t even realize there are treatments for ED other than “the little blue pill” and its outdoor-bathtub-loving competitor! But I digress.) The biologics also make sense to me because this is an active area of research, and experts writing these types of chapters could get monies from companies either for research or for speaking.

What did they find? Both Dynamed and UpToDate mandate disclosure of COIs (a point I will discuss later). No Dynamed authors or editors reported any COIs, while quite a few were found for UpToDate. Of the 31 different treatments mentioned for the 6 topic areas evaluated, for 14 (45%) the authors, editors, or both had received grant monies from the company making the therapy. Similarly, for 45% of the therapies the authors, editors, or both were consultants for the companies making them. For 5 of the 31 therapies, authors or editors were on the speakers bureaus of the companies making them. What’s most worrisome is that both the authors and editors were conflicted for the psoriasis chapter. Thus there were no checks and balances for this topic at all!


While finding COIs is worrisome it doesn’t mean that the overall quality of these chapters was compromised nor that they were biased. We don’t know at this time what effect the COIs had. Further study is needed. Unfortunately, this is probably just  the tip of the iceberg. Many more chapters probably suffer from the same issues. Furthermore, traditional textbooks likely have the same problems.

Disclosing COIs is mostly useless; disclosure doesn’t lessen their impact. I don’t understand why nonconflicted authors can’t be found for these chapters. Do we care so much about who writes a chapter that we potentially compromise ethics for name recognition? Those with COIs should not be allowed to write a chapter on topics for which they have a conflict. Period. If UpToDate is so intent on having them, maybe they could serve as consultants to the authors or as peer reviewers, but even that is stretching it. What really bothers me is that the editors for some of these chapters were also conflicted, thus leaving no check on potential biases. As Amber and colleagues point out, even though these chapters underwent peer review, what do we know about the peer reviewers? Did they have any COIs? Who knows.

So what should all you UpToDate users do? I suggest the following:

  1. Contact UpToDate demanding a change.  They have the lion’s share of the market and until they lose subscribers nothing will likely change.
  2. Check for COIs yourself before using the recommendations in any UpToDate chapter. You should be especially wary if a recommendation begins with “In our opinion….” (an all-too-frequent finding).
  3. Use Dynamed instead. It has a different layout than UpToDate but is updated more quickly and seems less conflicted. And it’s cheaper!

Reading Journal Article Abstracts Isn’t As Bad As I Thought

Physicians mainly read the abstract of a journal article (JAMA 1999;281:1129). I must admit I am guilty of this also. Furthermore, I would bet that the most often read section of the entire article is the conclusions of the abstract. We are such a soundbite society.


I had always thought the literature showed how bad abstracts were…that they were often misleading compared to the body of the article. But I was wrong. A recent study published in BMJ EBM found that 53.3% of abstracts had a discrepancy compared with information in the body of the article. That sounds bad, doesn’t it? But only 1 of the discrepancies was clinically significant. Thus most were not important enough to potentially cause patient harm or alter a clinical decision.

This is good news as effectively practicing EBM requires information at the point of care. Doctors don’t have time to read an entire article at the point of care for every question they have but they do have time to read an abstract. It’s good to know that structured abstracts (at least from the major journals that were reviewed in this study) can be relied upon for information. I especially like reading abstracts in evidence based journals like BMJ EBM or ACP Journal Club as even their titles give the clinical information you need.

Why Aren’t All Journals Open Access?

Many will say this is a stupid question. Of course they can’t be open access because the journal needs to make money to exist. But think about this: what is the point of a published medical journal article?

Isn’t knowledge dissemination the main point of a journal article? So why do we not have access to all journal articles?


How would journals continue to exist without subscriptions and advertising dollars? The model for most open access journals is that the authors pay for their article to be published. That is a viable option. Another option is that journals could develop special content that would still attract subscribers and not be open access. For example, JAMA has great content about the clinical examination and how to read research articles. Journals could develop apps for practice based application of material. Etc. Etc.

What about advertisers? Why would they continue to take out ads? Think about open access: even more people would see the ads. As part of the “right” to download an article, a small advertising banner could be printed somewhere on the article. Thus, advertiser exposure is increased even more.

I don’t pretend to have all the answers, but it seems that medical knowledge should be in the hands of those who need it, when they need it. The most important information gets published in the most restrictive journals. I think it’s time to develop a creative way to fund open access to all journals.

What do you think? What are your solutions?

Why Can’t Guideline Developers Just Do Their Job Right????

I am reviewing a manuscript about the trustworthiness of guidelines for a prominent medical journal. I have written editorials on this topic in the past (http://jama.jamanetwork.com/article.aspx?articleid=183430 and http://archinte.jamanetwork.com/article.aspx?articleid=1384244). The authors of the paper I am reviewing examined the recommendations made by 3 separate medical societies on the use of a certain medication for patients with atrial fibrillation. The data on this drug can be summarized as follows: little benefit, much more harm. But as you would expect, these specialists recommended its use in the same sentence as other safer and more proven therapies. They basically ignored the side effects and focused only on the minimal benefits.

Why do many guideline developers keep doing this? They just can’t seem to develop guidelines properly. Unfortunately their biased products have weight with insurers, the public, and the legal system. The reasons are complex but solvable. A main reason (in my opinion) is that they are stuck in their ways. Each society has its guideline machine and they churn them out the same way year after year. Why would they change? Who is holding them accountable? Certainly not journal editors. (As a side note: the journals that publish these guidelines are often owned by the same subspecialty societies that developed the guidelines. Hmmmm. No conflicts there.)


The biggest problem, though, is conflicts of interest. There is intellectual COI. Monetary COI. Converting data to recommendations requires judgment, and judgment involves values. Single-specialty medical society guideline development panels involve the same types of doctors, with shared values. But I always wonder: how much did the authors of these guidelines get from the drug companies? Are they so married to this drug that they don’t believe the data? Is it ignorance? Are they so intellectually dishonest that they only see benefits and can’t understand harm? I don’t think we will ever truly understand this process without having a proverbial fly on the wall during guideline deliberations.

Until someone demands a better job of guideline development I still consider them opinion pieces or at best consensus statements. We need to quit placing so much weight on them in quality assessment especially when some guidelines, like these, recommend harmful treatment.

ACCESSSS Search Engine is Amazing

Well, to me anyway. I tell anyone who will listen that answering clinical questions needs to be approached systematically. Brian Haynes, MD, developed the 6S approach to finding answers to clinical questions. This diagram shows what the 6 S’s are, with examples of each. The principle is to start at the top and work your way down until you answer your question. Resources toward the top are the most methodologically sound and most useful; single studies at the bottom are the most prone to bias and require the most work to use. Previously you had to access each individual resource at each level trying to find answers to your questions.

Well, no more. I stumbled upon the ACCESSSS Federated Search engine today and it’s incredible (http://plus.mcmaster.ca/ACCESSSS/Default.aspx?Page=1). It searches the 6S hierarchy for you, showing resources that can answer your question at each level. It’s much better than the TRIP database. What’s more, you can request that they add your library so that you have full access to the resources your library subscribes to.

ACCESSSS is a service to help provide current best evidence for clinical decisions. It conducts literature searches simultaneously in several different evidence-based information services (online evidence-based texts, and pre-appraised journal publications). ACCESSSS also provides email alerts to newly published evidence in the user’s chosen area(s) of training/interest.

Searching ACCESSSS yields content that is hierarchically organized: Always look first at the content available at the highest level of the hierarchy, as it is most likely to be useful for clinical purposes.

The hierarchy is based on principles of evidence-based decision-making:

  • Systems provide patient-specific computerized decision support – “under construction” at present
  • Summaries provide the best summarization of evidence for entire clinical topics (e.g., asthma, diabetes)
  • Synopses are brief abstracts of high-quality original studies and systematic reviews
  • Syntheses are systematic reviews of original studies
  • Studies are original investigations, such as randomized trials
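The “start at the top, stop when answered” principle is easy to express. Here is a minimal sketch of the idea (the level names come from the list above, but the lookup function is a hypothetical stand-in, not ACCESSSS’s actual API):

```python
# Illustrative top-down search of the hierarchy. The lookup function is a
# hypothetical stub; the real federation is what ACCESSSS does for you.
SIX_S_LEVELS = ["Systems", "Summaries", "Synopses", "Syntheses", "Studies"]

def answer_question(question, lookup):
    """Work down the hierarchy; stop at the first level that has a hit."""
    for level in SIX_S_LEVELS:
        hits = lookup(level, question)
        if hits:
            return level, hits
    return None, []

def toy_lookup(level, question):
    # Pretend only the Summaries level has relevant content.
    return ["chapter on " + question] if level == "Summaries" else []

print(answer_question("asthma", toy_lookup))  # → ('Summaries', ['chapter on asthma'])
```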

I hope I can convince my library at UAB to request that they be added as a resource. It appears to be the holy grail of search engines. Thank you McMaster HiRU!