SPRINT Trial Misunderstood and Misapplied, Part 1 (Not Knowing Who’s in the Study)

The SPRINT trial was an important addition to the evidence base in hypertension. Previous studies had shown that intensive BP lowering in patients with type 2 diabetes (<120 vs <140 mm Hg) and in patients with a previous stroke (<130 vs <150 mm Hg) resulted in no significant benefit in major cardiovascular events (except for stroke in diabetics). The natural question was whether intensive BP control matters in patients without diabetes or previous stroke. This became even more important when JNC-8 recommended less stringent goals than previous JNC guidelines.

Unfortunately, I have seen physicians and residents I work with become overzealous in extending the SPRINT results to other patient groups, especially those the trial excluded. Interestingly, when I question them about SPRINT and who was actually studied, they either assumed it enrolled all patients with HTN (because they hadn’t actually read the inclusion/exclusion criteria) or knew who it was restricted to but assumed that higher-risk patients with diabetes or stroke would benefit equally (or even more, which seems intuitive).

So my first point: actually read a study and know who was studied (and, importantly, who wasn’t) before you start using it.

This seems like an intuitive statement, but many of my colleagues and trainees simply haven’t closely examined the study. They have heard abbreviated results at conferences or from faculty in clinic and assume it applies broadly.

So who was in SPRINT? To be included, a patient had to be at least 50 years of age, have a systolic BP of 130-180 mm Hg, and be at increased risk of cardiovascular disease (clinical or subclinical CVD, CKD with eGFR 20-59 ml/min, Framingham 10-yr risk >15%, or age over 75). Patients with diabetes or prior stroke were excluded. Let’s see what they looked like by checking out Table 1.

[Figure: SPRINT Table 1, baseline characteristics]

These patients had fairly good baseline blood pressures and were already on an average of almost 2 antihypertensive medications. They had fairly good lipid profiles, and around 43% were on statins. The majority were nonsmokers, and the mean 10-yr Framingham risk was about 20%. These patients are somewhat healthier than the patients I see.

Point 2: Compare the patients in the study to the patients you see. Are yours sicker or healthier? How would you adjust the results to fit your patients?

Don’t assume the study enrolled the average patient or that your patients will be just like those in the study.

In Part 2 I’ll analyze the intervention and outcome measures of the study.

 

Misconceptions about screening are common. Educate your patients.

An article published online today in JAMA Internal Medicine is very revealing about the misconceptions patients can have about screening, in this case lung cancer screening. The study was conducted at 7 VA sites launching a lung cancer screening program. Participants underwent semi-structured qualitative interviews about health beliefs related to smoking and lung cancer screening, and they held some interesting beliefs:

    • Nearly all participants mentioned the belief that everyone who is screened will benefit in some way
    • Many participants wanted to undergo screening to see “how much damage” they had done to their lungs
    • Rather than being alarmed by identification of a nodule or suspicious findings requiring monitoring with future imaging, several participants expressed the belief that identification of the nodule meant their cancer had been found so early that it was currently harmless

From https://upload.wikimedia.org/wikipedia/commons/3/3f/Thorax_CT_peripheres_Brronchialcarcinom_li_OF.jpg

It’s important to educate our patients on what screening is and isn’t; they need to understand its role. I like to ask patients what they expect to get out of screening, which can help you discover their misconceptions. They also need to understand that they still have to change behaviors (in this case smoking) even if the screening test is negative. I think all too often we order the screening test because a clinical reminder tells us to, without thinking about how our patients might interpret it.

Food for thought: what is the false positive rate of CT screening for lung cancer?

Click here and read the results section of this abstract for the answer. Shocking, isn’t it?

Treating Low T can be dangerous

I am bombarded with low T (low testosterone) commercials on the radio and television. There is a men’s health clinic in my city that will screen and treat men for this horrendous affliction, guaranteeing greater sexual prowess and a happy marriage. What they don’t mention are the side effects, which can be deadly.

An important study published in JAMA in 2013 showed increased cardiac risk in veterans who were prescribed testosterone. The caveat is that all of the patients in the study had undergone cardiac catheterization and were thus at higher risk for CAD than those who don’t undergo cardiac cath. As shown in the image below, at any given point during follow-up those assigned to testosterone were at 29% greater risk of death, MI, or stroke than those on no testosterone therapy. Adjusting for the presence of CAD had no effect on the estimate of outcomes; thus, even those without CAD (by catheterization) were at increased risk of death, MI, and stroke. Most patients in this study got patches or injections; around 1% got the gel.

[Figure: survival curves for testosterone therapy vs no therapy]
A new study has looked at differences in risk among testosterone dosage forms. This was a huge retrospective cohort (544K patients): 37.4% injection, 6.9% patch, and 55.8% gel users. The outcomes of interest were myocardial infarction (MI), unstable angina, stroke, and a composite acute event (MI, unstable angina, or stroke); venous thromboembolism (VTE); mortality; and all-cause hospitalization. They compared these outcomes between injection users and gel users and between patch users and gel users. They didn’t have a nonuser group, but that wasn’t really needed, as risk compared to nonusers was established by the study noted above. The results are shown in the 2 figures below.

[Figures: results for injection vs gel users (left) and patch vs gel users (right)]
Using injectable testosterone was associated with increased risks of stroke, death, MI, and hospitalization compared to testosterone gel (left figure above). Testosterone patches increased only the risk of MI compared to gel (right figure above). You should look at the absolute rates in the paper’s tables, as they are low; what I report above are relative rates, which can be misleading.

The bottom line is that you should have a good reason to replace testosterone, not just a low T level. You should consider the cardiovascular risk of this drug and counsel the patient on it (in addition to the risks of prostate cancer and polycythemia). If you choose to replace T, the gel appears safest, followed by patches.

How to calculate patient-specific estimates of benefit and harm from an RCT

One of the more challenging concepts for students is how to apply information from a study to an individual patient. Students are taught how to calculate a number needed to treat (NNT), but the study NNT often isn’t very useful for the patient currently in front of them. Usually our patients are sicker or healthier than those in the study we are reading. Studies enroll a range of patients, so the effect seen in the results is the average effect across all patients in the study.

Imagine you are seeing Mr. Fick, a 70 yo M with ischemic cardiomyopathy (EF 20%) and refractory anemia (baseline Hg 7-10 g/dl). He reports stable CHF symptoms, with dyspnea after walking about 30 ft around the house, and says his other signs and symptoms of CHF are stable. Medications include lisinopril 20 mg bid, aspirin daily, furosemide 80 mg daily, and iron tablets daily. He is not taking a beta blocker due to bradycardia and can’t take a statin due to myopathy. He has refused an ICD in the past. BP is 95/62 mm Hg, pulse is 50 bpm, and weight is stable at 200 lbs. Labs done one week earlier show a stable Na of 125 mmol/l, K 3.8 mmol/l, Hg 8 g/dl, platelets 162K, a normal WBC with 22% lymphs on the differential, cholesterol 220 mg/dl, and uric acid 6.2. Since he has severe CHF you are considering adding spironolactone to his regimen. He is concerned because he has a hard time tolerating medications, and he wants to know how much it will help him. What do you tell him?

The figure below is from the RALES trial, a study of spironolactone in patients with advanced CHF. Use it to work out Mr. Fick’s individual estimated risk of death if he agrees to take spironolactone.

[Figure: RALES trial survival results]

There are 4 methods I will demonstrate for calculating a patient-specific estimate of effect from an RCT. First, think about what information you will need to estimate Mr. Fick’s specific benefit from spironolactone: the NNT from the RALES trial and Mr. Fick’s estimated risk of death (we call this the PEER, or patient expected event rate). Where do we get Mr. Fick’s PEER for death? From a validated prediction rule. I use Calculate by QxMD: look in the Cardiology folder under heart failure and open the Seattle Heart Failure Model. Plug in Mr. Fick’s data and you get his 1-year expected risk of death (56%).

Method 1: Calculate a patient-specific NNT using the PEER. The formula is 1 / (PEER x RRR), where RRR is the relative risk reduction from the RALES trial (30%; to calculate it, RRR = 1 - RR). Plugging that in, Mr. Fick’s NNT is 1 / (0.56 x 0.3) ≈ 6 (the NNT in the RALES trial itself was 9).

Method 2: Estimate a patient-specific NNT using f, what I call the fudge factor. It is your guesstimate of how much higher or lower Mr. Fick’s risk of death is than that of the average patient in the study. If you think he is 2 times more likely to die, then f is 2; if you think he is half as likely, then f is 0.5. To use f, divide the study NNT by f; this gives an estimate of Mr. Fick’s NNT. So let’s say Mr. Fick is twice as likely to die as those in the study. The study NNT is 9, so 9/2 = 4.5, which I would round up to 5.

NNTs are nice, but it’s hard to use them directly with a patient. The next 2 calculations are more useful for patients.

Method 3: Use the RR to calculate Mr. Fick’s actual risk of death. The RR of death in the RALES trial is 0.70. Multiply this by his estimated risk of death and you get his expected risk if he takes spironolactone instead of nothing. His baseline risk of death is 56%, so 0.70 x 0.56 = 39%. If Mr. Fick takes spironolactone, I expect his risk of death to fall from 56% to 39%. That’s useful information to give the patient.

Method 4: Use the RRR to calculate Mr. Fick’s actual risk of death. This is similar to Method 3, except you have to remember that the RRR (relative risk reduction) is relative. First calculate how much risk the treatment removes: the RRR is 30% (RRR = 1 - RR), and multiplying this by the patient’s baseline risk gives 0.30 x 0.56 = 0.168. This 16.8% is the absolute risk removed from the baseline risk. Subtract it from the baseline risk and you get his final risk: 0.56 - 0.168 = 0.39, or 39%. This is the same number as Method 3, and it has to be, because it’s just a different way of calculating the exact same thing. The sketch below lays out all four methods in code.
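For those who like to see the arithmetic in one place, here is a minimal Python sketch of all four methods using the numbers above. The inputs (RALES RR 0.70, study NNT 9, PEER 0.56) come from the text; the variable and function names are mine, not from any library:

```python
# Patient-specific estimates from an RCT: a minimal sketch.
# Inputs come from the discussion above; names are illustrative only.

PEER = 0.56      # Mr. Fick's 1-yr risk of death (Seattle Heart Failure Model)
RR = 0.70        # relative risk of death with spironolactone (RALES)
STUDY_NNT = 9    # NNT reported in RALES

RRR = 1 - RR     # relative risk reduction = 0.30

# Method 1: patient-specific NNT from the PEER
nnt_method1 = 1 / (PEER * RRR)                 # ~6

# Method 2: patient-specific NNT using the "fudge factor" f
f = 2                                          # guess: twice the average study risk
nnt_method2 = STUDY_NNT / f                    # 4.5, round up to 5

# Method 3: treated risk via the RR
treated_risk_m3 = RR * PEER                    # ~0.39

# Method 4: treated risk via the RRR (must equal Method 3)
treated_risk_m4 = PEER - (RRR * PEER)          # ~0.39

print(f"Method 1 NNT: {nnt_method1:.1f}")
print(f"Method 2 NNT: {nnt_method2:.1f}")
print(f"Risk on treatment (Method 3): {treated_risk_m3:.0%}")
print(f"Risk on treatment (Method 4): {treated_risk_m4:.0%}")
```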

I hope this is useful, and that you can now give patients some real numbers instead of just saying “your risk is decreased by X%.”

Remember, you need the patient’s risk of the event without treatment (usually from a prediction rule, or perhaps the placebo event rate of the study or of a subgroup) and the event rates from the study. You can make all of the calculations from there.

The devil is in the details: overstating the effects of corticosteroids in patients with pneumonia

This blog post will tie in nicely with what I blogged on earlier today about composite endpoints. Read that post first before reading this.

Today I received my e-table of contents from JAMA and read a study on the Effect of Corticosteroids on Treatment Failure Among Hospitalized Patients With Severe Community-Acquired Pneumonia and High Inflammatory Response. The primary outcome of the study was “treatment failure (composite outcome of early treatment failure defined as [1] clinical deterioration indicated by development of shock, [2] need for invasive mechanical ventilation not present at baseline, or [3] death within 72 hours of treatment; or composite outcome of late treatment failure defined as [1] radiographic progression, [2] persistence of severe respiratory failure, [3] development of shock, [4] need for invasive mechanical ventilation not present at baseline, or [5] death between 72 hours and 120 hours after treatment initiation; or both early and late treatment failure).”

The authors make a bold statement:

“The results demonstrated that the acute administration of methylprednisolone was associated with less treatment failure…”

I find this statement (the first sentence of the discussion section) to be a vast overstatement of what they in fact found in this study. Examine the table below (I trimmed out the per-protocol analysis results) and see just what was actually reduced by steroids.

From JAMA 2015;313(7):677-686

Steroids had no effect on “early treatment failure.” They significantly reduced “late treatment failure,” but this was driven entirely by one component: radiographic progression was the only thing steroids reduced. They didn’t help any other outcome in this large composite, yet the authors make the sweeping statement that steroids were associated with less treatment failure. This demonstrates the importance of looking at the individual components of a composite and not just focusing on the overall composite result.

It also demonstrates why I don’t like to read the discussion section of a paper or the conclusions of an abstract: you will be misled. The reviewers and editors should have toned down these conclusions, as they are a gross overstatement of what was actually found.

Publication Bias is Common in High Impact Journal Systematic Reviews

A very interesting study was published earlier this month in the Journal of Clinical Epidemiology assessing the reporting of publication bias in systematic reviews published in high impact factor journals. Publication bias refers to the phenomenon that statistically significant positive results are more likely to be published than negative results; they also tend to be published more quickly and in more prominent journals. The issue is an important one because the goal of a systematic review is to systematically search for and find all studies on a topic (both published and unpublished) so that an unbiased estimate of effect can be determined from all of them (both positive and negative). If only positive studies, or a preponderance of positive studies, are published and only these are included in the review, then a biased estimate of effect will result.

Onishi and Furukawa’s study is the first to examine the frequency of significant publication bias in systematic reviews published in high impact factor general medical journals. They identified 116 systematic reviews published in the top 10 general medical journals in 2011 and 2012: NEJM, Lancet, JAMA, Annals of Internal Medicine, PLOS Medicine, BMJ, Archives of Internal Medicine, CMAJ, BMC Medicine, and Mayo Clinic Proceedings. For each systematic review that did not report an assessment of publication bias, they assessed it themselves using the Egger test of funnel plot asymmetry and contour-enhanced funnel plots (a sketch of the Egger test appears below the figure). Results: the included systematic reviews were of moderate quality, as shown in the graph below. About a third of the “systematic reviews” didn’t even perform a comprehensive literature search, while 20% didn’t assess study quality. Finally, 31% didn’t assess for publication bias. How can you call your review a systematic review when you don’t perform a comprehensive literature search and don’t determine whether you missed studies?

Quality of included reviews

From J Clin Epi 2014;67:1320
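As an aside for the quantitatively inclined, the Egger test mentioned above is easy to run yourself: regress each study’s standardized effect (effect divided by its standard error) against its precision (1 divided by the standard error); an intercept that differs significantly from zero suggests funnel plot asymmetry. A minimal Python sketch, with made-up effect sizes and standard errors purely for illustration:

```python
# Egger's regression test for funnel plot asymmetry: a minimal sketch.
# The effect sizes and standard errors below are made up for illustration.
import numpy as np
import statsmodels.api as sm

effects = np.array([0.40, 0.35, 0.55, 0.20, 0.60, 0.15])  # e.g., log odds ratios
ses = np.array([0.10, 0.15, 0.25, 0.12, 0.30, 0.08])      # standard errors

standardized = effects / ses   # each study's effect in standard-error units
precision = 1 / ses

# Regress standardized effect on precision; an intercept far from zero
# suggests small-study effects (possible publication bias).
X = sm.add_constant(precision)
fit = sm.OLS(standardized, X).fit()

intercept, intercept_p = fit.params[0], fit.pvalues[0]
print(f"Egger intercept: {intercept:.2f} (p = {intercept_p:.3f})")
```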

Of the 36 reviews that did not report an assessment of publication bias, 7 (19.4%) had significant publication bias. Said another way, if a systematic review didn’t report an assessment of publication bias, there was about a 20% chance that publication bias was present. The authors then assessed the impact of publication bias on the pooled results and found that the estimated pooled effect was OVERESTIMATED by a median of 50.9% because of publication bias. This makes sense: if mostly positive studies are published and negative studies aren’t, you would expect the estimates to be overly optimistic.

The figure below reports the results for individual journals. JAMA had significant publication bias in 50% of the reviews that didn’t assess publication bias, while the Annals had 25% and BMJ 10%. It is concerning that these high impact journals publish “systematic reviews” that are of moderate quality, a significant number of which don’t report any assessment of publication bias.

Results by journal

From J Clin Epi 2014;67:1320

Bottom Line: Always critically appraise systematic reviews, even those published in high impact journals. Don’t trust that an editor, even of a prestigious journal, did their job… they likely didn’t.

Should we stop treating mild (stage 1) hypertension?

I am preparing for a talk on the controversy surrounding JNC-8 and came across a post on KevinMD.com by an author of a Cochrane systematic review that aimed to quantify the effects of antihypertensive drug therapy on mortality and morbidity in adults with mild hypertension (systolic BP 140-159 mm Hg and/or diastolic BP 90-99 mm Hg) and without cardiovascular disease. This is an important endeavor, because the majority of people we consider treating for mild hypertension have no underlying cardiovascular disease.

In his KevinMD.com post, David Cundiff, MD, made this statement:

The JNC-8 authors simply ignored a systematic review that I co-authored in the Cochrane Database of Systematic Reviews that found no evidence supporting drug treatment for patients of any age with mild hypertension (SBP: 140-159 and/or DBP 90-99) and no previous cardiovascular disease, diabetes, or renal disease (i.e., low risk).

Let’s see if you agree with his assessment of the findings of his systematic review.

As is typical for a Cochrane review, the methods are impeccable, so we don’t need to critically appraise the review and can go straight to the results. The following images are figures from the review. Examine them, and then I will give my take on the results.

[Figure: Mortality results]

[Figure: Stroke results]

[Figure: Coronary heart disease results]

[Figure: Withdrawals due to adverse effects]

If you just look at the summary point estimates (black diamonds), you would conclude that treatment of mild hypertension in adults without cardiovascular disease has no effect on mortality, stroke, or coronary heart disease but greatly increases withdrawal from the study due to adverse effects. But you are a smarter audience than that. The real crux is in the studies included and in a close examination of the confidence intervals.

Let’s examine stroke closely. Three studies examining the effect of treating mild hypertension on stroke were included, but two of them contributed no stroke events at all; the majority of the data came from a single study. The point estimate was in fact a 49% reduction in stroke, but the confidence interval included 1.0, so the result was not statistically significant. That confidence interval ranged from 0.24 to 1.08: anywhere from a 76% reduction in stroke to an 8% increase. I would argue that a clinically important effect (stroke reduction) is very possible, and had the studies been more highly powered we might have seen a statistically significant reduction as well. To suggest there is no effect on stroke is misleading. The same can be said for mortality.

Finally, what about withdrawals due to adverse effects? Only one study provided any data. It reported an impressive risk ratio of 4.80 (almost a 5-fold increased risk of stopping the drugs due to adverse effects), but the absolute risk increase was only 9% (NNH 11). We are not told what these adverse effects were, so we can’t know whether they were clinically worrisome or just nuisances for patients.

So, I don’t agree with Dr. Cundiff’s assessment that there is no evidence supporting treatment. The evidence is weak, but there is also no strong evidence to say we shouldn’t treat mild hypertension: the confidence intervals include clinically important benefits to patients. More studies are needed but are unlikely to be forthcoming. Observational data support treating this group of patients and may have to be relied upon in making clinical recommendations.

PEITHO Trial Teaches an Important Lesson

The current issue of the New England Journal of Medicine contains an important trial: the PEITHO trial. It’s important because it tells us what not to do.

In the PEITHO trial, patients with intermediate-risk pulmonary embolism (right ventricular dysfunction and myocardial injury without hemodynamic compromise) were randomized to a single weight-based bolus of tenecteplase or placebo. All patients received unfractionated heparin. Patients were followed for 30 days; the primary outcome was death from any cause or hemodynamic decompensation within 7 days after randomization.

This table shows the efficacy outcomes. Looks promising, doesn’t it?

PEITHO efficacy outcomes

The primary outcome was significantly reduced by 56%. This composite outcome is not a good one, though. Patients would not consider death and hemodynamic decompensation equivalent, and the pathophysiology of the two outcomes can be quite different. In a good composite, the intervention should also have a similar effect on all components, yet here the effect on hemodynamic decompensation was much greater than the effect on death. So don’t pay attention to the composite; look at its individual components. Only hemodynamic decompensation was significantly reduced (ARR 3.4%, NNT 30). Don’t get me wrong, this is a good thing to reduce.

But with the good can come some bad. This trial teaches that we must pay attention to adverse effects. The table below shows the safety outcomes of the PEITHO trial. Is the benefit worth the risk?

PEITHO safety outcomes

You can see from the table that major extracranial bleeding was increased 5-fold (ARI 5.1%, NNH 20), as was stroke, most of which was hemorrhagic (ARI 1.8%, NNH 55).

This trial teaches a few important EBM points (I will ignore the clinical points it makes):

  1. You must always weigh the risks and benefits of every intervention.
  2. Ignore relative measures of outcomes (in this case the odds ratios) and calculate the absolute effects, followed by the NNT and NNH; these are much easier to compare (see the sketch below this list).
  3. Watch out for bad composite endpoints. Always look at the individual components of a composite endpoint to see what was actually affected.
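To make point 2 concrete, here is a minimal Python sketch converting the PEITHO absolute risk differences quoted above into NNT and NNH. The inputs are taken straight from the text, so small rounding differences from the published numbers are expected:

```python
# Converting absolute risk differences into NNT/NNH: a minimal sketch.
# Risk differences come from the PEITHO discussion above.

def number_needed(risk_difference: float) -> float:
    """NNT (for a benefit) or NNH (for a harm) = 1 / |absolute risk difference|."""
    return 1 / abs(risk_difference)

arr_decompensation = 0.034   # hemodynamic decompensation reduced by 3.4%
ari_major_bleed = 0.051      # major extracranial bleeding increased by 5.1%
ari_stroke = 0.018           # stroke increased by 1.8%

print(f"NNT to prevent one decompensation: {number_needed(arr_decompensation):.0f}")  # ~30
print(f"NNH for one major extracranial bleed: {number_needed(ari_major_bleed):.0f}")  # ~20
print(f"NNH for one stroke: {number_needed(ari_stroke):.0f}")                         # ~55
```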

It’s a Sham It Doesn’t Work: Arthroscopic Meniscal Repair

The orthopedic surgeons won’t be happy about this at all. The most common orthopedic procedure is a sham… or should I say, no better than a sham surgery. A study published in the New England Journal of Medicine on December 26th should change the current management of meniscal tears.

Sihvonen and colleagues randomized 70 Finnish patients without knee osteoarthritis to arthroscopic partial meniscectomy and 76 patients to a sham operation. These patients had failed at least 3 months of conventional conservative treatment; patients with traumatic onset of symptoms or with osteoarthritis were excluded. The authors did something very interesting that both made the sham operation easier to perform and helped get around its ethics: every patient underwent a diagnostic arthroscopy, and it was during this diagnostic arthroscopy that the patient was randomized to either a standard partial meniscectomy or an elaborate sham. Everything was done the same postoperatively (wound care, rehab instructions, etc.). The patients, those who determined the outcomes, and those who collected and analyzed the data were all blinded to study-group assignment. The main outcome measures were knee pain after exercise (at 2, 6, and 12 months) on a validated scale and a validated meniscus-specific quality-of-life instrument. At 12 months, patients were also asked whether they would undergo the operation again and whether they had figured out which arm of the trial they were in.

Before looking at the results of this study, we need to make sure it is scientifically valid. Therapeutic studies should meet the following criteria:

  1. Were participants randomized? YES
  2. Was the random allocation method concealed? YES, the authors used opaque envelopes
  3. Was intention-to-treat analysis used? YES
  4. Were the groups similar at the start of the study? YES
  5. Was blinding adequate? YES (only the operating room staff weren’t blinded, and they didn’t participate in outcome determination)
  6. Were the groups treated equally apart from the intervention? YES
  7. Was follow-up sufficiently long and complete? YES

So I think this study is at low risk of bias, and I can now move on to the results.

[Figure: outcomes in the meniscectomy and sham-surgery arms]

This figure shows that the 2 treatment arms had the same effect on the validated measures and the knee pain Likert scale. 83% of sham-surgery patients reported improvement in knee pain, compared to 88.6% of meniscectomy patients. Furthermore, 96% and 93% of patients, respectively, reported they would repeat the same procedure they had undergone. 5% of sham-surgery patients underwent additional arthroscopy, compared to 1.4% of meniscectomy patients (p = NS).

While this study was small, it was adequately powered for its outcomes. One could argue about the ethics of a sham operation, but this methodology is the only way to truly determine whether some procedures “work.” Even procedures that don’t really work can have a significant placebo effect, and a sham-controlled study is a powerful way to control for that effect.

It’s important to point out that these results apply only to patients with nontraumatic degenerative meniscal tears. Interestingly, the authors did a post-hoc subgroup analysis showing that patients with sudden onset of symptoms didn’t have any different outcomes with meniscectomy than with sham surgery. There are major limitations to any post-hoc analysis, but this suggests that a similar study now needs to be done in patients with traumatic injuries.

I Am Not Using The New Risk Predictor In The Recently Released Cholesterol Guidelines

Last week the hotly anticipated cholesterol treatment guidelines were released, and they are an improvement over the previous ATP III guidelines. The new guidelines abandon LDL targets, focus on statins rather than add-on therapies that don’t help, and emphasize stroke prevention in addition to heart disease prevention.

The problem with the new guidelines is the new risk prediction tool they developed, which frankly stinks. And the developers knew it stunk but promoted it anyway!

Let’s take a step back and discuss clinical prediction rules (CPRs). CPRs are mathematical models that quantify the individual contributions of elements of the history, physical exam, and basic laboratory tests into a score that aids diagnosis or estimation of prognosis. They can accommodate more factors than the human brain can take into account, and they always give the same result, whereas human judgment is inconsistent (especially among the less clinically experienced). To develop a CPR you 1) construct a list of potential predictors of the outcome of interest, 2) examine a group of patients for the presence of the candidate predictors and their status on the outcome of interest, 3) determine statistically which predictors are powerfully and significantly associated with the outcome, and 4) validate the rule, ideally by applying it prospectively in a new population (with a different spectrum of disease) by a variety of clinicians in a variety of institutions. A sketch of these steps appears below.
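To make those four steps concrete, here is a minimal Python sketch that derives a toy prediction rule with logistic regression and then checks it in a separate cohort. Everything here (the synthetic data, the predictors, the coefficients) is made up purely for illustration:

```python
# Deriving and validating a clinical prediction rule: a minimal sketch
# following the 4 steps above, using synthetic data for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_cohort(n):
    # Steps 1-2: candidate predictors (age, SBP, smoking) and outcome status
    age = rng.normal(60, 10, n)
    sbp = rng.normal(140, 20, n)
    smoker = rng.integers(0, 2, n)
    logit = -12 + 0.10 * age + 0.02 * sbp + 0.8 * smoker
    outcome = rng.random(n) < 1 / (1 + np.exp(-logit))
    return np.column_stack([age, sbp, smoker]), outcome.astype(int)

X_derive, y_derive = make_cohort(2000)   # derivation cohort
X_valid, y_valid = make_cohort(1000)     # separate "external" cohort

# Step 3: determine statistically which predictors matter (fit the model)
model = LogisticRegression().fit(X_derive, y_derive)

# Step 4: validate on the external cohort; the C statistic here is the
# area under the ROC curve for the predicted risks.
risks = model.predict_proba(X_valid)[:, 1]
print(f"External-validation C statistic: {roc_auc_score(y_valid, risks):.2f}")
```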

Back to the new risk tool. The panel decided to develop a new tool because the Framingham score (used in the ATP III guidelines) was insufficient, having been developed in an exclusively white population. How was the new tool developed? It was derived using “community-based cohorts of adults, with adjudicated endpoints for CHD death, nonfatal myocardial infarction, and fatal or nonfatal stroke. Cohorts that included African-American or White participants with at least 12 years of follow-up were included. Data from other race/ethnic groups were insufficient, precluding their inclusion in the final analyses”. The data came from “several large, racially and geographically diverse, modern NHLBI-sponsored cohort studies, including the ARIC study, Cardiovascular Health Study, and the CARDIA study, combined with applicable data from the Framingham Original and Offspring Study cohorts”. I think these were reasonable derivation cohorts to use. How did they validate the tool? Importantly, validation must use external data, because most models work well in the cohort from which they were derived. They used “external cohorts consisting of Whites and African Americans from the Multi-Ethnic Study of Atherosclerosis (MESA) and the REasons for Geographic And Racial Differences in Stroke study (REGARDS). The MESA and REGARDS studies were approached for external validation due to their large size, contemporary nature, and comparability of end points. Both studies have less than 10 years of follow up. Validation using “most contemporary cohort” data also was conducted using ARIC visit 4, Framingham original cohort (cycle 22 or 23), and Framingham offspring cohort (cycles 5 or 6) data”. The validity testing showed C statistics ranging from a low of 0.5564 (African-American men) to a high of 0.8182 (African-American women). The C statistic is a measure of discrimination (differentiating those who have the outcome of interest from those who don’t) and ranges from 0.5 (no discrimination, essentially as good as a coin flip) to 1.0 (perfect discrimination). The authors also found that the tool overpredicted events. See the graph below.

[Figure: overprediction of events by the new risk tool]
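If the C statistic feels abstract, it has a concrete interpretation: pick one person who had the event and one who didn’t; the C statistic is the probability that the model assigned the higher predicted risk to the person with the event. A minimal Python sketch with made-up risks and outcomes:

```python
# The C statistic (concordance) for a binary outcome: a minimal sketch.
# It is the fraction of event/non-event pairs in which the person with
# the event was assigned the higher predicted risk (ties count half).
# Risks and outcomes below are made up for illustration.
from itertools import product

predicted_risk = [0.10, 0.40, 0.55, 0.80, 0.22, 0.65]
had_event =      [0,    1,    0,    1,    0,    1   ]

events = [r for r, e in zip(predicted_risk, had_event) if e == 1]
nonevents = [r for r, e in zip(predicted_risk, had_event) if e == 0]

concordant = sum(
    1.0 if re > rn else 0.5 if re == rn else 0.0
    for re, rn in product(events, nonevents)
)
c_statistic = concordant / (len(events) * len(nonevents))
print(f"C statistic: {c_statistic:.2f}")  # 0.5 = coin flip, 1.0 = perfect
```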

So why don’t I want to use the new prediction tool? Three main reasons:
1) It clearly overpredicts outcomes. This would lead to more people being prescribed statins than likely need them (if you use only the tool to make this decision). One could argue that’s a good thing, as statins are fairly low risk and lots of people die of heart disease, so overtreating might be the way to err.
2) No study of statins used a prediction rule to enroll patients; they were enrolled based on LDL levels or comorbid diseases. Thus I don’t even need the rule to decide whether or not to initiate a statin.
3) Its discrimination is not good (see the C-statistic results). For African-American men it’s no better than a coin flip.