# Journal Club - Basic Stats: Answers

1) The authors designed the study to have a “power of more than 80%”. What does this mean?
Power is the probability of the study finding a difference given that one truly exists. So this study was designed with at least an 80% chance of detecting a true difference between the treatment and control groups. This video explains power in a little more depth.
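Power can also be seen by simulation. This is a hypothetical sketch (the sample size, effect size, and test are invented for illustration, not taken from the paper): run many simulated two-arm trials in which a true difference exists, and count how often a simple z-test detects it.

```python
import random
from statistics import NormalDist, mean, stdev

def z_test_p(a, b):
    """Two-sided p-value from a two-sample z-test (large-sample approximation)."""
    se = (stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)) ** 0.5
    z = (mean(a) - mean(b)) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Illustrative numbers only: 63 patients per group gives roughly 80% power
# to detect a true difference of 0.5 standard deviations at alpha = 0.05.
random.seed(1)
n, true_diff, trials = 63, 0.5, 2000
hits = 0
for _ in range(trials):
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    treated = [random.gauss(true_diff, 1.0) for _ in range(n)]
    if z_test_p(control, treated) < 0.05:  # this trial "found the difference"
        hits += 1
power = hits / trials
print(f"empirical power ~ {power:.2f}")
```

Roughly 80% of the simulated trials reject the null, which is exactly what “power of 80%” promises when a real difference exists.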
2) What was the planned type 1 error rate in this study?
Type 1 error is also called the alpha error. They planned on a 5% type 1 error rate. This video explains type 1 error in a little more detail.
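The 5% type 1 error rate has the same simulation-style interpretation. In this hypothetical sketch (sample sizes are invented, not from the paper) there is no true difference between the groups, yet about 5% of trials still come out “significant”; that is the alpha error the authors accepted.

```python
import random
from statistics import NormalDist, mean, stdev

def z_test_p(a, b):
    """Two-sided p-value from a two-sample z-test (large-sample approximation)."""
    se = (stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)) ** 0.5
    z = (mean(a) - mean(b)) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Both groups are drawn from the SAME distribution, so every "significant"
# result below is a false positive (a type 1 error).
random.seed(2)
n, trials = 100, 2000
false_pos = sum(
    z_test_p([random.gauss(0, 1) for _ in range(n)],
             [random.gauss(0, 1) for _ in range(n)]) < 0.05
    for _ in range(trials)
)
rate = false_pos / trials
print(f"false-positive rate ~ {rate:.3f}")
```

The rate hovers near 0.05: with alpha set at 5%, about 1 in 20 truly-null comparisons will look significant by chance.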
3) What is a type 2 error and how is it related to power?
Type 2 error is also called beta error. Power is 1 (or 100%) minus the beta error, so if power is 80% the type 2 error rate is 20%. This video explains type 2 error in more detail.
4) What are the determinants of sample size in this study? How does varying the estimates of these components affect sample size?
Sample size is determined by several factors: power, the type 1 and type 2 error rates, the estimated difference between the study groups, and the variability in the data (though this last one has less of an effect). See this video explaining these factors and their effect on sample size.
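As a concrete (hypothetical) illustration of how these components interact, here is the standard sample-size formula for comparing two means. This is not the authors' actual calculation, which would be based on event rates, but the arithmetic is analogous.

```python
import math
from statistics import NormalDist

def n_per_group(alpha=0.05, power=0.80, sigma=1.0, delta=0.5):
    """n per group = 2 * ((z_{1-alpha/2} + z_{power}) * sigma / delta)^2."""
    z = NormalDist().inv_cdf
    z_a = z(1 - alpha / 2)  # quantile for the type 1 error rate
    z_b = z(power)          # quantile for power (1 - type 2 error)
    return math.ceil(2 * ((z_a + z_b) * sigma / delta) ** 2)

print(n_per_group())             # baseline: ~63 per group
print(n_per_group(power=0.90))   # demanding more power -> larger n
print(n_per_group(delta=0.25))   # smaller expected difference -> much larger n
```

Raising power or shrinking the expected difference both drive the required sample size up, and halving the expected difference roughly quadruples it.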
5) The authors use a variety of statistical tests (chi-square, Fisher’s exact, t-tests, etc) to analyze the data. In general, what do statistical tests do?
Statistical tests take the observed data and calculate a test statistic (e.g. a t statistic for a t-test). The test statistic is then used to determine the p-value associated with the data.
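That two-step recipe (compute a statistic, then ask how extreme it is under chance) can be shown with a toy permutation test. This is not one of the tests used in the paper, and the data are made up for illustration.

```python
import random
from statistics import mean

random.seed(0)
control = [5.1, 4.8, 5.6, 5.0, 4.7, 5.3, 4.9, 5.2]  # made-up data
treated = [5.9, 5.4, 6.1, 5.7, 5.2, 6.0, 5.5, 5.8]

observed = mean(treated) - mean(control)  # step 1: the test statistic
pooled = control + treated
reps = 10_000
more_extreme = 0
for _ in range(reps):
    random.shuffle(pooled)                # break any real grouping
    diff = mean(pooled[8:]) - mean(pooled[:8])
    if abs(diff) >= abs(observed):
        more_extreme += 1
p_value = more_extreme / reps             # step 2: the p-value
print(f"test statistic = {observed:.3f}, p = {p_value:.4f}")
```

The p-value is simply the fraction of chance regroupings that produce a statistic at least as extreme as the one actually observed.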

Review Table 2 and answer the following questions:

1) The primary outcome occurred in 1.92/100 person-yrs in the control group compared to 1.83/100 person-yrs in the intervention group. The p-value associated with this comparison is 0.51. What does this p-value mean? Can p-values be used to detect bias in the study?
The simple interpretation is that the difference is not statistically significant because the p-value is > 0.05. A fuller interpretation is that, if there were truly no difference between the groups, there would be a 51% probability of seeing a difference as large as (or larger than) the one observed by chance alone. P-values cannot detect bias (systematic error) in a study; critical appraisal detects bias.
2) The hazard ratio comparing the intervention group to the control group for the primary outcome is 0.95 with a 95% confidence interval of 0.83-1.09. What does this confidence interval tell you about the effect? Can confidence intervals be used to detect bias in the study?
It tells you two things: 1. the difference is not statistically significant, because the CI includes the point of no difference (1.0), and 2. the true effect could be anywhere from a 17% reduction in cardiovascular events to a 9% increase. Like p-values, confidence intervals cannot detect bias; critical appraisal does. This video explains how to interpret hazard ratios and this video confidence intervals.
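As a sketch of where such an interval comes from: hazard-ratio CIs are built on the log scale. Using the reported numbers (HR 0.95, CI 0.83-1.09) we can back out the implied standard error; this reverse-engineering is illustrative only, not the authors' actual computation.

```python
import math
from statistics import NormalDist

hr, lo, hi = 0.95, 0.83, 1.09                  # values reported in Table 2
z = NormalDist().inv_cdf(0.975)                # ~1.96 for a 95% interval

# Back out the standard error of log(HR) from the interval's width,
# then rebuild the CI as exp(log(HR) +/- z * SE).
se_log_hr = (math.log(hi) - math.log(lo)) / (2 * z)
ci_lo = math.exp(math.log(hr) - z * se_log_hr)
ci_hi = math.exp(math.log(hr) + z * se_log_hr)
print(f"reconstructed CI: {ci_lo:.2f} to {ci_hi:.2f}")
print("statistically significant:", not (ci_lo <= 1.0 <= ci_hi))
```

Because the reconstructed interval straddles 1.0, the test of "is the effect significant?" and the test of "does the CI cross no-difference?" give the same answer.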

Finally, the extra credit: these 4 things can explain study findings: truth, chance, bias, and confounding.

I hope this was somewhat helpful. I will have another journal club next month on another EBM topic.