Evidence-Based Teaching Principle 3: Modality Principle

The following is a slide I might use to begin teaching about p-values and type I and type II errors. What do you think of it? Will students learn deeply from it? (Would you like to see a larger version of the slide? Click on it.)

Version 1

Or do you think students would learn more deeply from this slide? The words at the bottom of the slide would be spoken by the instructor while the graphic is displayed.

Version 2

Research predicts that version 2 is better and will lead to deeper understanding. But why? What is different between the two?

Version 1 violates the modality principle, which states that people learn more deeply from multimedia lessons when words explaining concurrent graphics are presented as speech rather than as on-screen text. In version 1, the visual channel must simultaneously process the graphic and the printed text, which would likely overload that channel. In contrast, in version 2 the educational message is split across separate cognitive channels: the graphic in the visual channel and the words in the auditory channel.

Some caveats or limitations of this principle:

  1. It’s more important for novice learners.
  2. It’s more important when the material is complex and presented at a rapid pace, as in a lecture. If the learner can control the pace of the material, the modality principle matters less.
  3. It doesn’t apply if only printed words are presented on the screen (without any corresponding graphic).
  4. There are times when words should be presented on screen:
    • the words are technical
    • the words are not in the learner’s native language
    • the words are needed for future reference (e.g., directions to a practice exercise)

What’s the evidence for this? The modality principle is supported by more research than any other multimedia principle. Mayer identified 21 studies published through 2004 and found an average effect size on transfer tests of 0.97 (by convention, effect sizes above 0.8 are considered large and those around 0.5 moderate).
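Effect size here means the standardized mean difference (Cohen’s d): the difference between group means divided by the pooled standard deviation. A minimal sketch of the computation, using made-up transfer-test scores (the data are purely illustrative, not from Mayer’s studies):

```python
import math

def cohens_d(group1, group2):
    """Cohen's d: standardized mean difference using the pooled SD."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # sample variances (n - 1 denominator)
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# hypothetical transfer-test scores: narrated graphics vs. on-screen text
narrated = [85, 90, 78, 92, 88, 81]
on_screen = [70, 75, 68, 80, 72, 74]
print(round(cohens_d(narrated, on_screen), 2))
```

With these invented numbers the difference between groups is large relative to their spread, so d comes out well above the 0.8 "large" threshold.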

What Does Statistically Significant Mean?

Hilda Bastian has written an important and well-written post on this topic in a recent Scientific American blog.

I don’t think I have much to add other than: read her post. It contains some great links for understanding this topic further.

I think we are too focused on p < 0.05. What if the p-value is 0.051? Does that mean we should ignore the finding? Is it really any different from a p-value of 0.0499?
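The arbitrariness of the 0.05 cutoff is easy to demonstrate numerically. A sketch using a two-sided z-test (standard normal statistic, computed with only the standard library): two nearly identical test statistics land on opposite sides of the threshold, so one is declared "significant" and the other not, even though the evidence is essentially the same.

```python
import math

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic."""
    # Phi(z) = 0.5 * (1 + erf(z / sqrt(2))) is the normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# z-statistics chosen to fall just on either side of p = 0.05
for z in (1.96, 1.95):
    p = two_sided_p(z)
    print(f"z = {z}: p = {p:.4f}, 'significant' at 0.05: {p < 0.05}")
```

A change in z of 0.01 flips the verdict, which is exactly the problem with treating 0.05 as a bright line.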


Confidence intervals give information on both statistical significance and clinical significance, but I worry about how they are interpreted as well. (Disclaimer: the interpretation and use of the confidence interval that follows is not statistically correct, but it is how we use them clinically.)

Let’s say a treatment improves a bad outcome with a relative risk (RR) of 0.94 and a 95% CI of 0.66-1.12. The treatment isn’t “statistically significant” (the CI includes 1.0), but there is potential for a clinically significant benefit: the lower bound of the CI suggests a potential 34% reduction in the bad outcome (1 - RR = relative risk reduction, so 1 - 0.66 = 0.34, or 34%). There is also potential for a clinically significant increase in risk of 12% (the upper bound of 1.12).

So which is more important? That somewhat depends on whether you believe in the treatment. If you do, you focus on the potential 34% reduction in outcomes; if you don’t, you focus on the potential 12% increased risk. That’s the problem with confidence intervals, but they still give much more information than p-values do.
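The arithmetic above can be checked in a few lines, using the numbers from the worked example:

```python
# Relative risk and 95% CI from the example in the text
rr, lo, hi = 0.94, 0.66, 1.12

# "Statistically significant" only if the CI excludes 1.0
significant = not (lo <= 1.0 <= hi)

# Relative risk reduction: RRR = 1 - RR, applied at each CI bound
best_case = 1 - lo   # potential reduction in the bad outcome
worst_case = hi - 1  # potential increase in risk

print(significant)          # False
print(round(best_case, 2))  # 0.34
print(round(worst_case, 2)) # 0.12
```

The same point estimate thus spans everything from a 34% reduction to a 12% increase in risk, which is why the bounds matter more than the single "not significant" verdict.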