# 8.8: Effect Size


When we reject the null hypothesis, we are stating that the difference we found was statistically significant, but we have mentioned several times that this tells us nothing about practical significance. To get an idea of the actual size of what we found, we can compute a new statistic called an effect size. Effect sizes give us an idea of how large, important, or meaningful a statistically significant effect is. For mean differences like the one we calculated here, our effect size is Cohen’s \(d\):

\[d=\dfrac{M-\mu}{\sigma} \]

This is very similar to our formula for \(z\), but we no longer take the sample size into account (since very large samples can make even trivial differences easy to declare statistically significant). Cohen’s \(d\) is interpreted in units of standard deviations, just like \(z\). For our example:

\[d=\dfrac{7.75-8.00}{0.50}=\dfrac{-0.25}{0.50}=-0.50 \nonumber \]

Cohen’s \(d\) is interpreted by its magnitude as small, moderate, or large (the sign simply indicates the direction of the difference). Specifically, \(|d|\) = 0.20 is small, \(|d|\) = 0.50 is moderate, and \(|d|\) = 0.80 is large. Obviously values can fall in between these guidelines, so we should use our best judgment and the context of the problem to make our final interpretation of size. Our effect size has a magnitude exactly equal to one of these benchmarks, so we say that there was a moderate effect.
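The calculation and interpretation above can be sketched in a few lines of code. This is a minimal illustration using the example values from the text (\(M\) = 7.75, \(\mu\) = 8.00, \(\sigma\) = 0.50); the function names and the exact cutoff handling are our own choices, not part of any standard library.

```python
def cohens_d(sample_mean, pop_mean, pop_sd):
    """Cohen's d for a single sample against a known population mean and SD."""
    return (sample_mean - pop_mean) / pop_sd

def interpret_d(d):
    """Label the magnitude of d using the conventional benchmarks
    (0.20 small, 0.50 moderate, 0.80 large); cutoff placement is a judgment call."""
    size = abs(d)
    if size < 0.20:
        return "negligible"
    elif size < 0.50:
        return "small"
    elif size < 0.80:
        return "moderate"
    else:
        return "large"

d = cohens_d(7.75, 8.00, 0.50)
print(d)                # -0.5
print(interpret_d(d))   # moderate
```

Note that the interpretation is based on the absolute value of \(d\), so a difference of \(-0.50\) and \(+0.50\) standard deviations are both moderate effects; only the direction differs.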

Effect sizes are incredibly useful and provide important information and clarification that overcomes some of the weaknesses of hypothesis testing. Whenever you find a significant result, you should always calculate an effect size.

## Contributors and Attributions

Foster et al. (University of Missouri-St. Louis, Rice University, & University of Houston, Downtown Campus)