2.4: Effect Sizes
At the end of this section you should be able to answer the following questions:
- What is an effect size?
- How is an effect size related to a p value?
- How do you determine the magnitude of an effect size?
- What is the difference between statistical significance and practical significance?
Effect size is a term used to describe the strength or magnitude of an effect. This effect is usually expressed as a measure of difference or association. Effect sizes fall into two distinct families. The first type is based on the magnitude of the difference between groups, and is known as the d family of effect sizes. The second type is a measure of association, or the variance accounted for by two or more variables, and is known as the r family of effect sizes. Measures in the r family range from 0 to 1.0, while d-family measures can exceed 1.0, though most values fall below it in practice.
For example, a t-test produces the effect size d, while a correlation coefficient produces the effect size r.
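To make the two families concrete, here is a minimal sketch in plain Python that computes Cohen's d (a d-family measure: mean difference divided by the pooled standard deviation) and Pearson's r (an r-family measure). The sample data are made up for illustration.

```python
from math import sqrt

def cohens_d(group1, group2):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # Sample variances, weighted by degrees of freedom, give the pooled SD.
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical scores for a treatment and a control group.
treatment = [5.1, 4.9, 6.2, 5.8, 5.5]
control = [4.2, 4.8, 4.5, 4.0, 4.6]
print(f"d = {cohens_d(treatment, control):.2f}")
```

In real analyses you would normally use a statistics library rather than hand-rolled functions, but the arithmetic is exactly this simple: a raw difference or covariance, scaled by variability.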
Generally, effect size values such as d or r are transformations of the difference between groups or the association between variables, standardized by (divided by) a measure of variability such as the pooled standard deviation. Therefore, the larger the difference or association relative to that variability, the larger the effect size.
Cohen (1988) suggests small, medium, and large effect sizes for a t-test would respectively be about .2, .5, and .8, while small, medium, and large effect sizes for the correlation coefficient would respectively be about .1, .3, and .5.
When reporting results from the calculation of a test statistic, it is always a good idea to report more than just the p value. It is far better, and more thorough, to include the effect size and the confidence interval (CI) along with the p value.
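As an illustration of reporting a CI alongside an effect size, the sketch below computes an approximate 95% CI for Cohen's d using a common large-sample standard-error formula (a normal approximation; exact methods would use the noncentral t distribution). The d value and group sizes are hypothetical.

```python
from math import sqrt

def d_confidence_interval(d, n1, n2, z=1.96):
    """Approximate 95% CI for Cohen's d via the common
    large-sample standard-error formula (normal approximation)."""
    se = sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d - z * se, d + z * se

# Hypothetical result: d = 0.60 from two groups of 40.
lo, hi = d_confidence_interval(0.6, 40, 40)
print(f"d = 0.60, 95% CI [{lo:.2f}, {hi:.2f}]")
```

A report line would then read something like "t(78) = 2.68, p = .009, d = 0.60, 95% CI [0.15, 1.05]", giving the reader the size of the effect and its uncertainty, not just whether it cleared a significance threshold. (The t and p values in that example sentence are illustrative.)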
As you can no doubt see from the information above, there are a number of effect sizes, each associated with a different kind of statistical test. However, if you find a result that is statistically significant but has a very small effect size, you must ask yourself whether an intervention producing so small an effect would be worthwhile in practice.
It is important to note that just because a researcher finds a statistically significant result, it does not mean the result is sizeable, important, or useful in the real world. Statistical significance is not the only measure of a result. A result should also be practically significant, which means the strength or size of the effect represents a finding that is practically important to others.
Practical significance always involves judgment by researchers and consumers of research, taking into account factors such as the cost and the political acceptability of the interventions tied to the estimated effects.