12.8: Post Hoc Tests
A post hoc test is used only after we find a statistically significant result and need to determine where our differences truly came from. The term “post hoc” comes from the Latin for “after the event”. There are many different post hoc tests that have been developed, and most of them will give us similar answers. We will only focus here on the most commonly used ones. We will also only discuss the concepts behind each and will not worry about calculations.
A Bonferroni test is perhaps the simplest post hoc analysis: a series of \(t\)-tests performed on each pair of groups. As we discussed earlier, the number of comparisons grows quickly with the number of groups, which inflates the Type I error rate. To avoid this, a Bonferroni test divides our significance level \(α\) by the number of comparisons we are making, so that across all of the tests our total Type I error rate stays at its original level. Once we have our new significance level, we simply run independent samples \(t\)-tests to look for differences between our pairs of groups. This adjustment is known as a Bonferroni correction. It is easy to apply by hand when we compare obtained \(p\)-values to our corrected \(α\) level, but it is more difficult to use with critical values like we do for our analyses, so we will not pursue the calculations here.
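The correction itself is simple arithmetic, and a short sketch can make it concrete. The function below computes the corrected \(α\) for a given number of groups; the \(p\)-values in the usage example are purely hypothetical numbers chosen for illustration, not results from any real analysis:

```python
def bonferroni_alpha(alpha, n_groups):
    """Corrected significance level: alpha divided by the
    number of pairwise comparisons among n_groups groups."""
    n_comparisons = n_groups * (n_groups - 1) // 2
    return alpha / n_comparisons

# Three groups give 3 pairwise comparisons, so alpha = .05 becomes .05 / 3.
alpha_corrected = bonferroni_alpha(0.05, 3)
print(f"corrected alpha = {alpha_corrected:.4f}")  # corrected alpha = 0.0167

# Hypothetical p-values from three pairwise t-tests (illustrative only):
p_values = {"None vs Relevant": 0.001,
            "None vs Unrelated": 0.030,
            "Relevant vs Unrelated": 0.012}
for pair, p in p_values.items():
    verdict = "significant" if p < alpha_corrected else "not significant"
    print(f"{pair}: p = {p}, {verdict}")
```

Notice that a \(p\)-value of .030 would have been significant at the original \(α\) = .05 but is not significant at the corrected level, which is exactly how the correction protects against inflated Type I error.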
Tukey’s Honest Significant Difference
Tukey’s Honest Significant Difference (HSD) is a very popular post hoc analysis. This analysis, like Bonferroni’s, makes adjustments based on the number of comparisons, but it makes adjustments to the test statistic when running the comparisons of two groups. These comparisons give us an estimate of the difference between the groups and a confidence interval for the estimate. We use this confidence interval in the same way that we use a confidence interval for a regular independent samples \(t\)-test: if it contains 0.00, the groups are not different, but if it does not contain 0.00 then the groups are different.
Below are the differences between the group means and the Tukey’s HSD confidence intervals for the differences:
| Comparison | Difference | Tukey’s HSD CI |
|:---|:---|:---|
| None vs Relevant | 40.60 | (28.87, 52.33) |
| None vs Unrelated | 19.50 | (7.77, 31.23) |
| Relevant vs Unrelated | 21.10 | (9.37, 32.83) |
As we can see, none of these intervals contain 0.00, so we can conclude that all three groups are different from one another.
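The interpretation rule is mechanical enough to express in a few lines of code. This minimal sketch applies the "does the interval contain zero?" check to the Tukey's HSD intervals from the table above:

```python
def contains_zero(ci):
    """A confidence interval (lower, upper) contains zero
    when the lower bound is <= 0 and the upper bound is >= 0."""
    lower, upper = ci
    return lower <= 0.0 <= upper

# Tukey's HSD confidence intervals from the table above.
tukey_cis = {"None vs Relevant": (28.87, 52.33),
             "None vs Unrelated": (7.77, 31.23),
             "Relevant vs Unrelated": (9.37, 32.83)}

for pair, ci in tukey_cis.items():
    verdict = "not different" if contains_zero(ci) else "different"
    print(f"{pair}: {ci} -> groups are {verdict}")
```

Because every lower bound is above zero, each comparison prints "different", matching the conclusion in the text.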
Another common post hoc test is Scheffe’s Test. Like Tukey’s HSD, Scheffe’s test adjusts the test statistic for how many comparisons are made, but it does so in a slightly different way. The result is a test that is “conservative,” which means that it is less likely to commit a Type I Error, but this comes at the cost of less power to detect effects. We can see this by looking at the confidence intervals that Scheffe’s test gives us:
| Comparison | Difference | Scheffe’s CI |
|:---|:---|:---|
| None vs Relevant | 40.60 | (28.35, 52.85) |
| None vs Unrelated | 19.50 | (7.25, 31.75) |
| Relevant vs Unrelated | 21.10 | (8.85, 33.35) |
As we can see, these are slightly wider than the intervals we got from Tukey’s HSD. This means that, all other things being equal, they are more likely to contain zero. In our case, however, the results are the same, and we again conclude that all three groups differ from one another.
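We can verify the "slightly wider" claim directly by computing the width of each interval. This sketch uses the confidence intervals copied from the two tables above:

```python
# Confidence intervals copied from the Tukey's HSD and Scheffe tables above.
tukey = {"None vs Relevant": (28.87, 52.33),
         "None vs Unrelated": (7.77, 31.23),
         "Relevant vs Unrelated": (9.37, 32.83)}
scheffe = {"None vs Relevant": (28.35, 52.85),
           "None vs Unrelated": (7.25, 31.75),
           "Relevant vs Unrelated": (8.85, 33.35)}

def width(ci):
    """Width of a confidence interval: upper bound minus lower bound."""
    return ci[1] - ci[0]

for pair in tukey:
    print(f"{pair}: Tukey width = {width(tukey[pair]):.2f}, "
          f"Scheffe width = {width(scheffe[pair]):.2f}")
```

Each Scheffe interval comes out about one unit wider than the corresponding Tukey interval, which is the extra conservatism, and the reduced power, described above.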
There are many more post hoc tests than just these three, and they all approach the task in different ways, with some being more conservative and others being more powerful. In general, though, they will give highly similar answers. What is important here is to be able to interpret a post hoc analysis. If you are given post hoc analysis confidence intervals, like the ones seen above, read them the same way we read confidence intervals in chapter 10: if they contain zero, there is no difference; if they do not contain zero, there is a difference.
Contributors and Attributions
Foster et al. (University of Missouri-St. Louis, Rice University, & University of Houston, Downtown Campus)