Many, if not most, experiments are designed to compare means. An experiment may involve a single sample mean that is to be compared to a specific value, or it may test differences among many experimental conditions, with the experimenter interested in comparing each mean with each of the other means. This chapter covers methods of comparing means in many different experimental situations. The topics covered here in sections E, F, I, and J are typically covered in other texts in a chapter on analysis of variance. We prefer to cover them here because they bear no necessary relationship to analysis of variance. As discussed by Wilkinson (1999), it is not logical to treat the procedures in this chapter as tests to be performed only after an analysis of variance, nor is it logical to call them post hoc tests, as some computer programs do.
- 12.2: t Distribution Demo
- This demonstration allows you to compare the t distribution to the standard normal distribution.
- 12.3: Difference between Two Means
- It is much more common for a researcher to be interested in the difference between means than in the specific values of the means themselves. This section covers how to test for differences between means from two separate groups of subjects.
- 12.4: Robustness Simulation
- This demonstration allows you to explore the effects of violating the assumptions of normality and homogeneity of variance.
- 12.6: Specific Comparisons
- This section shows how to test comparisons among means that are more complex than a simple difference between two means. The methods in this section assume that the comparison among means was decided on before looking at the data; such comparisons are therefore called planned comparisons. A different procedure is necessary for unplanned comparisons.
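The comparison in Section 12.2 can be sketched numerically. This is a minimal illustration, not part of the demonstration itself: it evaluates the t density (with 2 degrees of freedom, matching the thumbnail) against the standard normal density using `scipy.stats` to show the heavier tails of the t distribution.

```python
# Compare Student's t distribution with the standard normal (Section 12.2).
# df = 2 is an illustrative choice, matching the thumbnail image.
from scipy.stats import t, norm

df = 2
# At the center, the t density is lower than the normal density...
center_t, center_norm = t.pdf(0, df), norm.pdf(0)
# ...while in the tails it is higher (heavier tails).
tail_t, tail_norm = t.pdf(3, df), norm.pdf(3)

print(f"pdf at 0:  t = {center_t:.4f}   normal = {center_norm:.4f}")
print(f"pdf at 3:  t = {tail_t:.4f}   normal = {tail_norm:.4f}")

# As the degrees of freedom grow, the t distribution converges to the
# standard normal, which is the point of the demonstration.
gap_at_large_df = abs(t.pdf(0, 1000) - norm.pdf(0))
print(f"gap at df = 1000: {gap_at_large_df:.6f}")
```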
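The two-group test described in Section 12.3 can be sketched as follows. The scores are hypothetical, and the pooled-variance (equal population variances) form of the test is assumed; the hand computation is checked against `scipy.stats.ttest_ind`.

```python
# Independent-groups t test on hypothetical data (Section 12.3).
import math
from scipy.stats import ttest_ind

group1 = [5, 7, 8, 6, 9]   # hypothetical treatment scores
group2 = [3, 4, 6, 5, 4]   # hypothetical control scores

# Pooled-variance t statistic computed by hand:
n1, n2 = len(group1), len(group2)
m1 = sum(group1) / n1
m2 = sum(group2) / n2
ss1 = sum((x - m1) ** 2 for x in group1)
ss2 = sum((x - m2) ** 2 for x in group2)
mse = (ss1 + ss2) / (n1 + n2 - 2)       # pooled estimate of the variance
se = math.sqrt(mse / n1 + mse / n2)     # standard error of the difference
t_stat = (m1 - m2) / se

# The same statistic via scipy (equal_var=True gives the pooled form):
t_scipy, p_value = ttest_ind(group1, group2, equal_var=True)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```

The hand computation and the library call agree; the test has n1 + n2 - 2 degrees of freedom.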
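The idea behind the robustness demonstration in Section 12.4 can also be sketched as a small Monte Carlo run. This is a sketch under arbitrary choices (sample sizes, replication count, fixed seed), not the demonstration itself: when both assumptions hold, the estimated Type I error rate of the pooled t test should sit near the nominal level.

```python
# Monte Carlo estimate of the Type I error rate of the pooled t test when
# normality and homogeneity of variance both hold (Section 12.4).
import random
from scipy.stats import ttest_ind

rng = random.Random(0)          # fixed seed so the run is reproducible
alpha, reps, n = 0.05, 2000, 20
rejections = 0
for _ in range(reps):
    a = [rng.gauss(0, 1) for _ in range(n)]   # both groups share the same
    b = [rng.gauss(0, 1) for _ in range(n)]   # population mean and variance
    _, p = ttest_ind(a, b, equal_var=True)
    if p < alpha:
        rejections += 1
rate = rejections / reps
print(f"estimated Type I error rate: {rate:.3f}")  # should be near alpha
```

Changing one group's standard deviation (for example, `rng.gauss(0, 3)`) together with unequal sample sizes shows how the rate drifts away from the nominal level, which is the effect the simulation in Section 12.4 lets you explore.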
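A planned comparison of the kind described in Section 12.6 can be sketched as a linear contrast among group means. The groups and coefficients below are hypothetical (the contrast compares the first condition with the average of the other two); the statistic is L / sqrt(MSE * sum(c_i^2 / n_i)) with N - k error degrees of freedom.

```python
# Planned comparison (linear contrast) among several means (Section 12.6).
import math
from scipy.stats import t as t_dist

groups = [
    [6, 8, 7, 9],    # hypothetical condition A
    [4, 5, 6, 5],    # hypothetical condition B
    [5, 4, 4, 3],    # hypothetical condition C
]
coeffs = [1, -0.5, -0.5]   # contrast coefficients; they must sum to zero

means = [sum(g) / len(g) for g in groups]
ns = [len(g) for g in groups]

# Pooled error term (MSE) from the within-group sums of squares:
ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
df_error = sum(ns) - len(groups)
mse = ss_within / df_error

L = sum(c * m for c, m in zip(coeffs, means))            # value of the contrast
se = math.sqrt(mse * sum(c ** 2 / n for c, n in zip(coeffs, ns)))
t_stat = L / se
p_value = 2 * t_dist.sf(abs(t_stat), df_error)           # two-tailed p
print(f"L = {L:.2f}, t({df_error}) = {t_stat:.3f}, p = {p_value:.5f}")
```

Because the coefficients were chosen before looking at the data, this is a planned comparison; unplanned comparisons require a different procedure, as noted above.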
Thumbnail: Student's t-distribution with 2 degrees of freedom. (CC BY-SA 3.0; IkamusumeFan).