When interpreting an experimental finding, a natural question arises as to whether the finding could have occurred by chance. Hypothesis testing is a statistical procedure for testing whether chance is a plausible explanation of an experimental finding. Misconceptions about hypothesis testing are common among practitioners as well as students. To help prevent these misconceptions, this chapter goes into more detail about the logic of hypothesis testing than is typical for an introductory-level text.
- 11.2: Significance Testing
- It is conventional to conclude the null hypothesis is false if the probability value is less than 0.05. More conservative researchers conclude the null hypothesis is false only if the probability value is less than 0.01. When a researcher concludes that the null hypothesis is false, the researcher is said to have rejected the null hypothesis. The probability value below which the null hypothesis is rejected is called the α (alpha) level or simply α. It is also called the significance level.
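The decision rule above can be sketched with a small, self-contained example. The numbers here are hypothetical: 15 heads in 20 flips of a coin assumed fair, tested with an exact one-tailed binomial probability and compared against the conventional α = 0.05.

```python
from math import comb

# Hypothetical data: 15 heads in 20 flips; null hypothesis: the coin is fair.
# Exact one-tailed binomial p-value: P(X >= 15 | p = 0.5, n = 20).
n, k = 20, 15
p_value = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

alpha = 0.05
print(round(p_value, 4))  # 0.0207
print(p_value < alpha)    # True -> reject the null hypothesis at alpha = 0.05
print(p_value < 0.01)     # False -> a more conservative researcher would not reject
```

Note that the same result is significant at the 0.05 level but not at the 0.01 level, which is exactly why the α level must be chosen before the test is run.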
- 11.3: Type I and II Errors
- A Type I error occurs when a significance test results in the rejection of a true null hypothesis. A Type II error occurs when a significance test fails to reject a false null hypothesis. Unlike a Type I error, a Type II error is not really an error: when a statistical test is not significant, it means only that the data do not provide strong evidence that the null hypothesis is false. Lack of significance does not support the conclusion that the null hypothesis is true.
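The meaning of the Type I error rate can be illustrated by simulation: if the null hypothesis is true and we test at α = 0.05, we should falsely reject in roughly 5% of experiments. This is a minimal sketch using a two-tailed z-test with known σ on simulated normal data; all numbers (sample size 25, 10,000 trials, the seed) are illustrative choices, not from the text.

```python
import random
from math import erf, sqrt

random.seed(0)

def two_tailed_z_p(sample, mu0=0.0, sigma=1.0):
    """Two-tailed p-value for H0: mu = mu0, sigma known, using the normal CDF."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / sqrt(n))
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))  # standard normal CDF at |z|
    return 2 * (1 - phi)

alpha = 0.05
trials = 10_000
# The null hypothesis is TRUE here: every sample comes from N(0, 1).
rejections = sum(
    two_tailed_z_p([random.gauss(0, 1) for _ in range(25)]) < alpha
    for _ in range(trials)
)
print(rejections / trials)  # close to alpha, i.e. about 0.05
```

Each rejection in this simulation is, by construction, a Type I error, so the observed rejection rate estimates the Type I error rate α.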
- 11.4: One- and Two-Tailed Tests
- A probability calculated in only one tail of the distribution is called a "one-tailed probability," and a probability calculated in both tails of a distribution is called a "two-tailed probability." You should always decide whether you are going to use a one-tailed or a two-tailed probability before looking at the data.
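The one- versus two-tailed distinction can be made concrete with the same hypothetical coin example (15 heads in 20 flips of a coin assumed fair). Because the binomial distribution with p = 0.5 is symmetric, the two-tailed probability here is simply twice the one-tailed probability.

```python
from math import comb

# Hypothetical data: 15 heads in 20 flips; null hypothesis: the coin is fair.
n, k = 20, 15
upper_tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

one_tailed = upper_tail      # chosen in advance: H1 is "biased toward heads"
two_tailed = 2 * upper_tail  # symmetric distribution, so double the one tail
print(round(one_tailed, 4))  # 0.0207
print(round(two_tailed, 4))  # 0.0414
```

Both probabilities lead to rejection at α = 0.05 in this example, but that need not be so: a result can be significant one-tailed yet non-significant two-tailed, which is why the choice must be made before seeing the data.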
- 11.5: Significant Results
- When a probability value is below the α level, the effect is statistically significant and the null hypothesis is rejected. However, not all statistically significant effects should be treated the same way. For example, you should have less confidence that the null hypothesis is false if p = 0.049 than if p = 0.003. Thus, rejecting the null hypothesis is not an all-or-none proposition.
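The point above can be sketched with the two p-values from the text: both are significant at the conventional 0.05 level, but only the smaller one survives the more conservative 0.01 level.

```python
alpha_conventional, alpha_strict = 0.05, 0.01

for p in (0.049, 0.003):
    print(p, p < alpha_conventional, p < alpha_strict)
# 0.049 is significant at the 0.05 level but not at 0.01;
# 0.003 is significant at both levels
```

So the two results get the same "significant" label at α = 0.05, yet they carry quite different strengths of evidence against the null hypothesis.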