Pearson’s \(r\) is an incredibly flexible and useful statistic. Not only is it both descriptive and inferential, as we saw above, but because it is on a standardized metric (always between -1.00 and 1.00), it can also serve as its own effect size. In general, we use \(r\) = 0.10, \(r\) = 0.30, and \(r\) = 0.50 as our guidelines for small, medium, and large effects. Just like with Cohen’s \(d\), these guidelines are not absolutes, but they do serve as useful indicators in most situations. Notice as well that these are the same guidelines we used earlier to interpret the magnitude of the relation based on the correlation coefficient.
In addition to \(r\) being its own effect size, there is an additional effect size we can calculate for our results. This effect size is \(r^2\), and it is exactly what it looks like – it is the squared value of our correlation coefficient. Just like \(η^2\) in ANOVA, \(r^2\) is interpreted as the amount of variance explained in the outcome variable, and the cut scores are the same as well: 0.01, 0.09, and 0.25 for small, medium, and large, respectively. Notice here that these are the same cutoffs we used for regular \(r\) effect sizes, but squared (\(0.10^2 = 0.01\), \(0.30^2 = 0.09\), \(0.50^2 = 0.25\)) because, again, the \(r^2\) effect size is just the squared correlation, so its interpretation should be, and is, the same. The reason we use \(r^2\) as an effect size is because our ability to explain variance is often important to us.
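As a brief illustration of these two effect sizes, the following sketch computes \(r\) and \(r^2\) for a small set of paired scores and classifies the result against the guidelines above. The data and the helper function `effect_size_label` are hypothetical, invented only for this example; `np.corrcoef` is NumPy's standard correlation function.

```python
import numpy as np

def effect_size_label(r, cutoffs=(0.10, 0.30, 0.50)):
    """Classify |r| against the small/medium/large guidelines (hypothetical helper)."""
    small, medium, large = cutoffs
    a = abs(r)
    if a >= large:
        return "large"
    if a >= medium:
        return "medium"
    if a >= small:
        return "small"
    return "negligible"

# Hypothetical paired scores, made up for illustration
x = np.array([2.0, 4.0, 5.0, 7.0, 8.0, 10.0])
y = np.array([1.5, 3.0, 4.5, 6.5, 7.0, 9.5])

r = np.corrcoef(x, y)[0, 1]   # Pearson's r (descriptive and its own effect size)
r_squared = r ** 2            # proportion of variance in y explained by x

print(f"r = {r:.3f} ({effect_size_label(r)} effect)")
print(f"r^2 = {r_squared:.3f}")
```

Note that squaring \(r\) always yields a value between 0 and 1, which is why \(r^2\) can be read directly as a proportion of explained variance.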
The similarities between \(η^2\) and \(r^2\) in interpretation and magnitude should clue you in to the fact that they are similar analyses, even if they look nothing alike. That is because, behind the scenes, they actually are similar! In the next chapter, we will learn a technique called Linear Regression, which will formally link the two analyses together.
Foster et al. (University of Missouri-St. Louis, Rice University, & University of Houston, Downtown Campus)